WIP Patch for GROUPING SETS phase 1

Started by Atri Sharma, over 11 years ago; 136 messages

#1 Atri Sharma <atri.jiit@gmail.com>
1 attachment(s)

This is phase 1 (of either 2 or 3) of implementation of the standard
GROUPING SETS feature, done by Andrew Gierth and myself.

Unlike previous attempts at this feature, we make no attempt to do
any serious work in the parser; we perform some minor syntactic
simplifications described in the spec, such as removing excess parens,
but the original query structure is preserved in views and so on.

So far, we have done most of the actual work in the executor, but
further phases will concentrate on the planner. We have not yet
tackled the hard problem of generating plans that require multiple
passes over the same input data; see below regarding design issues.

What works so far:

- all the standard syntax is accepted (but many combinations are not
plannable yet)

- while the spec only allows column references in GROUP BY, we
continue to allow arbitrary expressions

- grouping sets that can be computed in a single pass over sorted
data (i.e. anything that can be reduced to simple columns plus one
ROLLUP clause, regardless of how it was specified in the query) are
implemented as part of the existing GroupAggregate executor node

- all kinds of aggregate functions, including ordered set functions
and user-defined aggregates, are supported in conjunction with
grouping sets (no API changes, other than one caveat about fn_extra)

- the GROUPING() operation defined in the spec is implemented,
including support for multiple args, and supports arbitrary
expressions as an extension to the spec
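The single-pass ROLLUP evaluation and the GROUPING() bitmask described above can be sketched outside the executor. The following is an illustrative Python model, not the patch's C code: it aggregates pre-sorted rows for every prefix of the key columns in one pass (the shape any single-pass-reducible grouping set list takes), and tags each output row with the spec's GROUPING() value, one bit per key column, set when that column is not grouped, first column most significant:

```python
def rollup_aggregate(rows, num_keys):
    """rows: (key_tuple, value) pairs pre-sorted on the key columns.

    Grouping set `setno` groups by the first (num_keys - setno) key
    columns; setno == num_keys is the empty grouping set, i.e. the
    grand total.  Returns rows of (key..., sum, grouping_bits).
    """
    num_sets = num_keys + 1
    sums = [0] * num_sets           # one running state per grouping set
    prev = None
    out = []

    def emit_and_reset(setno):
        plen = num_keys - setno     # number of grouped key columns
        # NULL-fill the ungrouped columns; GROUPING() has a 1-bit for
        # each ungrouped column (here, the trailing setno columns)
        out.append(prev[:plen] + (None,) * setno
                   + (sums[setno], (1 << setno) - 1))
        sums[setno] = 0

    for key, val in rows:
        if prev is not None:
            # longest common key prefix with the previous row
            common = 0
            while common < num_keys and key[common] == prev[common]:
                common += 1
            # any set grouping on a longer prefix has just closed
            for setno in range(num_keys - common):
                emit_and_reset(setno)
        for setno in range(num_sets):
            sums[setno] += val
        prev = key

    if prev is not None:            # end of input closes every group
        for setno in range(num_sets):
            emit_and_reset(setno)
    return out
```

For example, three sorted input rows under two key columns produce the six result rows a GROUP BY ROLLUP over those columns would: per-(a,b) subtotals with grouping bits 0, per-(a) subtotals with bits 1, and a grand total with bits 3.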

Changes/incompatibilities:

- the big compatibility issue: CUBE and ROLLUP are now partially
reserved (col_name_keyword), which breaks contrib/cube. A separate
patch for contrib/ is attached that renames the cube type to "cube"; a
new name really needs to be chosen.

- GROUPING is now a fully reserved word, and SETS is an unreserved keyword

- GROUP BY (a,b) now means GROUP BY a,b (as required by spec).
GROUP BY ROW(a,b) still has the old meaning.

- GROUP BY () is now supported too.

- fn_extra for aggregate calls is per-call-site and NOT
per-transition-value - the same fn_extra will be used for interleaved
calls to the transition function with different transition values.
fn_extra, if used at all, should be used only for per-call-site info
such as data types, as clarified in the 9.4beta changes to the ordered
set function implementation.
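The fn_extra contract can be illustrated with a small Python analogy (the names here are made up for illustration; the real API is PostgreSQL's C fmgr interface): one scratch slot per call site, shared by interleaved advances of several transition values, so it may safely hold only call-site facts such as argument types:

```python
class CallSite:
    """Stands in for one aggregate call site; fn_extra is its scratch slot."""
    def __init__(self):
        self.fn_extra = None

def transfn(site, state, value):
    # Correct use of fn_extra: cache per-call-site facts (here, the
    # argument's type), never per-group running state -- that lives in
    # `state`, of which there is one per grouping set.
    if site.fn_extra is None:
        site.fn_extra = type(value)
    assert site.fn_extra is type(value)
    return state + value

site = CallSite()
states = [0, 0]                  # one transition value per grouping set
for v in (1, 2, 3):
    for setno in (0, 1):         # interleaved calls share site.fn_extra
        states[setno] = transfn(site, states[setno], v)
```

A transition function that stashed running state in fn_extra instead of in its transition value would silently mix the grouping sets together under this calling pattern.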

Future work:

We envisage that arbitrary grouping sets will best be handled by
having the planner generate an Append of multiple aggregation paths,
presumably with some way of moving the original input path to a CTE.
We have not yet explored how hard this will be; suggestions are
welcome.

In the executor, it is obviously possible to extend HashAggregate to
handle arbitrary collections of grouping sets, but even if the memory
usage issue were solved, this would leave the question of what to do
with non-hashable data types, so it seems that the planner work
probably can't be avoided.
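For concreteness, the "extend HashAggregate" option can be modeled as follows. This is a hypothetical Python sketch of the idea, not anything in the patch: one hash table per grouping set, each keyed on that set's columns, which makes both the memory cost of many sets and the hashable-types requirement apparent:

```python
from collections import defaultdict

def hash_grouping_sets(rows, sets):
    """rows: dicts of column name -> value; sets: tuples of key columns.

    Maintains one hash table per grouping set, summing the "v" column,
    so memory grows with the number of sets and every key column's
    values must be hashable.
    """
    tables = [defaultdict(int) for _ in sets]
    for row in rows:
        for table, cols in zip(tables, sets):
            table[tuple(row[c] for c in cols)] += row["v"]
    return [dict(t) for t in tables]
```

Unlike the sorted-input approach, this needs no particular input order, but every grouping set's working set is resident at once.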

A new name needs to be found for the "cube" data type.

At this point we are more interested in design review than in
necessarily committing this patch in its current state. However,
committing it may make future work easier; we leave that question
open.

Regards,

Atri

Attachments:

groupingsets_ver1.patch (text/x-diff; charset=US-ASCII)
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 781a736..479ae7e 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -78,6 +78,9 @@ static void show_merge_append_keys(MergeAppendState *mstate, List *ancestors,
 					   ExplainState *es);
 static void show_agg_keys(AggState *astate, List *ancestors,
 			  ExplainState *es);
+static void show_grouping_set_keys(PlanState *planstate, const char *qlabel,
+				int nkeys, AttrNumber *keycols, List *gsets,
+				List *ancestors, ExplainState *es);
 static void show_group_keys(GroupState *gstate, List *ancestors,
 				ExplainState *es);
 static void show_sort_group_keys(PlanState *planstate, const char *qlabel,
@@ -1778,17 +1781,80 @@ show_agg_keys(AggState *astate, List *ancestors,
 {
 	Agg		   *plan = (Agg *) astate->ss.ps.plan;
 
-	if (plan->numCols > 0)
+	if (plan->numCols > 0 || plan->groupingSets)
 	{
 		/* The key columns refer to the tlist of the child plan */
 		ancestors = lcons(astate, ancestors);
-		show_sort_group_keys(outerPlanState(astate), "Group Key",
-							 plan->numCols, plan->grpColIdx,
-							 ancestors, es);
+		if (plan->groupingSets)
+			show_grouping_set_keys(outerPlanState(astate), "Grouping Sets",
+								   plan->numCols, plan->grpColIdx,
+								   plan->groupingSets,
+								   ancestors, es);
+		else
+			show_sort_group_keys(outerPlanState(astate), "Group Key",
+								 plan->numCols, plan->grpColIdx,
+								 ancestors, es);
 		ancestors = list_delete_first(ancestors);
 	}
 }
 
+static void
+show_grouping_set_keys(PlanState *planstate, const char *qlabel,
+					   int nkeys, AttrNumber *keycols, List *gsets,
+					   List *ancestors, ExplainState *es)
+{
+	Plan	   *plan = planstate->plan;
+	List	   *context;
+	List	   *result = NIL;
+	bool		useprefix;
+	char	   *exprstr;
+	StringInfoData buf;
+	ListCell   *lc;
+	ListCell   *lc2;
+
+	if (gsets == NIL)
+		return;
+
+	/* Set up deparsing context */
+	context = deparse_context_for_planstate((Node *) planstate,
+											ancestors,
+											es->rtable,
+											es->rtable_names);
+	useprefix = (list_length(es->rtable) > 1 || es->verbose);
+
+	foreach(lc, gsets)
+	{
+		char *sep = "";
+
+		initStringInfo(&buf);
+		appendStringInfoString(&buf, "(");
+
+		foreach(lc2, (List *) lfirst(lc))
+		{
+			Index		i = lfirst_int(lc2);
+			AttrNumber	keyresno = keycols[i];
+			TargetEntry *target = get_tle_by_resno(plan->targetlist,
+												   keyresno);
+
+			if (!target)
+				elog(ERROR, "no tlist entry for key %d", keyresno);
+			/* Deparse the expression, showing any top-level cast */
+			exprstr = deparse_expression((Node *) target->expr, context,
+										 useprefix, true);
+
+			appendStringInfoString(&buf, sep);
+			appendStringInfoString(&buf, exprstr);
+			sep = ", ";
+		}
+
+		appendStringInfoString(&buf, ")");
+
+		result = lappend(result, buf.data);
+	}
+
+	ExplainPropertyList(qlabel, result, es);
+}
+
 /*
  * Show the grouping keys for a Group node.
  */
diff --git a/src/backend/executor/execQual.c b/src/backend/executor/execQual.c
index 7cfa63f..ed4b241 100644
--- a/src/backend/executor/execQual.c
+++ b/src/backend/executor/execQual.c
@@ -74,6 +74,8 @@ static Datum ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
 				  bool *isNull, ExprDoneCond *isDone);
 static Datum ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
 					  bool *isNull, ExprDoneCond *isDone);
+static Datum ExecEvalScalarGroupedVarFast(ExprState *exprstate, ExprContext *econtext,
+					  bool *isNull, ExprDoneCond *isDone);
 static Datum ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate,
 					ExprContext *econtext,
 					bool *isNull, ExprDoneCond *isDone);
@@ -181,6 +183,8 @@ static Datum ExecEvalArrayCoerceExpr(ArrayCoerceExprState *astate,
 						bool *isNull, ExprDoneCond *isDone);
 static Datum ExecEvalCurrentOfExpr(ExprState *exprstate, ExprContext *econtext,
 					  bool *isNull, ExprDoneCond *isDone);
+static Datum ExecEvalGroupingExpr(GroupingState *gstate, ExprContext *econtext,
+								  bool *isNull, ExprDoneCond *isDone);
 
 
 /* ----------------------------------------------------------------
@@ -568,6 +572,8 @@ ExecEvalWindowFunc(WindowFuncExprState *wfunc, ExprContext *econtext,
  * Note: ExecEvalScalarVar is executed only the first time through in a given
  * plan; it changes the ExprState's function pointer to pass control directly
  * to ExecEvalScalarVarFast after making one-time checks.
+ *
+ * We share this code with GroupedVar for simplicity.
  * ----------------------------------------------------------------
  */
 static Datum
@@ -645,8 +651,24 @@ ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
 		}
 	}
 
-	/* Skip the checking on future executions of node */
-	exprstate->evalfunc = ExecEvalScalarVarFast;
+	if (IsA(variable, GroupedVar))
+	{
+		Assert(variable->varno == OUTER_VAR);
+
+		/* Skip the checking on future executions of node */
+		exprstate->evalfunc = ExecEvalScalarGroupedVarFast;
+
+		if (!bms_is_member(attnum, econtext->grouped_cols))
+		{
+			*isNull = true;
+			return (Datum) 0;
+		}
+	}
+	else
+	{
+		/* Skip the checking on future executions of node */
+		exprstate->evalfunc = ExecEvalScalarVarFast;
+	}
 
 	/* Fetch the value from the slot */
 	return slot_getattr(slot, attnum, isNull);
@@ -694,6 +716,31 @@ ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
 	return slot_getattr(slot, attnum, isNull);
 }
 
+static Datum
+ExecEvalScalarGroupedVarFast(ExprState *exprstate, ExprContext *econtext,
+							 bool *isNull, ExprDoneCond *isDone)
+{
+	GroupedVar *variable = (GroupedVar *) exprstate->expr;
+	TupleTableSlot *slot;
+	AttrNumber	attnum;
+
+	if (isDone)
+		*isDone = ExprSingleResult;
+
+	slot = econtext->ecxt_outertuple;
+
+	attnum = variable->varattno;
+
+	if (!bms_is_member(attnum, econtext->grouped_cols))
+	{
+		*isNull = true;
+		return (Datum) 0;
+	}
+
+	/* Fetch the value from the slot */
+	return slot_getattr(slot, attnum, isNull);
+}
+
 /* ----------------------------------------------------------------
  *		ExecEvalWholeRowVar
  *
@@ -2987,6 +3034,40 @@ ExecEvalCaseTestExpr(ExprState *exprstate,
 	return econtext->caseValue_datum;
 }
 
+/*
+ * ExecEvalGroupingExpr
+ * Return a bitmask with a bit for each column.
+ * A bit is set if the column is not a part of grouping.
+ */
+
+static Datum
+ExecEvalGroupingExpr(GroupingState *gstate,
+					 ExprContext *econtext,
+					 bool *isNull,
+					 ExprDoneCond *isDone)
+{
+	int result = 0;
+	int current_val= 0;
+	ListCell *lc;
+
+	if (isDone)
+		*isDone = ExprSingleResult;
+
+	*isNull = false;
+
+	foreach(lc, (gstate->clauses))
+	{
+		current_val = lfirst_int(lc);
+
+		result = result << 1;
+
+		if (!bms_is_member(current_val, econtext->grouped_cols))
+			result = result | 1;
+	}
+
+	return (Datum) result;
+}
+
 /* ----------------------------------------------------------------
  *		ExecEvalArray - ARRAY[] expressions
  * ----------------------------------------------------------------
@@ -4385,6 +4466,44 @@ ExecInitExpr(Expr *node, PlanState *parent)
 				state->evalfunc = ExecEvalScalarVar;
 			}
 			break;
+		case T_GroupedVar:
+			Assert(((Var *) node)->varattno != InvalidAttrNumber);
+			state = (ExprState *) makeNode(ExprState);
+			state->evalfunc = ExecEvalScalarVar;
+			break;
+		case T_Grouping:
+			{
+				Grouping *grp_node = (Grouping *) node;
+				GroupingState *grp_state = makeNode(GroupingState);
+				List     *result_list = NIL;
+				ListCell *lc;
+				Agg *agg = NULL;
+
+				if (parent != NULL)
+					if (!(IsA((parent->plan), Agg)))
+						elog(ERROR, "Parent is not Agg node");
+
+				agg = (Agg *) (parent->plan);
+
+				if (agg->groupingSets)
+				{
+					foreach(lc, (grp_node->refs))
+					{
+						int current_index = lfirst_int(lc);
+						int result = 0;
+
+						result = agg->grpColIdx[current_index];
+
+						result_list = lappend_int(result_list, result);
+					}
+				}
+
+				grp_state->clauses = result_list;
+
+				state = (ExprState *) grp_state;
+				state->evalfunc = (ExprStateEvalFunc) ExecEvalGroupingExpr;
+			}
+			break;
 		case T_Const:
 			state = (ExprState *) makeNode(ExprState);
 			state->evalfunc = ExecEvalConst;
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index 510d1c5..7b67797 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -243,7 +243,7 @@ typedef struct AggStatePerAggData
 	 * rest.
 	 */
 
-	Tuplesortstate *sortstate;	/* sort object, if DISTINCT or ORDER BY */
+	Tuplesortstate **sortstate;	/* sort object, if DISTINCT or ORDER BY */
 
 	/*
 	 * This field is a pre-initialized FunctionCallInfo struct used for
@@ -304,7 +304,8 @@ typedef struct AggHashEntryData
 
 static void initialize_aggregates(AggState *aggstate,
 					  AggStatePerAgg peragg,
-					  AggStatePerGroup pergroup);
+					  AggStatePerGroup pergroup,
+					  int numReinitialize);
 static void advance_transition_function(AggState *aggstate,
 							AggStatePerAgg peraggstate,
 							AggStatePerGroup pergroupstate);
@@ -338,81 +339,101 @@ static Datum GetAggInitVal(Datum textInitVal, Oid transtype);
 static void
 initialize_aggregates(AggState *aggstate,
 					  AggStatePerAgg peragg,
-					  AggStatePerGroup pergroup)
+					  AggStatePerGroup pergroup,
+					  int numReinitialize)
 {
 	int			aggno;
+	int         numGroupingSets = Max(aggstate->numsets, 1);
+	int         i = 0;
+
+	if (numReinitialize < 1)
+		numReinitialize = numGroupingSets;
 
 	for (aggno = 0; aggno < aggstate->numaggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &peragg[aggno];
-		AggStatePerGroup pergroupstate = &pergroup[aggno];
 
 		/*
 		 * Start a fresh sort operation for each DISTINCT/ORDER BY aggregate.
 		 */
 		if (peraggstate->numSortCols > 0)
 		{
-			/*
-			 * In case of rescan, maybe there could be an uncompleted sort
-			 * operation?  Clean it up if so.
-			 */
-			if (peraggstate->sortstate)
-				tuplesort_end(peraggstate->sortstate);
+			for (i = 0; i < numReinitialize; i++)
+			{
+				/*
+				 * In case of rescan, maybe there could be an uncompleted sort
+				 * operation?  Clean it up if so.
+				 */
+				if (peraggstate->sortstate[i])
+					tuplesort_end(peraggstate->sortstate[i]);
 
-			/*
-			 * We use a plain Datum sorter when there's a single input column;
-			 * otherwise sort the full tuple.  (See comments for
-			 * process_ordered_aggregate_single.)
-			 */
-			peraggstate->sortstate =
-				(peraggstate->numInputs == 1) ?
-				tuplesort_begin_datum(peraggstate->evaldesc->attrs[0]->atttypid,
-									  peraggstate->sortOperators[0],
-									  peraggstate->sortCollations[0],
-									  peraggstate->sortNullsFirst[0],
-									  work_mem, false) :
-				tuplesort_begin_heap(peraggstate->evaldesc,
-									 peraggstate->numSortCols,
-									 peraggstate->sortColIdx,
-									 peraggstate->sortOperators,
-									 peraggstate->sortCollations,
-									 peraggstate->sortNullsFirst,
-									 work_mem, false);
+				/*
+				 * We use a plain Datum sorter when there's a single input column;
+				 * otherwise sort the full tuple.  (See comments for
+				 * process_ordered_aggregate_single.)
+				 */
+				peraggstate->sortstate[i] =
+					(peraggstate->numInputs == 1) ?
+					tuplesort_begin_datum(peraggstate->evaldesc->attrs[0]->atttypid,
+										  peraggstate->sortOperators[0],
+										  peraggstate->sortCollations[0],
+										  peraggstate->sortNullsFirst[0],
+										  work_mem, false) :
+					tuplesort_begin_heap(peraggstate->evaldesc,
+										 peraggstate->numSortCols,
+										 peraggstate->sortColIdx,
+										 peraggstate->sortOperators,
+										 peraggstate->sortCollations,
+										 peraggstate->sortNullsFirst,
+										 work_mem, false);
+			}
 		}
 
-		/*
-		 * (Re)set transValue to the initial value.
-		 *
-		 * Note that when the initial value is pass-by-ref, we must copy it
-		 * (into the aggcontext) since we will pfree the transValue later.
+		/* If ROLLUP is present, we need to iterate over all the groups
+		 * that are present with the current aggstate. If ROLLUP is not
+		 * present, we only have one groupstate associated with the
+		 * current aggstate.
 		 */
-		if (peraggstate->initValueIsNull)
-			pergroupstate->transValue = peraggstate->initValue;
-		else
+
+		for (i = 0; i < numReinitialize; i++)
 		{
-			MemoryContext oldContext;
+			AggStatePerGroup pergroupstate = &pergroup[aggno + (i * (aggstate->numaggs))];
 
-			oldContext = MemoryContextSwitchTo(aggstate->aggcontext);
-			pergroupstate->transValue = datumCopy(peraggstate->initValue,
-												  peraggstate->transtypeByVal,
-												  peraggstate->transtypeLen);
-			MemoryContextSwitchTo(oldContext);
-		}
-		pergroupstate->transValueIsNull = peraggstate->initValueIsNull;
+			/*
+			 * (Re)set transValue to the initial value.
+			 *
+			 * Note that when the initial value is pass-by-ref, we must copy it
+			 * (into the aggcontext) since we will pfree the transValue later.
+			 */
+			if (peraggstate->initValueIsNull)
+				pergroupstate->transValue = peraggstate->initValue;
+			else
+			{
+				MemoryContext oldContext;
 
-		/*
-		 * If the initial value for the transition state doesn't exist in the
-		 * pg_aggregate table then we will let the first non-NULL value
-		 * returned from the outer procNode become the initial value. (This is
-		 * useful for aggregates like max() and min().) The noTransValue flag
-		 * signals that we still need to do this.
-		 */
-		pergroupstate->noTransValue = peraggstate->initValueIsNull;
+				oldContext = MemoryContextSwitchTo(aggstate->aggcontext[i]->ecxt_per_tuple_memory);
+				pergroupstate->transValue = datumCopy(peraggstate->initValue,
+													  peraggstate->transtypeByVal,
+													  peraggstate->transtypeLen);
+				MemoryContextSwitchTo(oldContext);
+			}
+			pergroupstate->transValueIsNull = peraggstate->initValueIsNull;
+
+			/*
+			 * If the initial value for the transition state doesn't exist in the
+			 * pg_aggregate table then we will let the first non-NULL value
+			 * returned from the outer procNode become the initial value. (This is
+			 * useful for aggregates like max() and min().) The noTransValue flag
+			 * signals that we still need to do this.
+			 */
+			pergroupstate->noTransValue = peraggstate->initValueIsNull;
+		}
 	}
 }
 
 /*
- * Given new input value(s), advance the transition function of an aggregate.
+ * Given new input value(s), advance the transition function of one aggregate
+ * within one grouping set only (already set in aggstate->current_set)
  *
  * The new values (and null flags) have been preloaded into argument positions
  * 1 and up in peraggstate->transfn_fcinfo, so that we needn't copy them again
@@ -455,7 +476,7 @@ advance_transition_function(AggState *aggstate,
 			 * We must copy the datum into aggcontext if it is pass-by-ref. We
 			 * do not need to pfree the old transValue, since it's NULL.
 			 */
-			oldContext = MemoryContextSwitchTo(aggstate->aggcontext);
+			oldContext = MemoryContextSwitchTo(aggstate->aggcontext[aggstate->current_set]->ecxt_per_tuple_memory);
 			pergroupstate->transValue = datumCopy(fcinfo->arg[1],
 												  peraggstate->transtypeByVal,
 												  peraggstate->transtypeLen);
@@ -503,7 +524,7 @@ advance_transition_function(AggState *aggstate,
 	{
 		if (!fcinfo->isnull)
 		{
-			MemoryContextSwitchTo(aggstate->aggcontext);
+			MemoryContextSwitchTo(aggstate->aggcontext[aggstate->current_set]->ecxt_per_tuple_memory);
 			newVal = datumCopy(newVal,
 							   peraggstate->transtypeByVal,
 							   peraggstate->transtypeLen);
@@ -530,11 +551,13 @@ static void
 advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 {
 	int			aggno;
+	int         groupno = 0;
+	int         numGroupingSets = Max(aggstate->numsets, 1);
+	int         numAggs = aggstate->numaggs;
 
-	for (aggno = 0; aggno < aggstate->numaggs; aggno++)
+	for (aggno = 0; aggno < numAggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &aggstate->peragg[aggno];
-		AggStatePerGroup pergroupstate = &pergroup[aggno];
 		ExprState  *filter = peraggstate->aggrefstate->aggfilter;
 		int			numTransInputs = peraggstate->numTransInputs;
 		int			i;
@@ -578,13 +601,16 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 					continue;
 			}
 
-			/* OK, put the tuple into the tuplesort object */
-			if (peraggstate->numInputs == 1)
-				tuplesort_putdatum(peraggstate->sortstate,
-								   slot->tts_values[0],
-								   slot->tts_isnull[0]);
-			else
-				tuplesort_puttupleslot(peraggstate->sortstate, slot);
+			for (groupno = 0; groupno < numGroupingSets; groupno++)
+			{
+				/* OK, put the tuple into the tuplesort object */
+				if (peraggstate->numInputs == 1)
+					tuplesort_putdatum(peraggstate->sortstate[groupno],
+									   slot->tts_values[0],
+									   slot->tts_isnull[0]);
+				else
+					tuplesort_puttupleslot(peraggstate->sortstate[groupno], slot);
+			}
 		}
 		else
 		{
@@ -600,7 +626,14 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 				fcinfo->argnull[i + 1] = slot->tts_isnull[i];
 			}
 
-			advance_transition_function(aggstate, peraggstate, pergroupstate);
+			for (groupno = 0; groupno < numGroupingSets; groupno++)
+			{
+				AggStatePerGroup pergroupstate = &pergroup[aggno + (groupno * numAggs)];
+
+				aggstate->current_set = groupno;
+
+				advance_transition_function(aggstate, peraggstate, pergroupstate);
+			}
 		}
 	}
 }
@@ -623,6 +656,9 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
  * is around 300% faster.  (The speedup for by-reference types is less
  * but still noticeable.)
  *
+ * This function handles only one grouping set (already set in
+ * aggstate->current_set).
+ *
  * When called, CurrentMemoryContext should be the per-query context.
  */
 static void
@@ -642,7 +678,7 @@ process_ordered_aggregate_single(AggState *aggstate,
 
 	Assert(peraggstate->numDistinctCols < 2);
 
-	tuplesort_performsort(peraggstate->sortstate);
+	tuplesort_performsort(peraggstate->sortstate[aggstate->current_set]);
 
 	/* Load the column into argument 1 (arg 0 will be transition value) */
 	newVal = fcinfo->arg + 1;
@@ -654,7 +690,7 @@ process_ordered_aggregate_single(AggState *aggstate,
 	 * pfree them when they are no longer needed.
 	 */
 
-	while (tuplesort_getdatum(peraggstate->sortstate, true,
+	while (tuplesort_getdatum(peraggstate->sortstate[aggstate->current_set], true,
 							  newVal, isNull))
 	{
 		/*
@@ -698,8 +734,8 @@ process_ordered_aggregate_single(AggState *aggstate,
 	if (!oldIsNull && !peraggstate->inputtypeByVal)
 		pfree(DatumGetPointer(oldVal));
 
-	tuplesort_end(peraggstate->sortstate);
-	peraggstate->sortstate = NULL;
+	tuplesort_end(peraggstate->sortstate[aggstate->current_set]);
+	peraggstate->sortstate[aggstate->current_set] = NULL;
 }
 
 /*
@@ -709,6 +745,9 @@ process_ordered_aggregate_single(AggState *aggstate,
  * sort, read out the values in sorted order, and run the transition
  * function on each value (applying DISTINCT if appropriate).
  *
+ * This function handles only one grouping set (already set in
+ * aggstate->current_set).
+ *
  * When called, CurrentMemoryContext should be the per-query context.
  */
 static void
@@ -725,13 +764,13 @@ process_ordered_aggregate_multi(AggState *aggstate,
 	bool		haveOldValue = false;
 	int			i;
 
-	tuplesort_performsort(peraggstate->sortstate);
+	tuplesort_performsort(peraggstate->sortstate[aggstate->current_set]);
 
 	ExecClearTuple(slot1);
 	if (slot2)
 		ExecClearTuple(slot2);
 
-	while (tuplesort_gettupleslot(peraggstate->sortstate, true, slot1))
+	while (tuplesort_gettupleslot(peraggstate->sortstate[aggstate->current_set], true, slot1))
 	{
 		/*
 		 * Extract the first numTransInputs columns as datums to pass to the
@@ -779,8 +818,8 @@ process_ordered_aggregate_multi(AggState *aggstate,
 	if (slot2)
 		ExecClearTuple(slot2);
 
-	tuplesort_end(peraggstate->sortstate);
-	peraggstate->sortstate = NULL;
+	tuplesort_end(peraggstate->sortstate[aggstate->current_set]);
+	peraggstate->sortstate[aggstate->current_set] = NULL;
 }
 
 /*
@@ -832,7 +871,7 @@ finalize_aggregate(AggState *aggstate,
 		/* set up aggstate->curperagg for AggGetAggref() */
 		aggstate->curperagg = peraggstate;
 
-		InitFunctionCallInfoData(fcinfo, &(peraggstate->finalfn),
+		InitFunctionCallInfoData(fcinfo, &peraggstate->finalfn,
 								 numFinalArgs,
 								 peraggstate->aggCollation,
 								 (void *) aggstate, NULL);
@@ -916,7 +955,8 @@ find_unaggregated_cols_walker(Node *node, Bitmapset **colnos)
 		*colnos = bms_add_member(*colnos, var->varattno);
 		return false;
 	}
-	if (IsA(node, Aggref))		/* do not descend into aggregate exprs */
+	if (IsA(node, Aggref) || IsA(node, Grouping))
+		/* do not descend into aggregate exprs */
 		return false;
 	return expression_tree_walker(node, find_unaggregated_cols_walker,
 								  (void *) colnos);
@@ -946,7 +986,7 @@ build_hash_table(AggState *aggstate)
 											  aggstate->hashfunctions,
 											  node->numGroups,
 											  entrysize,
-											  aggstate->aggcontext,
+											  aggstate->aggcontext[0]->ecxt_per_tuple_memory,
 											  tmpmem);
 }
 
@@ -1057,7 +1097,7 @@ lookup_hash_entry(AggState *aggstate, TupleTableSlot *inputslot)
 	if (isnew)
 	{
 		/* initialize aggregates for new tuple group */
-		initialize_aggregates(aggstate, aggstate->peragg, entry->pergroup);
+		initialize_aggregates(aggstate, aggstate->peragg, entry->pergroup, 0);
 	}
 
 	return entry;
@@ -1131,7 +1171,13 @@ agg_retrieve_direct(AggState *aggstate)
 	AggStatePerGroup pergroup;
 	TupleTableSlot *outerslot;
 	TupleTableSlot *firstSlot;
-	int			aggno;
+	int			   aggno;
+	bool           hasRollup = aggstate->numsets > 0;
+	int            numGroupingSets = Max(aggstate->numsets, 1);
+	int            currentGroup = 0;
+	int            currentSize = 0;
+	int            numReset = 1;
+	int            i;
 
 	/*
 	 * get state info from node
@@ -1150,131 +1196,233 @@ agg_retrieve_direct(AggState *aggstate)
 	/*
 	 * We loop retrieving groups until we find one matching
 	 * aggstate->ss.ps.qual
+	 *
+	 * For grouping sets, we have the invariant that aggstate->projected_set is
+	 * either -1 (initial call) or the index (starting from 0) in gset_lengths
+	 * for the group we just completed (either by projecting a row or by
+	 * discarding it in the qual).
 	 */
 	while (!aggstate->agg_done)
 	{
 		/*
-		 * If we don't already have the first tuple of the new group, fetch it
-		 * from the outer plan.
-		 */
-		if (aggstate->grp_firstTuple == NULL)
-		{
-			outerslot = ExecProcNode(outerPlan);
-			if (!TupIsNull(outerslot))
-			{
-				/*
-				 * Make a copy of the first input tuple; we will use this for
-				 * comparisons (in group mode) and for projection.
-				 */
-				aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
-			}
-			else
-			{
-				/* outer plan produced no tuples at all */
-				aggstate->agg_done = true;
-				/* If we are grouping, we should produce no tuples too */
-				if (node->aggstrategy != AGG_PLAIN)
-					return NULL;
-			}
-		}
-
-		/*
 		 * Clear the per-output-tuple context for each group, as well as
 		 * aggcontext (which contains any pass-by-ref transvalues of the old
 		 * group).  We also clear any child contexts of the aggcontext; some
 		 * aggregate functions store working state in such contexts.
 		 *
 		 * We use ReScanExprContext not just ResetExprContext because we want
-		 * any registered shutdown callbacks to be called.  That allows
+		 * any registered shutdown callbacks to be called.	That allows
 		 * aggregate functions to ensure they've cleaned up any non-memory
 		 * resources.
 		 */
 		ReScanExprContext(econtext);
 
-		MemoryContextResetAndDeleteChildren(aggstate->aggcontext);
+		if (aggstate->projected_set >= 0 && aggstate->projected_set < numGroupingSets)
+			numReset = aggstate->projected_set + 1;
+		else
+			numReset = numGroupingSets;
 
-		/*
-		 * Initialize working state for a new input tuple group
+		for (i = 0; i < numReset; i++)
+		{
+			ReScanExprContext(aggstate->aggcontext[i]);
+			MemoryContextDeleteChildren(aggstate->aggcontext[i]->ecxt_per_tuple_memory);
+		}
+
+		/* Check if input is complete and there are no more groups to project. */
+		if (aggstate->input_done == true
+			&& aggstate->projected_set >= (numGroupingSets - 1))
+		{
+			aggstate->agg_done = true;
+			break;
+		}
+
+		if (aggstate->projected_set >= 0 && aggstate->projected_set < (numGroupingSets - 1))
+			currentSize = aggstate->gset_lengths[aggstate->projected_set + 1];
+		else
+			currentSize = 0;
+
+		/*-
+		 * If a subgroup for the current grouping set is present, project it.
+		 *
+		 * We have a new group if:
+		 *  - we're out of input but haven't projected all grouping sets
+		 *    (checked above)
+		 * OR
+		 *    - we already projected a row that wasn't from the last grouping
+		 *      set
+		 *    AND
+		 *    - the next grouping set has at least one grouping column (since
+		 *      empty grouping sets project only once input is exhausted)
+		 *    AND
+		 *    - the previous and pending rows differ on the grouping columns
+		 *      of the next grouping set
 		 */
-		initialize_aggregates(aggstate, peragg, pergroup);
+		if (aggstate->input_done
+			|| (node->aggstrategy == AGG_SORTED
+				&& aggstate->projected_set != -1
+				&& aggstate->projected_set < (numGroupingSets - 1)
+				&& currentSize > 0
+				&& !execTuplesMatch(econtext->ecxt_outertuple,
+									tmpcontext->ecxt_outertuple,
+									currentSize,
+									node->grpColIdx,
+									aggstate->eqfunctions,
+									tmpcontext->ecxt_per_tuple_memory)))
+		{
+			++aggstate->projected_set;
 
-		if (aggstate->grp_firstTuple != NULL)
+			Assert(aggstate->projected_set < numGroupingSets);
+			Assert(currentSize > 0 || aggstate->input_done);
+		}
+		else
 		{
 			/*
-			 * Store the copied first input tuple in the tuple table slot
-			 * reserved for it.  The tuple will be deleted when it is cleared
-			 * from the slot.
+			 * we no longer care what group we just projected, the next projection
+			 * will always be the first (or only) grouping set (unless the input
+			 * proves to be empty).
 			 */
-			ExecStoreTuple(aggstate->grp_firstTuple,
-						   firstSlot,
-						   InvalidBuffer,
-						   true);
-			aggstate->grp_firstTuple = NULL;	/* don't keep two pointers */
-
-			/* set up for first advance_aggregates call */
-			tmpcontext->ecxt_outertuple = firstSlot;
+			aggstate->projected_set = 0;
 
 			/*
-			 * Process each outer-plan tuple, and then fetch the next one,
-			 * until we exhaust the outer plan or cross a group boundary.
+			 * If we don't already have the first tuple of the new group, fetch it
+			 * from the outer plan.
 			 */
-			for (;;)
+			if (aggstate->grp_firstTuple == NULL)
 			{
-				advance_aggregates(aggstate, pergroup);
-
-				/* Reset per-input-tuple context after each tuple */
-				ResetExprContext(tmpcontext);
-
 				outerslot = ExecProcNode(outerPlan);
-				if (TupIsNull(outerslot))
+				if (!TupIsNull(outerslot))
 				{
-					/* no more outer-plan tuples available */
-					aggstate->agg_done = true;
-					break;
+					/*
+					 * Make a copy of the first input tuple; we will use this for
+					 * comparisons (in group mode) and for projection.
+					 */
+					aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
 				}
-				/* set up for next advance_aggregates call */
-				tmpcontext->ecxt_outertuple = outerslot;
+				else
+				{
+					/* outer plan produced no tuples at all */
+					if (hasRollup)
+					{
+						/*
+						 * If there was no input at all, we need to project
+						 * rows only if there are grouping sets of size 0.
+						 * Note that this implies that there can't be any
+						 * references to ungrouped Vars, which would otherwise
+						 * cause issues with the empty output slot.
+						 */
+						aggstate->input_done = true;
+
+						while (aggstate->gset_lengths[aggstate->projected_set] > 0)
+						{
+							aggstate->projected_set += 1;
+							if (aggstate->projected_set >= numGroupingSets)
+							{
+								aggstate->agg_done = true;
+								return NULL;
+							}
+						}
+					}
+					else
+					{
+						aggstate->agg_done = true;
+						/* If we are grouping, we should produce no tuples too */
+						if (node->aggstrategy != AGG_PLAIN)
+							return NULL;
+					}
+				}
+			}
+
+			/*
+			 * Initialize working state for a new input tuple group
+			 */
+			initialize_aggregates(aggstate, peragg, pergroup, numReset);
+
+			if (aggstate->grp_firstTuple != NULL)
+			{
+				/*
+				 * Store the copied first input tuple in the tuple table slot
+				 * reserved for it.  The tuple will be deleted when it is cleared
+				 * from the slot.
+				 */
+				ExecStoreTuple(aggstate->grp_firstTuple,
+							   firstSlot,
+							   InvalidBuffer,
+							   true);
+				aggstate->grp_firstTuple = NULL;	/* don't keep two pointers */
+
+				/* set up for first advance_aggregates call */
+				tmpcontext->ecxt_outertuple = firstSlot;
 
 				/*
-				 * If we are grouping, check whether we've crossed a group
-				 * boundary.
+				 * Process each outer-plan tuple, and then fetch the next one,
+				 * until we exhaust the outer plan or cross a group boundary.
 				 */
-				if (node->aggstrategy == AGG_SORTED)
+				for (;;)
 				{
-					if (!execTuplesMatch(firstSlot,
-										 outerslot,
-										 node->numCols, node->grpColIdx,
-										 aggstate->eqfunctions,
-										 tmpcontext->ecxt_per_tuple_memory))
+					advance_aggregates(aggstate, pergroup);
+
+					/* Reset per-input-tuple context after each tuple */
+					ResetExprContext(tmpcontext);
+
+					outerslot = ExecProcNode(outerPlan);
+					if (TupIsNull(outerslot))
 					{
-						/*
-						 * Save the first input tuple of the next group.
-						 */
-						aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
-						break;
+						/* no more outer-plan tuples available */
+						if (hasRollup)
+						{
+							aggstate->input_done = true;
+							break;
+						}
+						else
+						{
+							aggstate->agg_done = true;
+							break;
+						}
+					}
+					/* set up for next advance_aggregates call */
+					tmpcontext->ecxt_outertuple = outerslot;
+
+					/*
+					 * If we are grouping, check whether we've crossed a group
+					 * boundary.
+					 */
+					if (node->aggstrategy == AGG_SORTED)
+					{
+						if (!execTuplesMatch(firstSlot,
+											 outerslot,
+											 node->numCols,
+											 node->grpColIdx,
+											 aggstate->eqfunctions,
+											 tmpcontext->ecxt_per_tuple_memory))
+						{
+							aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
+							break;
+						}
 					}
 				}
 			}
+
+			/*
+			 * Use the representative input tuple for any references to
+			 * non-aggregated input columns in aggregate direct args, the node
+			 * qual, and the tlist.  (If we are not grouping, and there are no
+			 * input rows at all, we will come here with an empty firstSlot ...
+			 * but if not grouping, there can't be any references to
+			 * non-aggregated input columns, so no problem.)
+			 */
+			econtext->ecxt_outertuple = firstSlot;
 		}
 
-		/*
-		 * Use the representative input tuple for any references to
-		 * non-aggregated input columns in aggregate direct args, the node
-		 * qual, and the tlist.  (If we are not grouping, and there are no
-		 * input rows at all, we will come here with an empty firstSlot ...
-		 * but if not grouping, there can't be any references to
-		 * non-aggregated input columns, so no problem.)
-		 */
-		econtext->ecxt_outertuple = firstSlot;
+		Assert(aggstate->projected_set >= 0);
+
+		aggstate->current_set = currentGroup = aggstate->projected_set;
 
-		/*
-		 * Done scanning input tuple group. Finalize each aggregate
-		 * calculation, and stash results in the per-output-tuple context.
-		 */
 		for (aggno = 0; aggno < aggstate->numaggs; aggno++)
 		{
 			AggStatePerAgg peraggstate = &peragg[aggno];
-			AggStatePerGroup pergroupstate = &pergroup[aggno];
+			AggStatePerGroup pergroupstate;
+
+			pergroupstate = &pergroup[aggno + (currentGroup * (aggstate->numaggs))];
 
 			if (peraggstate->numSortCols > 0)
 			{
@@ -1292,6 +1440,9 @@ agg_retrieve_direct(AggState *aggstate)
 							   &aggvalues[aggno], &aggnulls[aggno]);
 		}
 
+		if (hasRollup)
+			econtext->grouped_cols = aggstate->grouped_cols[currentGroup];
+
 		/*
 		 * Check the qual (HAVING clause); if the group does not match, ignore
 		 * it and loop back to try to process another group.
@@ -1306,6 +1457,7 @@ agg_retrieve_direct(AggState *aggstate)
 			ExprDoneCond isDone;
 
 			result = ExecProject(aggstate->ss.ps.ps_ProjInfo, &isDone);
+			slot_getallattrs(result);
 
 			if (isDone != ExprEndResult)
 			{
@@ -1495,6 +1647,10 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	int			numaggs,
 				aggno;
 	ListCell   *l;
+	int			numGroupingSets = 1;
+	int			currentsortno = 0;
+	int			i = 0;
+	int			j = 0;
 
 	/* check for unsupported flags */
 	Assert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK)));
@@ -1508,38 +1664,69 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 
 	aggstate->aggs = NIL;
 	aggstate->numaggs = 0;
+	aggstate->numsets = 0;
 	aggstate->eqfunctions = NULL;
 	aggstate->hashfunctions = NULL;
+	aggstate->projected_set = -1;
+	aggstate->current_set = 0;
 	aggstate->peragg = NULL;
 	aggstate->curperagg = NULL;
 	aggstate->agg_done = false;
+	aggstate->input_done = false;
 	aggstate->pergroup = NULL;
 	aggstate->grp_firstTuple = NULL;
 	aggstate->hashtable = NULL;
 
+	if (node->groupingSets)
+	{
+		Assert(node->aggstrategy != AGG_HASHED);
+
+		numGroupingSets = list_length(node->groupingSets);
+		aggstate->numsets = numGroupingSets;
+		aggstate->gset_lengths = palloc(numGroupingSets * sizeof(int));
+		aggstate->grouped_cols = palloc(numGroupingSets * sizeof(Bitmapset *));
+
+		i = 0;
+		foreach(l, node->groupingSets)
+		{
+			int current_length = list_length(lfirst(l));
+			Bitmapset *cols = NULL;
+
+			/* planner forces this to be correct */
+			for (j = 0; j < current_length; ++j)
+				cols = bms_add_member(cols, node->grpColIdx[j]);
+
+			aggstate->grouped_cols[i] = cols;
+			aggstate->gset_lengths[i] = current_length;
+			++i;
+		}
+	}
+
+	aggstate->aggcontext = (ExprContext **)
+		palloc0(sizeof(ExprContext *) * numGroupingSets);
+
 	/*
-	 * Create expression contexts.  We need two, one for per-input-tuple
-	 * processing and one for per-output-tuple processing.  We cheat a little
-	 * by using ExecAssignExprContext() to build both.
+	 * Create expression contexts.  We need three or more, one for
+	 * per-input-tuple processing, one for per-output-tuple processing, and one
+	 * for each grouping set.  The per-tuple memory context of the
+	 * per-grouping-set ExprContexts replaces the standalone memory context
+	 * formerly used to hold transition values.  We cheat a little by using
+	 * ExecAssignExprContext() to build all of them.
+	 *
+	 * NOTE: the details of what is stored in aggcontext and what is stored in
+	 * the regular per-query memory context are driven by a simple decision: we
+	 * want to reset the aggcontext at group boundaries (if not hashing) and in
+	 * ExecReScanAgg to recover no-longer-wanted space.
 	 */
 	ExecAssignExprContext(estate, &aggstate->ss.ps);
 	aggstate->tmpcontext = aggstate->ss.ps.ps_ExprContext;
-	ExecAssignExprContext(estate, &aggstate->ss.ps);
 
-	/*
-	 * We also need a long-lived memory context for holding hashtable data
-	 * structures and transition values.  NOTE: the details of what is stored
-	 * in aggcontext and what is stored in the regular per-query memory
-	 * context are driven by a simple decision: we want to reset the
-	 * aggcontext at group boundaries (if not hashing) and in ExecReScanAgg to
-	 * recover no-longer-wanted space.
-	 */
-	aggstate->aggcontext =
-		AllocSetContextCreate(CurrentMemoryContext,
-							  "AggContext",
-							  ALLOCSET_DEFAULT_MINSIZE,
-							  ALLOCSET_DEFAULT_INITSIZE,
-							  ALLOCSET_DEFAULT_MAXSIZE);
+	for (i = 0; i < numGroupingSets; ++i)
+	{
+		ExecAssignExprContext(estate, &aggstate->ss.ps);
+		aggstate->aggcontext[i] = aggstate->ss.ps.ps_ExprContext;
+	}
+
+	ExecAssignExprContext(estate, &aggstate->ss.ps);
 
 	/*
 	 * tuple table initialization
@@ -1645,7 +1832,8 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	{
 		AggStatePerGroup pergroup;
 
-		pergroup = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData) * numaggs);
+		pergroup = (AggStatePerGroup)
+			palloc0(sizeof(AggStatePerGroupData) * numaggs * numGroupingSets);
+
 		aggstate->pergroup = pergroup;
 	}
 
@@ -1708,7 +1896,10 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 		/* Begin filling in the peraggstate data */
 		peraggstate->aggrefstate = aggrefstate;
 		peraggstate->aggref = aggref;
-		peraggstate->sortstate = NULL;
+		peraggstate->sortstate = (Tuplesortstate **)
+			palloc0(sizeof(Tuplesortstate *) * numGroupingSets);
+
+		for (currentsortno = 0; currentsortno < numGroupingSets; currentsortno++)
+			peraggstate->sortstate[currentsortno] = NULL;
 
 		/* Fetch the pg_aggregate row */
 		aggTuple = SearchSysCache1(AGGFNOID,
@@ -2016,31 +2207,35 @@ ExecEndAgg(AggState *node)
 {
 	PlanState  *outerPlan;
 	int			aggno;
+	int			numGroupingSets = Max(node->numsets, 1);
+	int			i = 0;
 
 	/* Make sure we have closed any open tuplesorts */
 	for (aggno = 0; aggno < node->numaggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &node->peragg[aggno];
 
-		if (peraggstate->sortstate)
-			tuplesort_end(peraggstate->sortstate);
+		for (i = 0; i < numGroupingSets; i++)
+		{
+			if (peraggstate->sortstate[i])
+				tuplesort_end(peraggstate->sortstate[i]);
+		}
 	}
 
 	/* And ensure any agg shutdown callbacks have been called */
-	ReScanExprContext(node->ss.ps.ps_ExprContext);
+	for (i = 0; i < numGroupingSets; ++i)
+		ReScanExprContext(node->aggcontext[i]);
 
 	/*
-	 * Free both the expr contexts.
+	 * We don't actually free any ExprContexts here (see comment in
+	 * ExecFreeExprContext); just unlinking the output one from the plan node
+	 * suffices.
 	 */
 	ExecFreeExprContext(&node->ss.ps);
-	node->ss.ps.ps_ExprContext = node->tmpcontext;
-	ExecFreeExprContext(&node->ss.ps);
 
 	/* clean up tuple table */
 	ExecClearTuple(node->ss.ss_ScanTupleSlot);
 
-	MemoryContextDelete(node->aggcontext);
-
 	outerPlan = outerPlanState(node);
 	ExecEndNode(outerPlan);
 }
@@ -2049,13 +2244,17 @@ void
 ExecReScanAgg(AggState *node)
 {
 	ExprContext *econtext = node->ss.ps.ps_ExprContext;
+	Agg		   *aggnode = (Agg *) node->ss.ps.plan;
 	int			aggno;
+	int			numGroupingSets = Max(node->numsets, 1);
+	int			groupno;
+	int			i;
 
 	node->agg_done = false;
 
 	node->ss.ps.ps_TupFromTlist = false;
 
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+	if (aggnode->aggstrategy == AGG_HASHED)
 	{
 		/*
 		 * In the hashed case, if we haven't yet built the hash table then we
@@ -2081,14 +2280,35 @@ ExecReScanAgg(AggState *node)
 	/* Make sure we have closed any open tuplesorts */
 	for (aggno = 0; aggno < node->numaggs; aggno++)
 	{
-		AggStatePerAgg peraggstate = &node->peragg[aggno];
+		for (groupno = 0; groupno < numGroupingSets; groupno++)
+		{
+			AggStatePerAgg peraggstate = &node->peragg[aggno];
 
-		if (peraggstate->sortstate)
-			tuplesort_end(peraggstate->sortstate);
-		peraggstate->sortstate = NULL;
+			if (peraggstate->sortstate[groupno])
+			{
+				tuplesort_end(peraggstate->sortstate[groupno]);
+				peraggstate->sortstate[groupno] = NULL;
+			}
+		}
 	}
 
-	/* We don't need to ReScanExprContext here; ExecReScan already did it */
+	/*
+	 * We don't need to ReScanExprContext the output tuple context here;
+	 * ExecReScan already did it. But we do need to reset our per-grouping-set
+	 * contexts, which may have transvalues stored in them.
+	 *
+	 * Note that with AGG_HASHED, the hash table is allocated in a sub-context
+	 * of the aggcontext. We're going to rebuild the hash table from scratch,
+	 * so we need to use MemoryContextDeleteChildren() to avoid leaking the old
+	 * hash table's memory context header. (ReScanExprContext does the actual
+	 * reset, but it doesn't delete child contexts.)
+	 */
+
+	for (i = 0; i < numGroupingSets; ++i)
+	{
+		ReScanExprContext(node->aggcontext[i]);
+		MemoryContextDeleteChildren(node->aggcontext[i]->ecxt_per_tuple_memory);
+	}
 
 	/* Release first tuple of group, if we have made a copy */
 	if (node->grp_firstTuple != NULL)
@@ -2101,16 +2321,7 @@ ExecReScanAgg(AggState *node)
 	MemSet(econtext->ecxt_aggvalues, 0, sizeof(Datum) * node->numaggs);
 	MemSet(econtext->ecxt_aggnulls, 0, sizeof(bool) * node->numaggs);
 
-	/*
-	 * Release all temp storage. Note that with AGG_HASHED, the hash table is
-	 * allocated in a sub-context of the aggcontext. We're going to rebuild
-	 * the hash table from scratch, so we need to use
-	 * MemoryContextResetAndDeleteChildren() to avoid leaking the old hash
-	 * table's memory context header.
-	 */
-	MemoryContextResetAndDeleteChildren(node->aggcontext);
-
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+	if (aggnode->aggstrategy == AGG_HASHED)
 	{
 		/* Rebuild an empty hash table */
 		build_hash_table(node);
@@ -2122,7 +2333,7 @@ ExecReScanAgg(AggState *node)
 		 * Reset the per-group state (in particular, mark transvalues null)
 		 */
 		MemSet(node->pergroup, 0,
-			   sizeof(AggStatePerGroupData) * node->numaggs);
+			   sizeof(AggStatePerGroupData) * node->numaggs * numGroupingSets);
 	}
 
 	/*
@@ -2150,8 +2361,11 @@ ExecReScanAgg(AggState *node)
  * values could conceivably appear in future.)
  *
  * If aggcontext isn't NULL, the function also stores at *aggcontext the
- * identity of the memory context that aggregate transition values are
- * being stored in.
+ * identity of the memory context that aggregate transition values are being
+ * stored in.  Note that the same aggregate call site (flinfo) may be called
+ * interleaved on different transition values in different contexts, so it's
+ * not kosher to cache aggcontext under fn_extra.  It is, however, kosher to
+ * cache it in the transvalue itself (for internal-type transvalues).
  */
 int
 AggCheckCallContext(FunctionCallInfo fcinfo, MemoryContext *aggcontext)
@@ -2159,7 +2373,11 @@ AggCheckCallContext(FunctionCallInfo fcinfo, MemoryContext *aggcontext)
 	if (fcinfo->context && IsA(fcinfo->context, AggState))
 	{
 		if (aggcontext)
-			*aggcontext = ((AggState *) fcinfo->context)->aggcontext;
+		{
+			AggState   *aggstate = (AggState *) fcinfo->context;
+			ExprContext *cxt = aggstate->aggcontext[aggstate->current_set];
+
+			*aggcontext = cxt->ecxt_per_tuple_memory;
+		}
 		return AGG_CONTEXT_AGGREGATE;
 	}
 	if (fcinfo->context && IsA(fcinfo->context, WindowAggState))
@@ -2243,8 +2461,9 @@ AggRegisterCallback(FunctionCallInfo fcinfo,
 	if (fcinfo->context && IsA(fcinfo->context, AggState))
 	{
 		AggState   *aggstate = (AggState *) fcinfo->context;
+		ExprContext *cxt  = aggstate->aggcontext[aggstate->current_set];
 
-		RegisterExprContextCallback(aggstate->ss.ps.ps_ExprContext, func, arg);
+		RegisterExprContextCallback(cxt, func, arg);
 
 		return;
 	}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 3088578..6757763 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -779,6 +779,7 @@ _copyAgg(const Agg *from)
 		COPY_POINTER_FIELD(grpOperators, from->numCols * sizeof(Oid));
 	}
 	COPY_SCALAR_FIELD(numGroups);
+	COPY_NODE_FIELD(groupingSets);
 
 	return newnode;
 }
@@ -1065,6 +1066,58 @@ _copyVar(const Var *from)
 }
 
 /*
+ * _copyGrouping
+ */
+static Grouping *
+_copyGrouping(const Grouping *from)
+{
+	Grouping		   *newnode = makeNode(Grouping);
+
+	COPY_NODE_FIELD(args);
+	COPY_NODE_FIELD(refs);
+	COPY_LOCATION_FIELD(location);
+	COPY_SCALAR_FIELD(agglevelsup);
+
+	return newnode;
+}
+
+/*
+ * _copyGroupedVar
+ */
+static GroupedVar *
+_copyGroupedVar(const GroupedVar *from)
+{
+	GroupedVar		   *newnode = makeNode(GroupedVar);
+
+	COPY_SCALAR_FIELD(varno);
+	COPY_SCALAR_FIELD(varattno);
+	COPY_SCALAR_FIELD(vartype);
+	COPY_SCALAR_FIELD(vartypmod);
+	COPY_SCALAR_FIELD(varcollid);
+	COPY_SCALAR_FIELD(varlevelsup);
+	COPY_SCALAR_FIELD(varnoold);
+	COPY_SCALAR_FIELD(varoattno);
+	COPY_LOCATION_FIELD(location);
+
+	return newnode;
+}
+
+/*
+ * _copyGroupingSet
+ */
+static GroupingSet *
+_copyGroupingSet(const GroupingSet *from)
+{
+	GroupingSet		   *newnode = makeNode(GroupingSet);
+
+	COPY_SCALAR_FIELD(kind);
+	COPY_NODE_FIELD(content);
+	COPY_LOCATION_FIELD(location);
+
+	return newnode;
+}
+
+/*
  * _copyConst
  */
 static Const *
@@ -2495,6 +2548,7 @@ _copyQuery(const Query *from)
 	COPY_NODE_FIELD(withCheckOptions);
 	COPY_NODE_FIELD(returningList);
 	COPY_NODE_FIELD(groupClause);
+	COPY_NODE_FIELD(groupingSets);
 	COPY_NODE_FIELD(havingQual);
 	COPY_NODE_FIELD(windowClause);
 	COPY_NODE_FIELD(distinctClause);
@@ -4079,6 +4133,15 @@ copyObject(const void *from)
 		case T_Var:
 			retval = _copyVar(from);
 			break;
+		case T_GroupedVar:
+			retval = _copyGroupedVar(from);
+			break;
+		case T_Grouping:
+			retval = _copyGrouping(from);
+			break;
+		case T_GroupingSet:
+			retval = _copyGroupingSet(from);
+			break;
 		case T_Const:
 			retval = _copyConst(from);
 			break;
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 1b07db6..59ce09d 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -153,6 +153,52 @@ _equalVar(const Var *a, const Var *b)
 }
 
 static bool
+_equalGrouping(const Grouping *a, const Grouping *b)
+{
+	COMPARE_NODE_FIELD(args);
+
+	/*
+	 * Special-case the refs field: we may be comparing a node whose refs
+	 * list has already been filled in against one where it has not been
+	 * filled in yet.  (But out of sheer paranoia, if both are filled in,
+	 * compare them.)
+	 */
+
+	if (a->refs != NIL && b->refs != NIL)
+		COMPARE_NODE_FIELD(refs);
+
+	COMPARE_LOCATION_FIELD(location);
+	COMPARE_SCALAR_FIELD(agglevelsup);
+
+	return true;
+}
+
+static bool
+_equalGroupedVar(const GroupedVar *a, const GroupedVar *b)
+{
+	COMPARE_SCALAR_FIELD(varno);
+	COMPARE_SCALAR_FIELD(varattno);
+	COMPARE_SCALAR_FIELD(vartype);
+	COMPARE_SCALAR_FIELD(vartypmod);
+	COMPARE_SCALAR_FIELD(varcollid);
+	COMPARE_SCALAR_FIELD(varlevelsup);
+	COMPARE_SCALAR_FIELD(varnoold);
+	COMPARE_SCALAR_FIELD(varoattno);
+	COMPARE_LOCATION_FIELD(location);
+
+	return true;
+}
+
+static bool
+_equalGroupingSet(const GroupingSet *a, const GroupingSet *b)
+{
+	COMPARE_SCALAR_FIELD(kind);
+	COMPARE_NODE_FIELD(content);
+	COMPARE_LOCATION_FIELD(location);
+
+	return true;
+}
+
+static bool
 _equalConst(const Const *a, const Const *b)
 {
 	COMPARE_SCALAR_FIELD(consttype);
@@ -864,6 +910,7 @@ _equalQuery(const Query *a, const Query *b)
 	COMPARE_NODE_FIELD(withCheckOptions);
 	COMPARE_NODE_FIELD(returningList);
 	COMPARE_NODE_FIELD(groupClause);
+	COMPARE_NODE_FIELD(groupingSets);
 	COMPARE_NODE_FIELD(havingQual);
 	COMPARE_NODE_FIELD(windowClause);
 	COMPARE_NODE_FIELD(distinctClause);
@@ -2556,6 +2603,15 @@ equal(const void *a, const void *b)
 		case T_Var:
 			retval = _equalVar(a, b);
 			break;
+		case T_GroupedVar:
+			retval = _equalGroupedVar(a, b);
+			break;
+		case T_Grouping:
+			retval = _equalGrouping(a, b);
+			break;
+		case T_GroupingSet:
+			retval = _equalGroupingSet(a, b);
+			break;
 		case T_Const:
 			retval = _equalConst(a, b);
 			break;
diff --git a/src/backend/nodes/list.c b/src/backend/nodes/list.c
index 5c09d2f..f878d1f 100644
--- a/src/backend/nodes/list.c
+++ b/src/backend/nodes/list.c
@@ -823,6 +823,32 @@ list_intersection(const List *list1, const List *list2)
 }
 
 /*
+ * As list_intersection but operates on lists of integers.
+ */
+List *
+list_intersection_int(const List *list1, const List *list2)
+{
+	List	   *result;
+	const ListCell *cell;
+
+	if (list1 == NIL || list2 == NIL)
+		return NIL;
+
+	Assert(IsIntegerList(list1));
+	Assert(IsIntegerList(list2));
+
+	result = NIL;
+	foreach(cell, list1)
+	{
+		if (list_member_int(list2, lfirst_int(cell)))
+			result = lappend_int(result, lfirst_int(cell));
+	}
+
+	check_list_invariants(result);
+	return result;
+}
+
+/*
  * Return a list that contains all the cells in list1 that are not in
  * list2. The returned list is freshly allocated via palloc(), but the
  * cells themselves point to the same objects as the cells of the
diff --git a/src/backend/nodes/makefuncs.c b/src/backend/nodes/makefuncs.c
index da59c58..e930cef 100644
--- a/src/backend/nodes/makefuncs.c
+++ b/src/backend/nodes/makefuncs.c
@@ -554,3 +554,18 @@ makeFuncCall(List *name, List *args, int location)
 	n->location = location;
 	return n;
 }
+
+/*
+ * makeGroupingSet
+ *	  create a GroupingSet node with the given kind, content and location
+ */
+GroupingSet *
+makeGroupingSet(GroupingSetKind kind, List *content, int location)
+{
+	GroupingSet	   *n = makeNode(GroupingSet);
+
+	n->kind = kind;
+	n->content = content;
+	n->location = location;
+	return n;
+}
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index 41e973b..6a63d1b 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -45,6 +45,12 @@ exprType(const Node *expr)
 		case T_Var:
 			type = ((const Var *) expr)->vartype;
 			break;
+		case T_Grouping:
+			type = INT4OID;
+			break;
+		case T_GroupedVar:
+			type = ((const GroupedVar *) expr)->vartype;
+			break;
 		case T_Const:
 			type = ((const Const *) expr)->consttype;
 			break;
@@ -261,6 +267,10 @@ exprTypmod(const Node *expr)
 	{
 		case T_Var:
 			return ((const Var *) expr)->vartypmod;
+		case T_Grouping:
+			return -1;
+		case T_GroupedVar:
+			return ((const GroupedVar *) expr)->vartypmod;
 		case T_Const:
 			return ((const Const *) expr)->consttypmod;
 		case T_Param:
@@ -734,6 +744,12 @@ exprCollation(const Node *expr)
 		case T_Var:
 			coll = ((const Var *) expr)->varcollid;
 			break;
+		case T_Grouping:
+			coll = InvalidOid;
+			break;
+		case T_GroupedVar:
+			coll = ((const GroupedVar *) expr)->varcollid;
+			break;
 		case T_Const:
 			coll = ((const Const *) expr)->constcollid;
 			break;
@@ -967,6 +983,9 @@ exprSetCollation(Node *expr, Oid collation)
 		case T_Var:
 			((Var *) expr)->varcollid = collation;
 			break;
+		case T_GroupedVar:
+			((GroupedVar *) expr)->varcollid = collation;
+			break;
 		case T_Const:
 			((Const *) expr)->constcollid = collation;
 			break;
@@ -1003,6 +1022,9 @@ exprSetCollation(Node *expr, Oid collation)
 		case T_BoolExpr:
 			Assert(!OidIsValid(collation));		/* result is always boolean */
 			break;
+		case T_Grouping:
+			Assert(!OidIsValid(collation));
+			break;
 		case T_SubLink:
 #ifdef USE_ASSERT_CHECKING
 			{
@@ -1182,6 +1204,15 @@ exprLocation(const Node *expr)
 		case T_Var:
 			loc = ((const Var *) expr)->location;
 			break;
+		case T_Grouping:
+			loc = ((const Grouping *) expr)->location;
+			break;
+		case T_GroupedVar:
+			loc = ((const GroupedVar *) expr)->location;
+			break;
+		case T_GroupingSet:
+			loc = ((const GroupingSet *) expr)->location;
+			break;
 		case T_Const:
 			loc = ((const Const *) expr)->location;
 			break;
@@ -1622,6 +1653,7 @@ expression_tree_walker(Node *node,
 	switch (nodeTag(node))
 	{
 		case T_Var:
+		case T_GroupedVar:
 		case T_Const:
 		case T_Param:
 		case T_CoerceToDomainValue:
@@ -1655,6 +1687,15 @@ expression_tree_walker(Node *node,
 					return true;
 			}
 			break;
+		case T_Grouping:
+			{
+				Grouping   *grouping = (Grouping *) node;
+
+				if (expression_tree_walker((Node *) grouping->args,
+										   walker, context))
+					return true;
+			}
+			break;
 		case T_WindowFunc:
 			{
 				WindowFunc *expr = (WindowFunc *) node;
@@ -2144,6 +2185,15 @@ expression_tree_mutator(Node *node,
 				return (Node *) newnode;
 			}
 			break;
+		case T_GroupedVar:
+			{
+				GroupedVar         *groupedvar = (GroupedVar *) node;
+				GroupedVar		   *newnode;
+
+				FLATCOPY(newnode, groupedvar, GroupedVar);
+				return (Node *) newnode;
+			}
+			break;
 		case T_Const:
 			{
 				Const	   *oldnode = (Const *) node;
@@ -2162,6 +2212,17 @@ expression_tree_mutator(Node *node,
 		case T_RangeTblRef:
 		case T_SortGroupClause:
 			return (Node *) copyObject(node);
+		case T_Grouping:
+			{
+				Grouping	   *grouping = (Grouping *) node;
+				Grouping	   *newnode;
+
+				FLATCOPY(newnode, grouping, Grouping);
+				MUTATE(newnode->args, grouping->args, List *);
+				/* assume no need to copy or mutate the refs list */
+				return (Node *) newnode;
+			}
+			break;
 		case T_WithCheckOption:
 			{
 				WithCheckOption *wco = (WithCheckOption *) node;
@@ -3209,6 +3270,8 @@ raw_expression_tree_walker(Node *node,
 			return walker(((WithClause *) node)->ctes, context);
 		case T_CommonTableExpr:
 			return walker(((CommonTableExpr *) node)->ctequery, context);
+		case T_GroupingSet:
+			return walker(((GroupingSet *) node)->content, context);
 		default:
 			elog(ERROR, "unrecognized node type: %d",
 				 (int) nodeTag(node));
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index e686a6c..64a888e 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -643,6 +643,8 @@ _outAgg(StringInfo str, const Agg *node)
 		appendStringInfo(str, " %u", node->grpOperators[i]);
 
 	WRITE_LONG_FIELD(numGroups);
+
+	WRITE_NODE_FIELD(groupingSets);
 }
 
 static void
@@ -912,6 +914,43 @@ _outVar(StringInfo str, const Var *node)
 }
 
 static void
+_outGrouping(StringInfo str, const Grouping *node)
+{
+	WRITE_NODE_TYPE("GROUPING");
+
+	WRITE_NODE_FIELD(args);
+	WRITE_NODE_FIELD(refs);
+	WRITE_LOCATION_FIELD(location);
+	WRITE_INT_FIELD(agglevelsup);
+}
+
+static void
+_outGroupedVar(StringInfo str, const GroupedVar *node)
+{
+	WRITE_NODE_TYPE("GROUPEDVAR");
+
+	WRITE_UINT_FIELD(varno);
+	WRITE_INT_FIELD(varattno);
+	WRITE_OID_FIELD(vartype);
+	WRITE_INT_FIELD(vartypmod);
+	WRITE_OID_FIELD(varcollid);
+	WRITE_UINT_FIELD(varlevelsup);
+	WRITE_UINT_FIELD(varnoold);
+	WRITE_INT_FIELD(varoattno);
+	WRITE_LOCATION_FIELD(location);
+}
+
+static void
+_outGroupingSet(StringInfo str, const GroupingSet *node)
+{
+	WRITE_NODE_TYPE("GROUPINGSET");
+
+	WRITE_ENUM_FIELD(kind, GroupingSetKind);
+	WRITE_NODE_FIELD(content);
+	WRITE_LOCATION_FIELD(location);
+}
+
+static void
 _outConst(StringInfo str, const Const *node)
 {
 	WRITE_NODE_TYPE("CONST");
@@ -2270,6 +2309,7 @@ _outQuery(StringInfo str, const Query *node)
 	WRITE_NODE_FIELD(withCheckOptions);
 	WRITE_NODE_FIELD(returningList);
 	WRITE_NODE_FIELD(groupClause);
+	WRITE_NODE_FIELD(groupingSets);
 	WRITE_NODE_FIELD(havingQual);
 	WRITE_NODE_FIELD(windowClause);
 	WRITE_NODE_FIELD(distinctClause);
@@ -2914,6 +2954,15 @@ _outNode(StringInfo str, const void *obj)
 			case T_Var:
 				_outVar(str, obj);
 				break;
+			case T_GroupedVar:
+				_outGroupedVar(str, obj);
+				break;
+			case T_Grouping:
+				_outGrouping(str, obj);
+				break;
+			case T_GroupingSet:
+				_outGroupingSet(str, obj);
+				break;
 			case T_Const:
 				_outConst(str, obj);
 				break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 69d9989..3a55154 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -215,6 +215,7 @@ _readQuery(void)
 	READ_NODE_FIELD(withCheckOptions);
 	READ_NODE_FIELD(returningList);
 	READ_NODE_FIELD(groupClause);
+	READ_NODE_FIELD(groupingSets);
 	READ_NODE_FIELD(havingQual);
 	READ_NODE_FIELD(windowClause);
 	READ_NODE_FIELD(distinctClause);
@@ -439,6 +440,52 @@ _readVar(void)
 	READ_DONE();
 }
 
+static Grouping *
+_readGrouping(void)
+{
+	READ_LOCALS(Grouping);
+
+	READ_NODE_FIELD(args);
+	READ_NODE_FIELD(refs);
+	READ_LOCATION_FIELD(location);
+	READ_INT_FIELD(agglevelsup);
+
+	READ_DONE();
+}
+
+/*
+ * _readGroupedVar
+ */
+static GroupedVar *
+_readGroupedVar(void)
+{
+	READ_LOCALS(GroupedVar);
+
+	READ_UINT_FIELD(varno);
+	READ_INT_FIELD(varattno);
+	READ_OID_FIELD(vartype);
+	READ_INT_FIELD(vartypmod);
+	READ_OID_FIELD(varcollid);
+	READ_UINT_FIELD(varlevelsup);
+	READ_UINT_FIELD(varnoold);
+	READ_INT_FIELD(varoattno);
+	READ_LOCATION_FIELD(location);
+
+	READ_DONE();
+}
+
+static GroupingSet *
+_readGroupingSet(void)
+{
+	READ_LOCALS(GroupingSet);
+
+	READ_ENUM_FIELD(kind, GroupingSetKind);
+	READ_NODE_FIELD(content);
+	READ_LOCATION_FIELD(location);
+
+	READ_DONE();
+}
+
 /*
  * _readConst
  */
@@ -1320,6 +1367,12 @@ parseNodeString(void)
 		return_value = _readIntoClause();
 	else if (MATCH("VAR", 3))
 		return_value = _readVar();
+	else if (MATCH("GROUPEDVAR", 10))
+		return_value = _readGroupedVar();
+	else if (MATCH("GROUPING", 8))
+		return_value = _readGrouping();
+	else if (MATCH("GROUPINGSET", 11))
+		return_value = _readGroupingSet();
 	else if (MATCH("CONST", 5))
 		return_value = _readConst();
 	else if (MATCH("PARAM", 5))
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index c81efe9..a16df6f 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -1231,6 +1231,7 @@ set_subquery_pathlist(PlannerInfo *root, RelOptInfo *rel,
 	 */
 	if (parse->hasAggs ||
 		parse->groupClause ||
+		parse->groupingSets ||
 		parse->havingQual ||
 		parse->distinctClause ||
 		parse->sortClause ||
@@ -2104,7 +2105,7 @@ subquery_push_qual(Query *subquery, RangeTblEntry *rte, Index rti, Node *qual)
 		 * subquery uses grouping or aggregation, put it in HAVING (since the
 		 * qual really refers to the group-result rows).
 		 */
-		if (subquery->hasAggs || subquery->groupClause || subquery->havingQual)
+		if (subquery->hasAggs || subquery->groupClause ||
+			subquery->groupingSets || subquery->havingQual)
 			subquery->havingQual = make_and_qual(subquery->havingQual, qual);
 		else
 			subquery->jointree->quals =
diff --git a/src/backend/optimizer/plan/analyzejoins.c b/src/backend/optimizer/plan/analyzejoins.c
index 773f8a4..e8b6671 100644
--- a/src/backend/optimizer/plan/analyzejoins.c
+++ b/src/backend/optimizer/plan/analyzejoins.c
@@ -580,6 +580,7 @@ query_supports_distinctness(Query *query)
 {
 	if (query->distinctClause != NIL ||
 		query->groupClause != NIL ||
+		query->groupingSets != NIL ||
 		query->hasAggs ||
 		query->havingQual ||
 		query->setOperations)
@@ -648,10 +649,10 @@ query_is_distinct_for(Query *query, List *colnos, List *opids)
 	}
 
 	/*
-	 * Similarly, GROUP BY guarantees uniqueness if all the grouped columns
-	 * appear in colnos and operator semantics match.
+	 * Similarly, GROUP BY without GROUPING SETS guarantees uniqueness if all
+	 * the grouped columns appear in colnos and operator semantics match.
 	 */
-	if (query->groupClause)
+	if (query->groupClause && !query->groupingSets)
 	{
 		foreach(l, query->groupClause)
 		{
@@ -667,6 +668,27 @@ query_is_distinct_for(Query *query, List *colnos, List *opids)
 		if (l == NULL)			/* had matches for all? */
 			return true;
 	}
+	else if (query->groupingSets)
+	{
+		/*
+		 * If we have grouping sets with expressions, we probably
+		 * don't have uniqueness and analysis would be hard. Punt.
+		 */
+		if (query->groupClause)
+			return false;
+
+		/*
+		 * If we have no groupClause (therefore no grouping expressions),
+		 * we might have one or many empty grouping sets. If there's just
+		 * one, then we're returning only one row and are certainly unique.
+		 * But otherwise, the result is certainly not unique.
+		 */
+		if (list_length(query->groupingSets) == 1
+			&& ((GroupingSet *)linitial(query->groupingSets))->kind == GROUPING_SET_EMPTY)
+			return true;
+		else
+			return false;
+	}
 	else
 	{
 		/*
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index 4b641a2..1a47f0f 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -1015,6 +1015,7 @@ create_unique_plan(PlannerInfo *root, UniquePath *best_path)
 								 numGroupCols,
 								 groupColIdx,
 								 groupOperators,
+								 NIL,
 								 numGroups,
 								 subplan);
 	}
@@ -4265,6 +4266,7 @@ Agg *
 make_agg(PlannerInfo *root, List *tlist, List *qual,
 		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
 		 int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators,
+		 List *groupingSets,
 		 long numGroups,
 		 Plan *lefttree)
 {
@@ -4294,10 +4296,12 @@ make_agg(PlannerInfo *root, List *tlist, List *qual,
 	 * group otherwise.
 	 */
 	if (aggstrategy == AGG_PLAIN)
-		plan->plan_rows = 1;
+		plan->plan_rows = groupingSets ? list_length(groupingSets) : 1;
 	else
 		plan->plan_rows = numGroups;
 
+	node->groupingSets = groupingSets;
+
 	/*
 	 * We also need to account for the cost of evaluation of the qual (ie, the
 	 * HAVING clause) and the tlist.  Note that cost_qual_eval doesn't charge
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index e1480cd..9b4722d 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -22,6 +22,7 @@
 #include "executor/nodeAgg.h"
 #include "miscadmin.h"
 #include "nodes/makefuncs.h"
+#include "nodes/nodeFuncs.h"
 #ifdef OPTIMIZER_DEBUG
 #include "nodes/print.h"
 #endif
@@ -37,6 +38,7 @@
 #include "optimizer/tlist.h"
 #include "parser/analyze.h"
 #include "parser/parsetree.h"
+#include "parser/parse_agg.h"
 #include "rewrite/rewriteManip.h"
 #include "utils/rel.h"
 #include "utils/selfuncs.h"
@@ -77,7 +79,10 @@ static double preprocess_limit(PlannerInfo *root,
 				 double tuple_fraction,
 				 int64 *offset_est, int64 *count_est);
 static bool limit_needed(Query *parse);
-static void preprocess_groupclause(PlannerInfo *root);
+static void preprocess_groupclause(PlannerInfo *root, List *force);
+static List *extract_rollup_sets(List *groupingSets, List *sortclause, List **remainder);
+static void fixup_grouping_exprs(Node *clause, int *refmap);
+static bool fixup_grouping_exprs_walker(Node *clause, int *refmap);
 static void standard_qp_callback(PlannerInfo *root, void *extra);
 static bool choose_hashed_grouping(PlannerInfo *root,
 					   double tuple_fraction, double limit_tuples,
@@ -531,7 +536,8 @@ subquery_planner(PlannerGlobal *glob, Query *parse,
 
 		if (contain_agg_clause(havingclause) ||
 			contain_volatile_functions(havingclause) ||
-			contain_subplans(havingclause))
+			contain_subplans(havingclause) ||
+			parse->groupingSets)
 		{
 			/* keep it in HAVING */
 			newHaving = lappend(newHaving, havingclause);
@@ -1187,15 +1193,81 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		bool		use_hashed_grouping = false;
 		WindowFuncLists *wflists = NULL;
 		List	   *activeWindows = NIL;
+		int		   *refmap = NULL;
 
 		MemSet(&agg_costs, 0, sizeof(AggClauseCosts));
 
 		/* A recursive query should always have setOperations */
 		Assert(!root->hasRecursion);
 
-		/* Preprocess GROUP BY clause, if any */
-		if (parse->groupClause)
-			preprocess_groupclause(root);
+		/* Preprocess grouping sets, if any */
+		if (parse->groupingSets)
+			parse->groupingSets = expand_grouping_sets(parse->groupingSets);
+
+		elog(DEBUG1, "grouping sets 1: %s", nodeToString(parse->groupingSets));
+
+		if (parse->groupingSets)
+		{
+			ListCell   *lc;
+			ListCell   *lc2;
+			int			maxref = 0;
+			int			ref = 0;
+			List	   *remaining_sets = NIL;
+			List	   *usable_sets = extract_rollup_sets(parse->groupingSets,
+														  parse->sortClause,
+														  &remaining_sets);
+
+			/*
+			 * TODO - if the grouping set list can't be handled as one rollup...
+			 */
+
+			if (remaining_sets != NIL)
+				elog(ERROR, "not implemented yet");
+
+			parse->groupingSets = usable_sets;
+
+			if (parse->groupClause)
+				preprocess_groupclause(root, linitial(parse->groupingSets));
+
+			/*
+			 * Now that we've pinned down an order for the groupClause for this
+			 * list of grouping sets, remap the entries in the grouping sets
+			 * from sortgrouprefs to plain indices into the groupClause.
+			 */
+
+			foreach(lc, parse->groupClause)
+			{
+				SortGroupClause *gc = lfirst(lc);
+				if (gc->tleSortGroupRef > maxref)
+					maxref = gc->tleSortGroupRef;
+			}
+
+			refmap = palloc0(sizeof(int) * (maxref + 1));
+
+			foreach(lc, parse->groupClause)
+			{
+				SortGroupClause *gc = lfirst(lc);
+				refmap[gc->tleSortGroupRef] = ++ref;
+			}
+
+			foreach(lc, usable_sets)
+			{
+				foreach(lc2, (List *) lfirst(lc))
+				{
+					Assert(refmap[lfirst_int(lc2)] > 0);
+					lfirst_int(lc2) = refmap[lfirst_int(lc2)] - 1;
+				}
+			}
+
+			elog(DEBUG1, "grouping sets 2: %s", nodeToString(parse->groupingSets));
+		}
+		else
+		{
+			/* Preprocess GROUP BY clause, if any */
+			if (parse->groupClause)
+				preprocess_groupclause(root, NIL);
+		}
+
 		numGroupCols = list_length(parse->groupClause);
 
 		/* Preprocess targetlist */
@@ -1241,6 +1313,13 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		if (parse->hasAggs)
 		{
 			/*
+			 * Fix up any GROUPING nodes to refer to indexes in the final
+			 * groupClause list.
+			 */
+			fixup_grouping_exprs((Node *) tlist, refmap);
+			fixup_grouping_exprs(parse->havingQual, refmap);
+
+			/*
 			 * Collect statistics about aggregates for estimating costs. Note:
 			 * we do not attempt to detect duplicate aggregates here; a
 			 * somewhat-overestimated cost is okay for our present purposes.
@@ -1257,6 +1336,9 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 			preprocess_minmax_aggregates(root, tlist);
 		}
 
+		if (refmap)
+			pfree(refmap);
+
 		/* Make tuple_fraction accessible to lower-level routines */
 		root->tuple_fraction = tuple_fraction;
 
@@ -1267,6 +1349,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		 * grouping/aggregation operations.
 		 */
 		if (parse->groupClause ||
+			parse->groupingSets ||
 			parse->distinctClause ||
 			parse->hasAggs ||
 			parse->hasWindowFuncs ||
@@ -1312,7 +1395,23 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 
 			groupExprs = get_sortgrouplist_exprs(parse->groupClause,
 												 parse->targetList);
-			dNumGroups = estimate_num_groups(root, groupExprs, path_rows);
+			if (parse->groupingSets)
+			{
+				ListCell   *lc;
+
+				dNumGroups = 0;
+
+				foreach(lc, parse->groupingSets)
+				{
+					dNumGroups += estimate_num_groups(root,
+													  groupExprs,
+													  path_rows,
+													  (List **) &(lfirst(lc)));
+				}
+			}
+			else
+				dNumGroups = estimate_num_groups(root, groupExprs, path_rows,
+												 NULL);
 
 			/*
 			 * In GROUP BY mode, an absolute LIMIT is relative to the number
@@ -1338,7 +1437,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 									   root->group_pathkeys))
 				tuple_fraction = 0.0;
 		}
-		else if (parse->hasAggs || root->hasHavingQual)
+		else if (parse->hasAggs || root->hasHavingQual || parse->groupingSets)
 		{
 			/*
 			 * Ungrouped aggregate will certainly want to read all the tuples,
@@ -1360,7 +1459,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 
 			distinctExprs = get_sortgrouplist_exprs(parse->distinctClause,
 													parse->targetList);
-			dNumGroups = estimate_num_groups(root, distinctExprs, path_rows);
+			dNumGroups = estimate_num_groups(root, distinctExprs, path_rows, NULL);
 
 			/*
 			 * Adjust tuple_fraction the same way as for GROUP BY, too.
@@ -1443,13 +1542,24 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		{
 			/*
 			 * If grouping, decide whether to use sorted or hashed grouping.
+			 * If grouping sets are present, we can currently use only
+			 * sorted grouping.
 			 */
-			use_hashed_grouping =
-				choose_hashed_grouping(root,
-									   tuple_fraction, limit_tuples,
-									   path_rows, path_width,
-									   cheapest_path, sorted_path,
-									   dNumGroups, &agg_costs);
+
+			if (parse->groupingSets)
+			{
+				use_hashed_grouping = false;
+			}
+			else
+			{
+				use_hashed_grouping =
+					choose_hashed_grouping(root,
+										   tuple_fraction, limit_tuples,
+										   path_rows, path_width,
+										   cheapest_path, sorted_path,
+										   dNumGroups, &agg_costs);
+			}
+
 			/* Also convert # groups to long int --- but 'ware overflow! */
 			numGroups = (long) Min(dNumGroups, (double) LONG_MAX);
 		}
@@ -1591,12 +1701,13 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 												numGroupCols,
 												groupColIdx,
 									extract_grouping_ops(parse->groupClause),
+												NIL,
 												numGroups,
 												result_plan);
 				/* Hashed aggregation produces randomly-ordered results */
 				current_pathkeys = NIL;
 			}
-			else if (parse->hasAggs)
+			else if (parse->hasAggs || parse->groupingSets)
 			{
 				/* Plain aggregate plan --- sort if needed */
 				AggStrategy aggstrategy;
@@ -1622,7 +1733,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 				else
 				{
 					aggstrategy = AGG_PLAIN;
-					/* Result will be only one row anyway; no sort order */
+					/* Result will have no sort order */
 					current_pathkeys = NIL;
 				}
 
@@ -1634,6 +1745,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 												numGroupCols,
 												groupColIdx,
 									extract_grouping_ops(parse->groupClause),
+												parse->groupingSets,
 												numGroups,
 												result_plan);
 			}
@@ -1849,7 +1961,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		 * result was already mostly unique).  If not, use the number of
 		 * distinct-groups calculated previously.
 		 */
-		if (parse->groupClause || root->hasHavingQual || parse->hasAggs)
+		if (parse->groupClause || parse->groupingSets || root->hasHavingQual || parse->hasAggs)
 			dNumDistinctRows = result_plan->plan_rows;
 		else
 			dNumDistinctRows = dNumGroups;
@@ -1890,6 +2002,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 								 extract_grouping_cols(parse->distinctClause,
 													result_plan->targetlist),
 								 extract_grouping_ops(parse->distinctClause),
+											NIL,
 											numDistinctRows,
 											result_plan);
 			/* Hashed aggregation produces randomly-ordered results */
@@ -2525,14 +2639,30 @@ limit_needed(Query *parse)
  * the parser already enforced that that matches ORDER BY.
  */
 static void
-preprocess_groupclause(PlannerInfo *root)
+preprocess_groupclause(PlannerInfo *root, List *force)
 {
 	Query	   *parse = root->parse;
-	List	   *new_groupclause;
+	List	   *new_groupclause = NIL;
 	bool		partial_match;
 	ListCell   *sl;
 	ListCell   *gl;
 
+	/* For grouping sets, we may need to force the ordering */
+	if (force)
+	{
+		foreach(sl, force)
+		{
+			Index ref = lfirst_int(sl);
+			SortGroupClause *cl = get_sortgroupref_clause(ref, parse->groupClause);
+
+			new_groupclause = lappend(new_groupclause, cl);
+		}
+
+		Assert(list_length(parse->groupClause) == list_length(new_groupclause));
+		parse->groupClause = new_groupclause;
+		return;
+	}
+
 	/* If no ORDER BY, nothing useful to do here */
 	if (parse->sortClause == NIL)
 		return;
@@ -2543,7 +2673,6 @@ preprocess_groupclause(PlannerInfo *root)
 	 *
 	 * This code assumes that the sortClause contains no duplicate items.
 	 */
-	new_groupclause = NIL;
 	foreach(sl, parse->sortClause)
 	{
 		SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
@@ -2595,6 +2724,145 @@ preprocess_groupclause(PlannerInfo *root)
 	parse->groupClause = new_groupclause;
 }
 
+
+/*
+ * Extract a list of grouping sets that can be implemented using a single
+ * rollup-type aggregate pass. The order of elements in each returned set is
+ * modified to ensure proper prefix relationships; the sets are returned in
+ * decreasing order of size. (The input must also be in descending order of
+ * size.)
+ *
+ * If we're passed in a sortclause, we follow its order of columns to the
+ * extent possible, to minimize the chance that we add unnecessary sorts.
+ *
+ * Sets that can't be accommodated within a rollup that includes the first
+ * (and therefore largest) grouping set in the input are added to the
+ * remainder list.
+ */
+
+static List *
+extract_rollup_sets(List *groupingSets, List *sortclause, List **remainder)
+{
+	ListCell   *lc;
+	ListCell   *lc2;
+	List	   *previous = linitial(groupingSets);
+	List	   *tmp_result = list_make1(previous);
+	List	   *result = NIL;
+
+	for_each_cell(lc, lnext(list_head(groupingSets)))
+	{
+		List   *candidate = lfirst(lc);
+		bool	ok = true;
+
+		foreach(lc2, candidate)
+		{
+			int ref = lfirst_int(lc2);
+			if (!list_member_int(previous, ref))
+			{
+				ok = false;
+				break;
+			}
+		}
+
+		if (ok)
+		{
+			tmp_result = lcons(candidate, tmp_result);
+			previous = candidate;
+		}
+		else
+			*remainder = lappend(*remainder, candidate);
+	}
+
+	/*
+	 * Reorder the list elements so that shorter sets are strict
+	 * prefixes of longer ones, and if we ever have a choice, try
+	 * to follow the sortclause if there is one. (We're trying
+	 * here to ensure that GROUPING SETS ((a,b),(b)) ORDER BY b,a
+	 * gets implemented in one pass.)
+	 */
+
+	previous = NIL;
+
+	foreach(lc, tmp_result)
+	{
+		List   *candidate = lfirst(lc);
+		List   *new_elems = list_difference_int(candidate, previous);
+
+		if (list_length(new_elems) > 0)
+		{
+			while (list_length(sortclause) > list_length(previous))
+			{
+				SortGroupClause *sc = list_nth(sortclause, list_length(previous));
+				int ref = sc->tleSortGroupRef;
+				if (list_member_int(new_elems, ref))
+				{
+					previous = lappend_int(previous, ref);
+					new_elems = list_delete_int(new_elems, ref);
+				}
+				else
+				{
+					sortclause = NIL;
+					break;
+				}
+			}
+
+			foreach(lc2, new_elems)
+			{
+				previous = lappend_int(previous, lfirst_int(lc2));
+			}
+		}
+
+		result = lcons(list_copy(previous), result);
+		list_free(new_elems);
+	}
+
+	list_free(previous);
+	list_free(tmp_result);
+
+	return result;
+}
+
+
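The subset test at the heart of extract_rollup_sets' first loop can be sketched in isolation. This is an illustrative standalone version, not the patch's code (plain int arrays stand in for Lists, and the names are my own): a candidate grouping set can extend the rollup chain only if every column reference it contains already appears in the previously accepted, larger set.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Standalone sketch of the subset test extract_rollup_sets performs:
 * a candidate grouping set joins the rollup chain only if every column
 * reference it contains is already present in the previously accepted
 * set.  Sets are plain int arrays here, not Lists.
 */
static bool
set_is_subset(const int *cand, size_t cand_n, const int *prev, size_t prev_n)
{
	size_t		i, j;

	for (i = 0; i < cand_n; i++)
	{
		bool		found = false;

		for (j = 0; j < prev_n; j++)
		{
			if (prev[j] == cand[i])
			{
				found = true;
				break;
			}
		}
		if (!found)
			return false;		/* candidate goes to the remainder list */
	}
	return true;				/* empty candidates are trivially subsets */
}
```

With input sets in decreasing size order, e.g. (a,b,c), (a,b), (b,c), (), this accepts (a,b,c), (a,b) and () into a single rollup chain and diverts (b,c) to the remainder list, which the current phase-1 code then rejects with "not implemented yet".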
+static void
+fixup_grouping_exprs(Node *clause, int *refmap)
+{
+	(void) fixup_grouping_exprs_walker(clause, refmap);
+}
+
+static bool
+fixup_grouping_exprs_walker(Node *node, int *refmap)
+{
+	if (node == NULL)
+		return false;
+	if (IsA(node, Grouping))
+	{
+		Grouping *g = (Grouping *) node;
+
+		/* If there are no grouping sets, we don't need the ref list. */
+		if (!refmap)
+		{
+			g->refs = NIL;
+		}
+		else
+		{
+			ListCell *lc;
+
+			foreach(lc, g->refs)
+			{
+				Assert(refmap[lfirst_int(lc)] > 0);
+				lfirst_int(lc) = refmap[lfirst_int(lc)] - 1;
+			}
+		}
+
+		/* No need to recurse into args. */
+		return false;
+	}
+	Assert(!IsA(node, SubLink));
+	return expression_tree_walker(node, fixup_grouping_exprs_walker,
+								  (void *) refmap);
+}
+
+
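The refmap translation used both for the grouping sets and for GROUPING() refs can be illustrated standalone. A hedged sketch under my own array representation: build a map from sortgroupref numbers (1-based and possibly sparse) to 1-based positions in groupClause order, then rewrite each ref to a 0-based groupClause index, as fixup_grouping_exprs_walker does above.

```c
#include <assert.h>
#include <stddef.h>

/*
 * Sketch of the two-step remapping grouping_planner performs: first
 * record, for each tleSortGroupRef in groupClause order, its 1-based
 * position; then rewrite a list of refs into 0-based groupClause
 * indices.  A zero refmap entry means the ref was never grouped.
 */
static void
build_refmap(const int *grouprefs, size_t n, int *refmap)
{
	int			ref = 0;
	size_t		i;

	for (i = 0; i < n; i++)
		refmap[grouprefs[i]] = ++ref;
}

static void
remap_refs(int *refs, size_t n, const int *refmap)
{
	size_t		i;

	for (i = 0; i < n; i++)
	{
		assert(refmap[refs[i]] > 0);	/* every ref must be grouped */
		refs[i] = refmap[refs[i]] - 1;	/* 1-based position -> 0-based index */
	}
}
```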
 /*
  * Compute query_pathkeys and other pathkeys during plan generation
  */
@@ -3040,7 +3308,7 @@ make_subplanTargetList(PlannerInfo *root,
 	 * If we're not grouping or aggregating, there's nothing to do here;
 	 * query_planner should receive the unmodified target list.
 	 */
-	if (!parse->hasAggs && !parse->groupClause && !root->hasHavingQual &&
+	if (!parse->hasAggs && !parse->groupClause && !parse->groupingSets && !root->hasHavingQual &&
 		!parse->hasWindowFuncs)
 	{
 		*need_tlist_eval = true;
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index 4d717df..ddec675 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -68,6 +68,12 @@ typedef struct
 	int			rtoffset;
 } fix_upper_expr_context;
 
+typedef struct
+{
+	PlannerInfo *root;
+	Bitmapset   *groupedcols;
+} set_group_vars_context;
+
 /*
  * Check if a Const node is a regclass value.  We accept plain OID too,
  * since a regclass Const will get folded to that type if it's an argument
@@ -134,6 +140,8 @@ static List *set_returning_clause_references(PlannerInfo *root,
 static bool fix_opfuncids_walker(Node *node, void *context);
 static bool extract_query_dependencies_walker(Node *node,
 								  PlannerInfo *context);
+static void set_group_vars(PlannerInfo *root, Agg *agg);
+static Node *set_group_vars_mutator(Node *node, set_group_vars_context *context);
 
 
 /*****************************************************************************
@@ -647,6 +655,9 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
 			}
 			break;
 		case T_Agg:
+			set_upper_references(root, plan, rtoffset);
+			set_group_vars(root, (Agg *) plan);
+			break;
 		case T_Group:
 			set_upper_references(root, plan, rtoffset);
 			break;
@@ -1246,6 +1257,67 @@ fix_scan_expr_walker(Node *node, fix_scan_expr_context *context)
 								  (void *) context);
 }
 
+
+/*
+ * set_group_vars
+ *    Modify any Var references in the target list of a non-trivial
+ *    (i.e. one containing grouping sets) Agg node to use GroupedVar instead,
+ *    which will conditionally replace them with nulls at runtime.
+ */
+static void
+set_group_vars(PlannerInfo *root, Agg *agg)
+{
+	set_group_vars_context context;
+	int i;
+	Bitmapset *cols = NULL;
+
+	if (!agg->groupingSets)
+		return;
+
+	context.root = root;
+
+	for (i = 0; i < agg->numCols; ++i)
+		cols = bms_add_member(cols, agg->grpColIdx[i]);
+
+	context.groupedcols = cols;
+
+	agg->plan.targetlist = (List *) set_group_vars_mutator((Node *) agg->plan.targetlist,
+														   &context);
+	agg->plan.qual = (List *) set_group_vars_mutator((Node *) agg->plan.qual,
+													 &context);
+}
+
+static Node *
+set_group_vars_mutator(Node *node, set_group_vars_context *context)
+{
+	if (node == NULL)
+		return NULL;
+	if (IsA(node, Var))
+	{
+		Var *var = (Var *) node;
+
+		if (var->varno == OUTER_VAR
+			&& bms_is_member(var->varattno, context->groupedcols))
+		{
+			var = copyVar(var);
+			var->xpr.type = T_GroupedVar;
+		}
+
+		return (Node *) var;
+	}
+	else if (IsA(node, Aggref) || IsA(node, Grouping))
+	{
+		/*
+		 * don't recurse into Aggrefs or Grouping nodes, since they see
+		 * the values prior to grouping.
+		 */
+		return node;
+	}
+	return expression_tree_mutator(node, set_group_vars_mutator,
+								   (void *) context);
+}
+
+
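The grouped-column bookkeeping above is a bitmapset membership test. A minimal standalone sketch, with a plain bitmask standing in for Bitmapset (so attribute numbers are assumed to fit in one machine word, which the real code does not require):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Sketch of what set_group_vars does with grpColIdx: collect the
 * grouping column attribute numbers into a set, then test each Var's
 * attno for membership to decide whether it becomes a GroupedVar.
 * A word-sized bitmask stands in for Bitmapset here.
 */
static unsigned
collect_grouped_cols(const int *attnos, size_t n)
{
	unsigned	cols = 0;
	size_t		i;

	for (i = 0; i < n; i++)
		cols |= 1u << attnos[i];
	return cols;
}

static bool
col_is_grouped(unsigned cols, int attno)
{
	return (cols & (1u << attno)) != 0;
}
```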
 /*
  * set_join_references
  *	  Modify the target list and quals of a join node to reference its
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index 3e7dc85..e0a2ca7 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -336,6 +336,48 @@ replace_outer_agg(PlannerInfo *root, Aggref *agg)
 }
 
 /*
+ * Generate a Param node to replace the given Grouping expression
+ * which is expected to have agglevelsup > 0 (ie, it is not local).
+ */
+static Param *
+replace_outer_grouping(PlannerInfo *root, Grouping *grp)
+{
+	Param	   *retval;
+	PlannerParamItem *pitem;
+	Index		levelsup;
+
+	Assert(grp->agglevelsup > 0 && grp->agglevelsup < root->query_level);
+
+	/* Find the query level the Grouping belongs to */
+	for (levelsup = grp->agglevelsup; levelsup > 0; levelsup--)
+		root = root->parent_root;
+
+	/*
+	 * It does not seem worthwhile to try to match duplicate outer GROUPING
+	 * expressions. Just make a new slot every time.
+	grp = (Grouping *) copyObject(grp);
+	IncrementVarSublevelsUp((Node *) grp, -((int) grp->agglevelsup), 0);
+	Assert(grp->agglevelsup == 0);
+
+	pitem = makeNode(PlannerParamItem);
+	pitem->item = (Node *) grp;
+	pitem->paramId = root->glob->nParamExec++;
+
+	root->plan_params = lappend(root->plan_params, pitem);
+
+	retval = makeNode(Param);
+	retval->paramkind = PARAM_EXEC;
+	retval->paramid = pitem->paramId;
+	retval->paramtype = exprType((Node *) grp);
+	retval->paramtypmod = -1;
+	retval->paramcollid = InvalidOid;
+	retval->location = grp->location;
+
+	return retval;
+}
+
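For context on what the hoisted Grouping node eventually evaluates to: per the spec, GROUPING(e1, ..., en) yields an integer with one bit per argument, most significant bit first, set when that argument is not grouped in the current grouping set. A hedged standalone sketch of just that bit arithmetic:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Spec semantics of GROUPING(e1, ..., en): one result bit per argument,
 * most significant bit first, set iff that expression is NOT part of
 * the grouping set that produced the current row.
 */
static int
grouping_value(const bool *arg_is_grouped, size_t nargs)
{
	int			result = 0;
	size_t		i;

	for (i = 0; i < nargs; i++)
		result = (result << 1) | (arg_is_grouped[i] ? 0 : 1);
	return result;
}
```

So for GROUPING(a, b) in the grouping set (a), a is grouped and b is not, giving binary 01 = 1; in the empty grouping set it gives 3.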
+/*
  * Generate a new Param node that will not conflict with any other.
  *
  * This is used to create Params representing subplan outputs.
@@ -1490,13 +1532,14 @@ simplify_EXISTS_query(Query *query)
 {
 	/*
 	 * We don't try to simplify at all if the query uses set operations,
-	 * aggregates, modifying CTEs, HAVING, LIMIT/OFFSET, or FOR UPDATE/SHARE;
-	 * none of these seem likely in normal usage and their possible effects
-	 * are complex.
+	 * aggregates, grouping sets, modifying CTEs, HAVING, LIMIT/OFFSET, or FOR
+	 * UPDATE/SHARE; none of these seem likely in normal usage and their
+	 * possible effects are complex.
 	 */
 	if (query->commandType != CMD_SELECT ||
 		query->setOperations ||
 		query->hasAggs ||
+		query->groupingSets ||
 		query->hasWindowFuncs ||
 		query->hasModifyingCTE ||
 		query->havingQual ||
@@ -1813,6 +1856,11 @@ replace_correlation_vars_mutator(Node *node, PlannerInfo *root)
 		if (((Aggref *) node)->agglevelsup > 0)
 			return (Node *) replace_outer_agg(root, (Aggref *) node);
 	}
+	if (IsA(node, Grouping))
+	{
+		if (((Grouping *) node)->agglevelsup > 0)
+			return (Node *) replace_outer_grouping(root, (Grouping *) node);
+	}
 	return expression_tree_mutator(node,
 								   replace_correlation_vars_mutator,
 								   (void *) root);
diff --git a/src/backend/optimizer/prep/prepjointree.c b/src/backend/optimizer/prep/prepjointree.c
index 9cb1378..cb8aeb6 100644
--- a/src/backend/optimizer/prep/prepjointree.c
+++ b/src/backend/optimizer/prep/prepjointree.c
@@ -1297,6 +1297,7 @@ is_simple_subquery(Query *subquery, RangeTblEntry *rte,
 	if (subquery->hasAggs ||
 		subquery->hasWindowFuncs ||
 		subquery->groupClause ||
+		subquery->groupingSets ||
 		subquery->havingQual ||
 		subquery->sortClause ||
 		subquery->distinctClause ||
diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c
index 0410fdd..3c71d7f 100644
--- a/src/backend/optimizer/prep/prepunion.c
+++ b/src/backend/optimizer/prep/prepunion.c
@@ -268,13 +268,15 @@ recurse_set_operations(Node *setOp, PlannerInfo *root,
 		 */
 		if (pNumGroups)
 		{
-			if (subquery->groupClause || subquery->distinctClause ||
+			if (subquery->groupClause || subquery->groupingSets ||
+				subquery->distinctClause ||
 				subroot->hasHavingQual || subquery->hasAggs)
 				*pNumGroups = subplan->plan_rows;
 			else
 				*pNumGroups = estimate_num_groups(subroot,
 								get_tlist_exprs(subquery->targetList, false),
-												  subplan->plan_rows);
+												  subplan->plan_rows,
+												  NULL);
 		}
 
 		/*
@@ -771,6 +773,7 @@ make_union_unique(SetOperationStmt *op, Plan *plan,
 								 extract_grouping_cols(groupList,
 													   plan->targetlist),
 								 extract_grouping_ops(groupList),
+								 NIL,
 								 numGroups,
 								 plan);
 		/* Hashed aggregation produces randomly-ordered results */
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 19b5cf7..1152195 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -4294,6 +4294,7 @@ inline_function(Oid funcid, Oid result_type, Oid result_collid,
 		querytree->jointree->fromlist ||
 		querytree->jointree->quals ||
 		querytree->groupClause ||
+		querytree->groupingSets ||
 		querytree->havingQual ||
 		querytree->windowClause ||
 		querytree->distinctClause ||
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index 319e8b2..a7bbacf 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1338,7 +1338,7 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
 	}
 
 	/* Estimate number of output rows */
-	pathnode->path.rows = estimate_num_groups(root, uniq_exprs, rel->rows);
+	pathnode->path.rows = estimate_num_groups(root, uniq_exprs, rel->rows, NULL);
 	numCols = list_length(uniq_exprs);
 
 	if (all_btree)
diff --git a/src/backend/optimizer/util/tlist.c b/src/backend/optimizer/util/tlist.c
index b5c6a44..efed20a 100644
--- a/src/backend/optimizer/util/tlist.c
+++ b/src/backend/optimizer/util/tlist.c
@@ -395,6 +395,28 @@ get_sortgrouplist_exprs(List *sgClauses, List *targetList)
  *****************************************************************************/
 
 /*
+ * get_sortgroupref_clause
+ *		Find the SortGroupClause matching the given SortGroupRef index,
+ *		and return it.
+ */
+SortGroupClause *
+get_sortgroupref_clause(Index sortref, List *clauses)
+{
+	ListCell   *l;
+
+	foreach(l, clauses)
+	{
+		SortGroupClause *cl = (SortGroupClause *) lfirst(l);
+
+		if (cl->tleSortGroupRef == sortref)
+			return cl;
+	}
+
+	elog(ERROR, "ORDER/GROUP BY expression not found in list");
+	return NULL;				/* keep compiler quiet */
+}
+
+/*
  * extract_grouping_ops - make an array of the equality operator OIDs
  *		for a SortGroupClause list
  */
diff --git a/src/backend/optimizer/util/var.c b/src/backend/optimizer/util/var.c
index d4f46b8..c8a7b43 100644
--- a/src/backend/optimizer/util/var.c
+++ b/src/backend/optimizer/util/var.c
@@ -564,6 +564,28 @@ pull_var_clause_walker(Node *node, pull_var_clause_context *context)
 				break;
 		}
 	}
+	else if (IsA(node, Grouping))
+	{
+		if (((Grouping *) node)->agglevelsup != 0)
+			elog(ERROR, "Upper-level GROUPING found where not expected");
+		switch (context->aggbehavior)
+		{
+			case PVC_REJECT_AGGREGATES:
+				elog(ERROR, "GROUPING found where not expected");
+				break;
+			case PVC_INCLUDE_AGGREGATES:
+			case PVC_RECURSE_AGGREGATES:
+				/*
+				 * We don't include the Grouping node in the result, and
+				 * we do NOT descend into the contained expression even
+				 * if the caller asked for it, because we never actually
+				 * evaluate it: the result is driven entirely off the
+				 * associated GROUP BY clause, so we never need to
+				 * extract the actual Vars here.
+				 */
+				return false;
+		}
+	}
 	else if (IsA(node, PlaceHolderVar))
 	{
 		if (((PlaceHolderVar *) node)->phlevelsup != 0)
diff --git a/src/backend/parser/analyze.c b/src/backend/parser/analyze.c
index fb6c44c..96ef36c 100644
--- a/src/backend/parser/analyze.c
+++ b/src/backend/parser/analyze.c
@@ -968,6 +968,7 @@ transformSelectStmt(ParseState *pstate, SelectStmt *stmt)
 
 	qry->groupClause = transformGroupClause(pstate,
 											stmt->groupClause,
+											&qry->groupingSets,
 											&qry->targetList,
 											qry->sortClause,
 											EXPR_KIND_GROUP_BY,
@@ -1014,7 +1015,7 @@ transformSelectStmt(ParseState *pstate, SelectStmt *stmt)
 	qry->hasSubLinks = pstate->p_hasSubLinks;
 	qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
 	qry->hasAggs = pstate->p_hasAggs;
-	if (pstate->p_hasAggs || qry->groupClause || qry->havingQual)
+	if (pstate->p_hasAggs || qry->groupClause || qry->groupingSets || qry->havingQual)
 		parseCheckAggregates(pstate, qry);
 
 	foreach(l, stmt->lockingClause)
@@ -1474,7 +1475,7 @@ transformSetOperationStmt(ParseState *pstate, SelectStmt *stmt)
 	qry->hasSubLinks = pstate->p_hasSubLinks;
 	qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
 	qry->hasAggs = pstate->p_hasAggs;
-	if (pstate->p_hasAggs || qry->groupClause || qry->havingQual)
+	if (pstate->p_hasAggs || qry->groupClause || qry->groupingSets || qry->havingQual)
 		parseCheckAggregates(pstate, qry);
 
 	foreach(l, lockingClause)
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index a113809..675f0a0 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -361,6 +361,10 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				create_generic_options alter_generic_options
 				relation_expr_list dostmt_opt_list
 
+%type <list>	group_by_list grouping_set_list
+%type <node>	group_by_item empty_grouping_set rollup_clause cube_clause
+%type <node>	grouping_sets_clause grouping_set
+
 %type <list>	opt_fdw_options fdw_options
 %type <defelt>	fdw_option
 
@@ -425,7 +429,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <list>	ExclusionConstraintList ExclusionConstraintElem
 %type <list>	func_arg_list
 %type <node>	func_arg_expr
-%type <list>	row type_list array_expr_list
+%type <list>	row explicit_row implicit_row type_list array_expr_list
 %type <node>	case_expr case_arg when_clause case_default
 %type <list>	when_clause_list
 %type <ival>	sub_type
@@ -547,7 +551,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	CLUSTER COALESCE COLLATE COLLATION COLUMN COMMENT COMMENTS COMMIT
 	COMMITTED CONCURRENTLY CONFIGURATION CONNECTION CONSTRAINT CONSTRAINTS
 	CONTENT_P CONTINUE_P CONVERSION_P COPY COST CREATE
-	CROSS CSV CURRENT_P
+	CROSS CSV CUBE CURRENT_P
 	CURRENT_CATALOG CURRENT_DATE CURRENT_ROLE CURRENT_SCHEMA
 	CURRENT_TIME CURRENT_TIMESTAMP CURRENT_USER CURSOR CYCLE
 
@@ -562,7 +566,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	FALSE_P FAMILY FETCH FILTER FIRST_P FLOAT_P FOLLOWING FOR
 	FORCE FOREIGN FORWARD FREEZE FROM FULL FUNCTION FUNCTIONS
 
-	GLOBAL GRANT GRANTED GREATEST GROUP_P
+	GLOBAL GRANT GRANTED GREATEST GROUP_P GROUPING
 
 	HANDLER HAVING HEADER_P HOLD HOUR_P
 
@@ -596,11 +600,11 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 	RANGE READ REAL REASSIGN RECHECK RECURSIVE REF REFERENCES REFRESH REINDEX
 	RELATIVE_P RELEASE RENAME REPEATABLE REPLACE REPLICA
-	RESET RESTART RESTRICT RETURNING RETURNS REVOKE RIGHT ROLE ROLLBACK
+	RESET RESTART RESTRICT RETURNING RETURNS REVOKE RIGHT ROLE ROLLBACK ROLLUP
 	ROW ROWS RULE
 
 	SAVEPOINT SCHEMA SCROLL SEARCH SECOND_P SECURITY SELECT SEQUENCE SEQUENCES
-	SERIALIZABLE SERVER SESSION SESSION_USER SET SETOF SHARE
+	SERIALIZABLE SERVER SESSION SESSION_USER SET SETS SETOF SHARE
 	SHOW SIMILAR SIMPLE SMALLINT SNAPSHOT SOME STABLE STANDALONE_P START
 	STATEMENT STATISTICS STDIN STDOUT STORAGE STRICT_P STRIP_P SUBSTRING
 	SYMMETRIC SYSID SYSTEM_P
@@ -9832,11 +9836,86 @@ first_or_next: FIRST_P								{ $$ = 0; }
 		;
 
 
+/*
+ * This syntax for group_clause tries to follow the spec quite closely.
+ * However, the spec allows only column references, not expressions,
+ * which introduces an ambiguity between implicit row constructors
+ * (a,b) and lists of column references.
+ *
+ * We handle this by using the a_expr production for what the spec calls
+ * <ordinary grouping set>, which in the spec represents either one column
+ * reference or a parenthesized list of column references. We then check the
+ * top node of the a_expr to see if it's an implicit RowExpr, and if so,
+ * just grab and use its argument list, discarding the node itself. (This
+ * is done in parse analysis, not here.)
+ *
+ * (We abuse the row_format field of RowExpr to distinguish implicit from
+ * explicit row constructors; it's debatable whether anyone sanely wants to
+ * use them in a group clause, but if they have a reason to, we make it
+ * possible.)
+ *
+ * Each item in the group_clause list is either an expression tree or a
+ * GroupingSet node of some type.
+ */
+
 group_clause:
-			GROUP_P BY expr_list					{ $$ = $3; }
+			GROUP_P BY group_by_list				{ $$ = $3; }
 			| /*EMPTY*/								{ $$ = NIL; }
 		;
 
+group_by_list:
+			group_by_item							{ $$ = list_make1($1); }
+			| group_by_list ',' group_by_item		{ $$ = lappend($1,$3); }
+		;
+
+group_by_item:
+			a_expr									{ $$ = $1; }
+			| empty_grouping_set					{ $$ = $1; }
+			| rollup_clause							{ $$ = $1; }
+			| cube_clause							{ $$ = $1; }
+			| grouping_sets_clause					{ $$ = $1; }
+		;
+
+empty_grouping_set:
+			'(' ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_EMPTY, NIL, @1);
+				}
+		;
+
+rollup_clause:
+			ROLLUP '(' expr_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_ROLLUP, $3, @1);
+				}
+		;
+
+cube_clause:
+			CUBE '(' expr_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_CUBE, $3, @1);
+				}
+		;
+
+grouping_sets_clause:
+			GROUPING SETS '(' grouping_set_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_SETS, $4, @1);
+				}
+		;
+
+grouping_set:
+			a_expr									{ $$ = $1; }
+			| empty_grouping_set					{ $$ = $1; }
+			| rollup_clause							{ $$ = $1; }
+			| cube_clause							{ $$ = $1; }
+			| grouping_sets_clause					{ $$ = $1; }
+		;
+
+grouping_set_list:
+			grouping_set							{ $$ = list_make1($1); }
+			| grouping_set_list ',' grouping_set	{ $$ = lappend($1,$3); }
+		;
+
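Downstream of these productions, expand_grouping_sets (invoked from the planner hunk above, not shown here) flattens the GroupingSet nodes per the spec's syntactic transformations; e.g. ROLLUP(e1, ..., en) becomes the n+1 sets from the full prefix down to the empty set. A standalone sketch of just the rollup expansion (array representation and names are illustrative, not the patch's):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_COLS 8

/*
 * Illustrative expansion of ROLLUP(e1, ..., en) into its n+1 grouping
 * sets, largest first, as the spec prescribes:
 * (e1..en), (e1..en-1), ..., (e1), ().
 */
static size_t
expand_rollup(const int *cols, size_t ncols,
			  int sets[][MAX_COLS], size_t *set_lens)
{
	size_t		len;

	for (len = ncols + 1; len-- > 0;)
	{
		size_t		idx = ncols - len;	/* emit in decreasing size order */
		size_t		i;

		set_lens[idx] = len;
		for (i = 0; i < len; i++)
			sets[idx][i] = cols[i];
	}
	return ncols + 1;
}
```

CUBE(e1, ..., en) instead expands to all 2^n subsets, which is why it cannot in general be handled by the single-rollup path implemented in this phase.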
 having_clause:
 			HAVING a_expr							{ $$ = $2; }
 			| /*EMPTY*/								{ $$ = NULL; }
@@ -11415,15 +11494,33 @@ c_expr:		columnref								{ $$ = $1; }
 					n->location = @1;
 					$$ = (Node *)n;
 				}
-			| row
+			| explicit_row
 				{
 					RowExpr *r = makeNode(RowExpr);
 					r->args = $1;
 					r->row_typeid = InvalidOid;	/* not analyzed yet */
 					r->colnames = NIL;	/* to be filled in during analysis */
+					r->row_format = COERCE_EXPLICIT_CALL; /* abuse */
 					r->location = @1;
 					$$ = (Node *)r;
 				}
+			| implicit_row
+				{
+					RowExpr *r = makeNode(RowExpr);
+					r->args = $1;
+					r->row_typeid = InvalidOid;	/* not analyzed yet */
+					r->colnames = NIL;	/* to be filled in during analysis */
+					r->row_format = COERCE_IMPLICIT_CAST; /* abuse */
+					r->location = @1;
+					$$ = (Node *)r;
+				}
+			| GROUPING '(' expr_list ')'
+				{
+					Grouping *g = makeNode(Grouping);
+					g->args = $3;
+					g->location = @1;
+					$$ = (Node *)g;
+				}
 		;
 
 func_application: func_name '(' ')'
@@ -12173,6 +12270,13 @@ row:		ROW '(' expr_list ')'					{ $$ = $3; }
 			| '(' expr_list ',' a_expr ')'			{ $$ = lappend($2, $4); }
 		;
 
+explicit_row:	ROW '(' expr_list ')'				{ $$ = $3; }
+			| ROW '(' ')'							{ $$ = NIL; }
+		;
+
+implicit_row:	'(' expr_list ',' a_expr ')'		{ $$ = lappend($2, $4); }
+		;
+
 sub_type:	ANY										{ $$ = ANY_SUBLINK; }
 			| SOME									{ $$ = ANY_SUBLINK; }
 			| ALL									{ $$ = ALL_SUBLINK; }
@@ -13071,6 +13175,7 @@ unreserved_keyword:
 			| SERVER
 			| SESSION
 			| SET
+			| SETS
 			| SHARE
 			| SHOW
 			| SIMPLE
@@ -13147,6 +13252,7 @@ col_name_keyword:
 			| CHAR_P
 			| CHARACTER
 			| COALESCE
+			| CUBE
 			| DEC
 			| DECIMAL_P
 			| EXISTS
@@ -13168,6 +13274,7 @@ col_name_keyword:
 			| POSITION
 			| PRECISION
 			| REAL
+			| ROLLUP
 			| ROW
 			| SETOF
 			| SMALLINT
@@ -13269,6 +13376,7 @@ reserved_keyword:
 			| FROM
 			| GRANT
 			| GROUP_P
+			| GROUPING
 			| HAVING
 			| IN_P
 			| INITIALLY
diff --git a/src/backend/parser/parse_agg.c b/src/backend/parser/parse_agg.c
index c984b7d..15a06df 100644
--- a/src/backend/parser/parse_agg.c
+++ b/src/backend/parser/parse_agg.c
@@ -42,7 +42,9 @@ typedef struct
 {
 	ParseState *pstate;
 	Query	   *qry;
+	PlannerInfo *root;
 	List	   *groupClauses;
+	List	   *groupClauseVars;
 	bool		have_non_var_grouping;
 	List	  **func_grouped_rels;
 	int			sublevels_up;
@@ -56,11 +58,248 @@ static int check_agg_arguments(ParseState *pstate,
 static bool check_agg_arguments_walker(Node *node,
 						   check_agg_arguments_context *context);
 static void check_ungrouped_columns(Node *node, ParseState *pstate, Query *qry,
-						List *groupClauses, bool have_non_var_grouping,
+						List *groupClauses, List *groupClauseVars,
+						bool have_non_var_grouping,
 						List **func_grouped_rels);
 static bool check_ungrouped_columns_walker(Node *node,
 							   check_ungrouped_columns_context *context);
+static void finalize_grouping_exprs(Node *node, ParseState *pstate, Query *qry,
+									List *groupClauses, PlannerInfo *root,
+									bool have_non_var_grouping);
+static bool finalize_grouping_exprs_walker(Node *node,
+							   check_ungrouped_columns_context *context);
+static void check_agglevels_and_constraints(ParseState *pstate, Node *expr);
+static List *expand_groupingset_node(GroupingSet *gs);
+
+
+static void check_agglevels_and_constraints(ParseState *pstate, Node *expr)
+{
+	List	   *directargs = NIL;
+	List	   *args = NIL;
+	Expr	   *filter = NULL;
+	int			min_varlevel;
+	int			location = -1;
+	const char *err;
+	bool		errkind;
+	bool		isAgg = false;
+
+	if (IsA(expr, Aggref))
+	{
+		Aggref *agg = (Aggref *) expr;
+
+		directargs = agg->aggdirectargs;
+		args = agg->args;
+		filter = agg->aggfilter;
+
+		location = agg->location;
+
+		isAgg = true;
+	}
+	else if (IsA(expr, Grouping))
+	{
+		Grouping *grp = (Grouping *) expr;
+
+		args = grp->args;
+
+		location = grp->location;
+	}
+
+	/*
+	 * Check the arguments to compute the aggregate's level and detect
+	 * improper nesting.
+	 */
+	min_varlevel = check_agg_arguments(pstate,
+									   directargs,
+									   args,
+									   filter);
+
+	if (IsA(expr, Aggref))
+		((Aggref *) expr)->agglevelsup = min_varlevel;
+	else if (IsA(expr, Grouping))
+		((Grouping *) expr)->agglevelsup = min_varlevel;
+
+	/* Mark the correct pstate level as having aggregates */
+	while (min_varlevel-- > 0)
+		pstate = pstate->parentParseState;
+	pstate->p_hasAggs = true;
+
+	/*
+	 * Check to see if the aggregate function is in an invalid place within
+	 * its aggregation query.
+	 *
+	 * For brevity we support two schemes for reporting an error here: set
+	 * "err" to a custom message, or set "errkind" true if the error context
+	 * is sufficiently identified by what ParseExprKindName will return, *and*
+	 * what it will return is just a SQL keyword.  (Otherwise, use a custom
+	 * message to avoid creating translation problems.)
+	 */
+	err = NULL;
+	errkind = false;
+	switch (pstate->p_expr_kind)
+	{
+		case EXPR_KIND_NONE:
+			Assert(false);		/* can't happen */
+			break;
+		case EXPR_KIND_OTHER:
+			/* Accept aggregate/grouping here; caller must throw error if wanted */
+			break;
+		case EXPR_KIND_JOIN_ON:
+		case EXPR_KIND_JOIN_USING:
+			if (isAgg)
+				err = _("aggregate functions are not allowed in JOIN conditions");
+			else
+				err = _("GROUPING is not allowed in JOIN conditions");
+
+			break;
+		case EXPR_KIND_FROM_SUBSELECT:
+			/* Should only be possible in a LATERAL subquery */
+			Assert(pstate->p_lateral_active);
+			/* Aggregate/grouping scope rules make it worth being explicit here */
+			if (isAgg)
+				err = _("aggregate functions are not allowed in FROM clause of their own query level");
+			else
+				err = _("GROUPING is not allowed in FROM clause of its own query level");
+
+			break;
+		case EXPR_KIND_FROM_FUNCTION:
+			if (isAgg)
+				err = _("aggregate functions are not allowed in functions in FROM");
+			else
+				err = _("GROUPING is not allowed in functions in FROM");
+
+			break;
+		case EXPR_KIND_WHERE:
+			errkind = true;
+			break;
+		case EXPR_KIND_HAVING:
+			/* okay */
+			break;
+		case EXPR_KIND_FILTER:
+			errkind = true;
+			break;
+		case EXPR_KIND_WINDOW_PARTITION:
+			/* okay */
+			break;
+		case EXPR_KIND_WINDOW_ORDER:
+			/* okay */
+			break;
+		case EXPR_KIND_WINDOW_FRAME_RANGE:
+			if (isAgg)
+				err = _("aggregate functions are not allowed in window RANGE");
+			else
+				err = _("GROUPING is not allowed in window RANGE");
 
+			break;
+		case EXPR_KIND_WINDOW_FRAME_ROWS:
+			if (isAgg)
+				err = _("aggregate functions are not allowed in window ROWS");
+			else
+				err = _("GROUPING is not allowed in window ROWS");
+
+			break;
+		case EXPR_KIND_SELECT_TARGET:
+			/* okay */
+			break;
+		case EXPR_KIND_INSERT_TARGET:
+		case EXPR_KIND_UPDATE_SOURCE:
+		case EXPR_KIND_UPDATE_TARGET:
+			errkind = true;
+			break;
+		case EXPR_KIND_GROUP_BY:
+			errkind = true;
+			break;
+		case EXPR_KIND_ORDER_BY:
+			/* okay */
+			break;
+		case EXPR_KIND_DISTINCT_ON:
+			/* okay */
+			break;
+		case EXPR_KIND_LIMIT:
+		case EXPR_KIND_OFFSET:
+			errkind = true;
+			break;
+		case EXPR_KIND_RETURNING:
+			errkind = true;
+			break;
+		case EXPR_KIND_VALUES:
+			errkind = true;
+			break;
+		case EXPR_KIND_CHECK_CONSTRAINT:
+		case EXPR_KIND_DOMAIN_CHECK:
+			if (isAgg)
+				err = _("aggregate functions are not allowed in check constraints");
+			else
+				err = _("GROUPING is not allowed in check constraints");
+
+			break;
+		case EXPR_KIND_COLUMN_DEFAULT:
+		case EXPR_KIND_FUNCTION_DEFAULT:
+
+			if (isAgg)
+				err = _("aggregate functions are not allowed in DEFAULT expressions");
+			else
+				err = _("GROUPING is not allowed in DEFAULT expressions");
+
+			break;
+		case EXPR_KIND_INDEX_EXPRESSION:
+			if (isAgg)
+				err = _("aggregate functions are not allowed in index expressions");
+			else
+				err = _("GROUPING is not allowed in index expressions");
+
+			break;
+		case EXPR_KIND_INDEX_PREDICATE:
+			if (isAgg)
+				err = _("aggregate functions are not allowed in index predicates");
+			else
+				err = _("GROUPING is not allowed in index predicates");
+
+			break;
+		case EXPR_KIND_ALTER_COL_TRANSFORM:
+			if (isAgg)
+				err = _("aggregate functions are not allowed in transform expressions");
+			else
+				err = _("GROUPING is not allowed in transform expressions");
+
+			break;
+		case EXPR_KIND_EXECUTE_PARAMETER:
+			if (isAgg)
+				err = _("aggregate functions are not allowed in EXECUTE parameters");
+			else
+				err = _("GROUPING is not allowed in EXECUTE parameters");
+
+			break;
+		case EXPR_KIND_TRIGGER_WHEN:
+			if (isAgg)
+				err = _("aggregate functions are not allowed in trigger WHEN conditions");
+			else
+				err = _("GROUPING is not allowed in trigger WHEN conditions");
+
+			break;
+
+			/*
+			 * There is intentionally no default: case here, so that the
+			 * compiler will warn if we add a new ParseExprKind without
+			 * extending this switch.  If we do see an unrecognized value at
+			 * runtime, the behavior will be the same as for EXPR_KIND_OTHER,
+			 * which is sane anyway.
+			 */
+	}
+
+	if (err)
+		ereport(ERROR,
+				(errcode(ERRCODE_GROUPING_ERROR),
+				 errmsg_internal("%s", err),
+				 parser_errposition(pstate, location)));
+
+	if (errkind)
+		ereport(ERROR,
+				(errcode(ERRCODE_GROUPING_ERROR),
+				 /* translator: %s is name of a SQL construct, eg GROUP BY */
+				 errmsg(isAgg ? "aggregate functions are not allowed in %s"
+						: "GROUPING is not allowed in %s", ParseExprKindName(pstate->p_expr_kind)),
+				 parser_errposition(pstate, location)));
+}
 
 /*
  * transformAggregateCall -
@@ -96,10 +335,7 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 	List	   *tdistinct = NIL;
 	AttrNumber	attno = 1;
 	int			save_next_resno;
-	int			min_varlevel;
 	ListCell   *lc;
-	const char *err;
-	bool		errkind;
 
 	if (AGGKIND_IS_ORDERED_SET(agg->aggkind))
 	{
@@ -214,148 +450,44 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 	agg->aggorder = torder;
 	agg->aggdistinct = tdistinct;
 
-	/*
-	 * Check the arguments to compute the aggregate's level and detect
-	 * improper nesting.
-	 */
-	min_varlevel = check_agg_arguments(pstate,
-									   agg->aggdirectargs,
-									   agg->args,
-									   agg->aggfilter);
-	agg->agglevelsup = min_varlevel;
+	check_agglevels_and_constraints(pstate, (Node *) agg);
+}
 
-	/* Mark the correct pstate level as having aggregates */
-	while (min_varlevel-- > 0)
-		pstate = pstate->parentParseState;
-	pstate->p_hasAggs = true;
+/* transformGroupingExpr -
+ *	  Transform a GROUPING() expression
+ */
+Node *
+transformGroupingExpr(ParseState *pstate, Grouping *p)
+{
+	ListCell   *lc;
+	List	   *args = p->args;
+	List	   *result_list = NIL;
+	Grouping   *result = makeNode(Grouping);
 
-	/*
-	 * Check to see if the aggregate function is in an invalid place within
-	 * its aggregation query.
-	 *
-	 * For brevity we support two schemes for reporting an error here: set
-	 * "err" to a custom message, or set "errkind" true if the error context
-	 * is sufficiently identified by what ParseExprKindName will return, *and*
-	 * what it will return is just a SQL keyword.  (Otherwise, use a custom
-	 * message to avoid creating translation problems.)
-	 */
-	err = NULL;
-	errkind = false;
-	switch (pstate->p_expr_kind)
+	if (list_length(args) > 31)
+		ereport(ERROR,
+				(errcode(ERRCODE_TOO_MANY_ARGUMENTS),
+				 errmsg("GROUPING must have fewer than 32 arguments"),
+				 parser_errposition(pstate, p->location)));
+
+	foreach(lc, args)
 	{
-		case EXPR_KIND_NONE:
-			Assert(false);		/* can't happen */
-			break;
-		case EXPR_KIND_OTHER:
-			/* Accept aggregate here; caller must throw error if wanted */
-			break;
-		case EXPR_KIND_JOIN_ON:
-		case EXPR_KIND_JOIN_USING:
-			err = _("aggregate functions are not allowed in JOIN conditions");
-			break;
-		case EXPR_KIND_FROM_SUBSELECT:
-			/* Should only be possible in a LATERAL subquery */
-			Assert(pstate->p_lateral_active);
-			/* Aggregate scope rules make it worth being explicit here */
-			err = _("aggregate functions are not allowed in FROM clause of their own query level");
-			break;
-		case EXPR_KIND_FROM_FUNCTION:
-			err = _("aggregate functions are not allowed in functions in FROM");
-			break;
-		case EXPR_KIND_WHERE:
-			errkind = true;
-			break;
-		case EXPR_KIND_HAVING:
-			/* okay */
-			break;
-		case EXPR_KIND_FILTER:
-			errkind = true;
-			break;
-		case EXPR_KIND_WINDOW_PARTITION:
-			/* okay */
-			break;
-		case EXPR_KIND_WINDOW_ORDER:
-			/* okay */
-			break;
-		case EXPR_KIND_WINDOW_FRAME_RANGE:
-			err = _("aggregate functions are not allowed in window RANGE");
-			break;
-		case EXPR_KIND_WINDOW_FRAME_ROWS:
-			err = _("aggregate functions are not allowed in window ROWS");
-			break;
-		case EXPR_KIND_SELECT_TARGET:
-			/* okay */
-			break;
-		case EXPR_KIND_INSERT_TARGET:
-		case EXPR_KIND_UPDATE_SOURCE:
-		case EXPR_KIND_UPDATE_TARGET:
-			errkind = true;
-			break;
-		case EXPR_KIND_GROUP_BY:
-			errkind = true;
-			break;
-		case EXPR_KIND_ORDER_BY:
-			/* okay */
-			break;
-		case EXPR_KIND_DISTINCT_ON:
-			/* okay */
-			break;
-		case EXPR_KIND_LIMIT:
-		case EXPR_KIND_OFFSET:
-			errkind = true;
-			break;
-		case EXPR_KIND_RETURNING:
-			errkind = true;
-			break;
-		case EXPR_KIND_VALUES:
-			errkind = true;
-			break;
-		case EXPR_KIND_CHECK_CONSTRAINT:
-		case EXPR_KIND_DOMAIN_CHECK:
-			err = _("aggregate functions are not allowed in check constraints");
-			break;
-		case EXPR_KIND_COLUMN_DEFAULT:
-		case EXPR_KIND_FUNCTION_DEFAULT:
-			err = _("aggregate functions are not allowed in DEFAULT expressions");
-			break;
-		case EXPR_KIND_INDEX_EXPRESSION:
-			err = _("aggregate functions are not allowed in index expressions");
-			break;
-		case EXPR_KIND_INDEX_PREDICATE:
-			err = _("aggregate functions are not allowed in index predicates");
-			break;
-		case EXPR_KIND_ALTER_COL_TRANSFORM:
-			err = _("aggregate functions are not allowed in transform expressions");
-			break;
-		case EXPR_KIND_EXECUTE_PARAMETER:
-			err = _("aggregate functions are not allowed in EXECUTE parameters");
-			break;
-		case EXPR_KIND_TRIGGER_WHEN:
-			err = _("aggregate functions are not allowed in trigger WHEN conditions");
-			break;
+		Node *current_result;
+
+		current_result = transformExpr(pstate, (Node *) lfirst(lc), pstate->p_expr_kind);
+
+		/* acceptability of expressions is checked later */
 
-			/*
-			 * There is intentionally no default: case here, so that the
-			 * compiler will warn if we add a new ParseExprKind without
-			 * extending this switch.  If we do see an unrecognized value at
-			 * runtime, the behavior will be the same as for EXPR_KIND_OTHER,
-			 * which is sane anyway.
-			 */
+		result_list = lappend(result_list, current_result);
 	}
-	if (err)
-		ereport(ERROR,
-				(errcode(ERRCODE_GROUPING_ERROR),
-				 errmsg_internal("%s", err),
-				 parser_errposition(pstate, agg->location)));
-	if (errkind)
-		ereport(ERROR,
-				(errcode(ERRCODE_GROUPING_ERROR),
-		/* translator: %s is name of a SQL construct, eg GROUP BY */
-				 errmsg("aggregate functions are not allowed in %s",
-						ParseExprKindName(pstate->p_expr_kind)),
-				 parser_errposition(pstate, agg->location)));
-}
 
+	result->args = result_list;
+	result->location = p->location;
+
+	check_agglevels_and_constraints(pstate, (Node *) result);
+
+	return (Node *) result;
+}
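(Aside, not part of the patch: the `refs` list that `finalize_grouping_exprs` attaches to each Grouping node is what lets the executor later compute the GROUPING() value defined by the spec: one bit per argument, leftmost argument in the most significant bit, set when that expression is not grouped in the current grouping set. A minimal Python sketch of that defined value, with the helper name `grouping_value` being purely illustrative:)

```python
# Illustrative sketch of the spec-defined GROUPING() result; not the
# patch's C code. args is the argument list in query order; current_set
# is the set of grouped expressions for the row being produced.

def grouping_value(args, current_set):
    val = 0
    for expr in args:
        # bit is 1 when expr is NOT part of the row's grouping set
        val = (val << 1) | (0 if expr in current_set else 1)
    return val

# For the row produced by the set (a) out of GROUPING SETS ((a, b), (a), ()):
print(grouping_value(["a", "b"], {"a"}))  # b is rolled up, so GROUPING(a, b) = 1
```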
 /*
  * check_agg_arguments
  *	  Scan the arguments of an aggregate function to determine the
@@ -527,6 +659,23 @@ check_agg_arguments_walker(Node *node,
 		context->sublevels_up--;
 		return result;
 	}
+
+	if (IsA(node, Grouping))
+	{
+		int			agglevelsup = ((Grouping *) node)->agglevelsup;
+
+		/* convert levelsup to frame of reference of original query */
+		agglevelsup -= context->sublevels_up;
+		/* ignore local aggs of subqueries */
+		if (agglevelsup >= 0)
+		{
+			if (context->min_agglevel < 0 ||
+				context->min_agglevel > agglevelsup)
+				context->min_agglevel = agglevelsup;
+		}
+		/* Continue and descend into subtree */
+	}
+
 	return expression_tree_walker(node,
 								  check_agg_arguments_walker,
 								  (void *) context);
@@ -770,17 +919,41 @@ transformWindowFuncCall(ParseState *pstate, WindowFunc *wfunc,
 void
 parseCheckAggregates(ParseState *pstate, Query *qry)
 {
+	List       *gset_common = NIL;
 	List	   *groupClauses = NIL;
+	List	   *groupClauseVars = NIL;
 	bool		have_non_var_grouping;
 	List	   *func_grouped_rels = NIL;
 	ListCell   *l;
 	bool		hasJoinRTEs;
 	bool		hasSelfRefRTEs;
-	PlannerInfo *root;
+	PlannerInfo *root = NULL;
 	Node	   *clause;
 
 	/* This should only be called if we found aggregates or grouping */
-	Assert(pstate->p_hasAggs || qry->groupClause || qry->havingQual);
+	Assert(pstate->p_hasAggs || qry->groupClause || qry->havingQual || qry->groupingSets);
+
+	/*
+	 * If we have grouping sets, expand them and find the intersection of
+	 * all sets (which will often be empty, so help things along by
+	 * seeding the intersect with the smallest set).
+	 */
+	if (qry->groupingSets)
+	{
+		List *gsets = expand_grouping_sets(qry->groupingSets);
+
+		gset_common = llast(gsets);
+
+		if (gset_common)
+		{
+			foreach(l, gsets)
+			{
+				gset_common = list_intersection_int(gset_common, lfirst(l));
+				if (!gset_common)
+					break;
+			}
+		}
+	}
 
 	/*
 	 * Scan the range table to see if there are JOIN or self-reference CTE
@@ -800,15 +973,19 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 	/*
 	 * Build a list of the acceptable GROUP BY expressions for use by
 	 * check_ungrouped_columns().
+	 *
+	 * We get the TLE, not just the expr, because GROUPING wants to know
+	 * the sortgroupref.
 	 */
 	foreach(l, qry->groupClause)
 	{
 		SortGroupClause *grpcl = (SortGroupClause *) lfirst(l);
-		Node	   *expr;
+		TargetEntry	   *expr;
 
-		expr = get_sortgroupclause_expr(grpcl, qry->targetList);
+		expr = get_sortgroupclause_tle(grpcl, qry->targetList);
 		if (expr == NULL)
 			continue;			/* probably cannot happen */
+
 		groupClauses = lcons(expr, groupClauses);
 	}
 
@@ -830,21 +1007,28 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 		groupClauses = (List *) flatten_join_alias_vars(root,
 													  (Node *) groupClauses);
 	}
-	else
-		root = NULL;			/* keep compiler quiet */
 
 	/*
 	 * Detect whether any of the grouping expressions aren't simple Vars; if
 	 * they're all Vars then we don't have to work so hard in the recursive
 	 * scans.  (Note we have to flatten aliases before this.)
+	 *
+	 * Track Vars that are included in all grouping sets separately in
+	 * groupClauseVars, since these are the only ones we can use to check
+	 * for functional dependencies.
 	 */
 	have_non_var_grouping = false;
 	foreach(l, groupClauses)
 	{
-		if (!IsA((Node *) lfirst(l), Var))
+		TargetEntry *tle = lfirst(l);
+		if (!IsA(tle->expr, Var))
 		{
 			have_non_var_grouping = true;
-			break;
+		}
+		else if (!qry->groupingSets ||
+				 list_member_int(gset_common, tle->ressortgroupref))
+		{
+			groupClauseVars = lappend(groupClauseVars, tle->expr);
 		}
 	}
 
@@ -857,17 +1041,25 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 	 * grouping expressions themselves --- but they'll all pass the test ...
 	 */
 	clause = (Node *) qry->targetList;
+	finalize_grouping_exprs(clause, pstate, qry,
+							groupClauses, root,
+							have_non_var_grouping);
 	if (hasJoinRTEs)
 		clause = flatten_join_alias_vars(root, clause);
 	check_ungrouped_columns(clause, pstate, qry,
-							groupClauses, have_non_var_grouping,
+							groupClauses, groupClauseVars,
+							have_non_var_grouping,
 							&func_grouped_rels);
 
 	clause = (Node *) qry->havingQual;
+	finalize_grouping_exprs(clause, pstate, qry,
+							groupClauses, root,
+							have_non_var_grouping);
 	if (hasJoinRTEs)
 		clause = flatten_join_alias_vars(root, clause);
 	check_ungrouped_columns(clause, pstate, qry,
-							groupClauses, have_non_var_grouping,
+							groupClauses, groupClauseVars,
+							have_non_var_grouping,
 							&func_grouped_rels);
 
 	/*
@@ -904,14 +1096,17 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
  */
 static void
 check_ungrouped_columns(Node *node, ParseState *pstate, Query *qry,
-						List *groupClauses, bool have_non_var_grouping,
+						List *groupClauses, List *groupClauseVars,
+						bool have_non_var_grouping,
 						List **func_grouped_rels)
 {
 	check_ungrouped_columns_context context;
 
 	context.pstate = pstate;
 	context.qry = qry;
+	context.root = NULL;
 	context.groupClauses = groupClauses;
+	context.groupClauseVars = groupClauseVars;
 	context.have_non_var_grouping = have_non_var_grouping;
 	context.func_grouped_rels = func_grouped_rels;
 	context.sublevels_up = 0;
@@ -965,6 +1160,16 @@ check_ungrouped_columns_walker(Node *node,
 			return false;
 	}
 
+	if (IsA(node, Grouping))
+	{
+		Grouping *grp = (Grouping *) node;
+
+		/* we handled Grouping separately, no need to recheck at this level. */
+
+		if ((int) grp->agglevelsup >= context->sublevels_up)
+			return false;
+	}
+
 	/*
 	 * If we have any GROUP BY items that are not simple Vars, check to see if
 	 * subexpression as a whole matches any GROUP BY item. We need to do this
@@ -976,7 +1181,9 @@ check_ungrouped_columns_walker(Node *node,
 	{
 		foreach(gl, context->groupClauses)
 		{
-			if (equal(node, lfirst(gl)))
+			TargetEntry *tle = lfirst(gl);
+
+			if (equal(node, tle->expr))
 				return false;	/* acceptable, do not descend more */
 		}
 	}
@@ -1003,13 +1210,15 @@ check_ungrouped_columns_walker(Node *node,
 		{
 			foreach(gl, context->groupClauses)
 			{
-				Var		   *gvar = (Var *) lfirst(gl);
+				Var		   *gvar = (Var *) ((TargetEntry *)lfirst(gl))->expr;
 
 				if (IsA(gvar, Var) &&
 					gvar->varno == var->varno &&
 					gvar->varattno == var->varattno &&
 					gvar->varlevelsup == 0)
+				{
 					return false;		/* acceptable, we're okay */
+				}
 			}
 		}
 
@@ -1040,7 +1249,7 @@ check_ungrouped_columns_walker(Node *node,
 			if (check_functional_grouping(rte->relid,
 										  var->varno,
 										  0,
-										  context->groupClauses,
+										  context->groupClauseVars,
 										  &context->qry->constraintDeps))
 			{
 				*context->func_grouped_rels =
@@ -1085,6 +1294,384 @@ check_ungrouped_columns_walker(Node *node,
 }
 
 /*
+ * finalize_grouping_exprs -
+ *	  Scan the given expression tree for GROUPING() and related calls,
+ *    and validate and process their arguments.
+ *
+ * This is split out from check_ungrouped_columns above because it needs
+ * to modify the nodes (which it does in-place, not via a mutator) while
+ * check_ungrouped_columns may see only a copy of the original thanks to
+ * flattening of join alias vars. So here, we flatten each individual
+ * GROUPING argument as we see it before comparing it.
+ */
+static void
+finalize_grouping_exprs(Node *node, ParseState *pstate, Query *qry,
+						List *groupClauses, PlannerInfo *root,
+						bool have_non_var_grouping)
+{
+	check_ungrouped_columns_context context;
+
+	context.pstate = pstate;
+	context.qry = qry;
+	context.root = root;
+	context.groupClauses = groupClauses;
+	context.groupClauseVars = NIL;
+	context.have_non_var_grouping = have_non_var_grouping;
+	context.func_grouped_rels = NULL;
+	context.sublevels_up = 0;
+	context.in_agg_direct_args = false;
+	finalize_grouping_exprs_walker(node, &context);
+}
+
+static bool
+finalize_grouping_exprs_walker(Node *node,
+							   check_ungrouped_columns_context *context)
+{
+	ListCell   *gl;
+
+	if (node == NULL)
+		return false;
+	if (IsA(node, Const) ||
+		IsA(node, Param))
+		return false;			/* constants are always acceptable */
+
+	if (IsA(node, Aggref))
+	{
+		Aggref	   *agg = (Aggref *) node;
+
+		if ((int) agg->agglevelsup == context->sublevels_up)
+		{
+			/*
+			 * If we find an aggregate call of the original level, do not
+			 * recurse into its normal arguments, ORDER BY arguments, or
+			 * filter; GROUPING exprs of this level are not allowed there. But
+			 * check direct arguments as though they weren't in an aggregate.
+			 */
+			bool		result;
+
+			Assert(!context->in_agg_direct_args);
+			context->in_agg_direct_args = true;
+			result = finalize_grouping_exprs_walker((Node *) agg->aggdirectargs,
+													context);
+			context->in_agg_direct_args = false;
+			return result;
+		}
+
+		/*
+		 * We can skip recursing into aggregates of higher levels altogether,
+		 * since they could not possibly contain exprs of concern to us (see
+		 * transformAggregateCall).  We do need to look at aggregates of lower
+		 * levels, however.
+		 */
+		if ((int) agg->agglevelsup > context->sublevels_up)
+			return false;
+	}
+
+	if (IsA(node, Grouping))
+	{
+		Grouping *grp = (Grouping *) node;
+
+		/*
+		 * We only need to check Grouping nodes at the exact level to which
+		 * they belong, since they cannot mix levels in arguments.
+		 */
+
+		if ((int) grp->agglevelsup == context->sublevels_up)
+		{
+			ListCell   *lc;
+			List	   *ref_list = NIL;
+
+			foreach(lc, (grp->args))
+			{
+				Node   *expr = lfirst(lc);
+				Index	ref = 0;
+
+				if (context->root)
+					expr = flatten_join_alias_vars(context->root, expr);
+
+				/*
+				 * Each expression must match a grouping entry at the current
+				 * query level. Unlike the general expression case, we don't
+				 * allow functional dependencies or outer references.
+				 */
+
+				if (IsA(expr, Var))
+				{
+					Var *var = (Var *) expr;
+
+					if (var->varlevelsup == context->sublevels_up)
+					{
+						foreach(gl, context->groupClauses)
+						{
+							TargetEntry *tle = lfirst(gl);
+							Var	  		*gvar = (Var *) tle->expr;
+
+							if (IsA(gvar, Var) &&
+								gvar->varno == var->varno &&
+								gvar->varattno == var->varattno &&
+								gvar->varlevelsup == 0)
+							{
+								ref = tle->ressortgroupref;
+								break;
+							}
+						}
+					}
+				}
+				else if (context->have_non_var_grouping && context->sublevels_up == 0)
+				{
+					foreach(gl, context->groupClauses)
+					{
+						TargetEntry *tle = lfirst(gl);
+
+						if (equal(expr, tle->expr))
+						{
+							ref = tle->ressortgroupref;
+							break;
+						}
+					}
+				}
+
+				if (ref == 0)
+					ereport(ERROR,
+							(errcode(ERRCODE_GROUPING_ERROR),
+							 errmsg("arguments to GROUPING must be grouping expressions of the associated query level"),
+							 parser_errposition(context->pstate, grp->location)));
+
+				ref_list = lappend_int(ref_list, ref);
+			}
+
+			grp->refs = ref_list;
+		}
+
+		if ((int) grp->agglevelsup > context->sublevels_up)
+			return false;
+	}
+
+	if (IsA(node, Query))
+	{
+		/* Recurse into subselects */
+		bool		result;
+
+		context->sublevels_up++;
+		result = query_tree_walker((Query *) node,
+								   finalize_grouping_exprs_walker,
+								   (void *) context,
+								   0);
+		context->sublevels_up--;
+		return result;
+	}
+	return expression_tree_walker(node, finalize_grouping_exprs_walker,
+								  (void *) context);
+}
+
+
+/*
+ * Given a GroupingSet node, expand it and return a list of lists.
+ *
+ * For EMPTY nodes, return a list of one empty list.
+ *
+ * For SIMPLE nodes, return a list of one list, which is the node content.
+ *
+ * For CUBE and ROLLUP nodes, return a list of the expansions.
+ *
+ * For SETS nodes, recursively expand each element and concatenate the results.
+ */
+static List *
+expand_groupingset_node(GroupingSet *gs)
+{
+	List	   *result = NIL;
+
+	switch (gs->kind)
+	{
+		case GROUPING_SET_EMPTY:
+			result = list_make1(NIL);
+			break;
+
+		case GROUPING_SET_SIMPLE:
+			result = list_make1(gs->content);
+			break;
+
+		case GROUPING_SET_ROLLUP:
+			{
+				List	   *rollup_val = gs->content;
+				ListCell   *lc;
+				int			curgroup_size = list_length(gs->content);
+
+				while (curgroup_size > 0)
+				{
+					List   *current_result = NIL;
+					int		i = curgroup_size;
+
+					foreach(lc, rollup_val)
+					{
+						GroupingSet *gs_current = (GroupingSet *) lfirst(lc);
+
+						Assert(gs_current->kind == GROUPING_SET_SIMPLE);
+
+						current_result = list_concat(current_result,
+													 list_copy(gs_current->content));
+
+						/* If we are done with making the current group, break */
+						if (--i == 0)
+							break;
+					}
+
+					result = lappend(result, current_result);
+					--curgroup_size;
+				}
+
+				result = lappend(result, NIL);
+			}
+			break;
+
+		case GROUPING_SET_CUBE:
+			{
+				List   *cube_list = gs->content;
+				int		number_bits = list_length(cube_list);
+				uint32	num_sets;
+				uint32	i;
+
+				/* parser should cap this much lower */
+				Assert(number_bits < 31);
+
+				num_sets = (1U << number_bits);
+
+				for (i = 0; i < num_sets; i++)
+				{
+					List *current_result = NIL;
+					ListCell *lc;
+					uint32 mask = 1U;
+
+					foreach(lc, cube_list)
+					{
+						GroupingSet *gs_current = (GroupingSet *) lfirst(lc);
+
+						Assert(gs_current->kind == GROUPING_SET_SIMPLE);
+
+						if (mask & i)
+						{
+							current_result = list_concat(current_result,
+														 list_copy(gs_current->content));
+						}
+
+						mask <<= 1;
+					}
+
+					result = lappend(result, current_result);
+				}
+			}
+			break;
+
+		case GROUPING_SET_SETS:
+			{
+				ListCell   *lc;
+
+				foreach(lc, gs->content)
+				{
+					List *current_result = expand_groupingset_node(lfirst(lc));
+
+					result = list_concat(result, current_result);
+				}
+			}
+			break;
+	}
+
+	return result;
+}
+
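(Aside, not part of the patch: the ROLLUP and CUBE cases above reduce to simple list manipulations, which may be easier to follow in a short Python sketch. The helper names `expand_rollup` and `expand_cube` are hypothetical, and each element here stands in for one sublist of the original clause:)

```python
# Illustrative rendering of the ROLLUP and CUBE expansions performed by
# expand_groupingset_node; not the patch's C code.

def expand_rollup(cols):
    # ROLLUP(a, b, c) -> (a, b, c), (a, b), (a), (): every prefix, longest first
    return [cols[:n] for n in range(len(cols), -1, -1)]

def expand_cube(cols):
    # CUBE(a, b) -> all 2^n subsets, enumerated by bitmask as in the C loop
    return [[c for i, c in enumerate(cols) if mask & (1 << i)]
            for mask in range(1 << len(cols))]

print(expand_rollup(["a", "b", "c"]))  # [['a','b','c'], ['a','b'], ['a'], []]
print(expand_cube(["a", "b"]))         # [[], ['a'], ['b'], ['a','b']]
```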
+static int
+cmp_list_len_desc(const void *a, const void *b)
+{
+	int			la = list_length(*(List *const *) a);
+	int			lb = list_length(*(List *const *) b);
+	return (la > lb) ? -1 : (la == lb) ? 0 : 1;
+}
+
+/*
+ * Expand a groupingSets clause to a flat list of grouping sets.
+ *
+ * This is mainly for the planner, but we use it here too to do
+ * some consistency checks.
+ */
+
+List *
+expand_grouping_sets(List *groupingSets)
+{
+	List	   *expanded_groups = NIL;
+	List       *result = NIL;
+	ListCell   *lc;
+
+	if (groupingSets == NIL)
+		return NIL;
+
+	foreach(lc, groupingSets)
+	{
+		List *current_result = NIL;
+		GroupingSet *gs = lfirst(lc);
+
+		current_result = expand_groupingset_node(gs);
+
+		Assert(current_result != NIL);
+
+		expanded_groups = lappend(expanded_groups, current_result);
+	}
+
+	/*
+	 * Do cartesian product between sublists of expanded_groups.
+	 * While at it, remove any duplicate elements from individual
+	 * grouping sets (we must NOT change the number of sets though)
+	 */
+
+	foreach(lc, (List *) linitial(expanded_groups))
+	{
+		result = lappend(result, list_union_int(NIL, (List *) lfirst(lc)));
+	}
+
+	for_each_cell(lc, lnext(list_head(expanded_groups)))
+	{
+		List	   *p = lfirst(lc);
+		List	   *new_result = NIL;
+		ListCell   *lc2;
+
+		foreach(lc2, result)
+		{
+			List	   *q = lfirst(lc2);
+			ListCell   *lc3;
+
+			foreach(lc3, p)
+			{
+				new_result = lappend(new_result, list_union_int(q, (List *) lfirst(lc3)));
+			}
+		}
+		result = new_result;
+	}
+
+	if (list_length(result) > 1)
+	{
+		int		result_len = list_length(result);
+		List  **buf = palloc(sizeof(List*) * result_len);
+		List  **ptr = buf;
+
+		foreach(lc, result)
+		{
+			*ptr++ = lfirst(lc);
+		}
+
+		qsort(buf, result_len, sizeof(List*), cmp_list_len_desc);
+
+		result = NIL;
+		ptr = buf;
+
+		while (result_len-- > 0)
+			result = lappend(result, *ptr++);
+
+		pfree(buf);
+	}
+
+	return result;
+}
+
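(Aside, not part of the patch: the cartesian-product step in `expand_grouping_sets` is the part that turns `GROUP BY a, CUBE(b, c)` into four grouping sets. A minimal Python sketch of that combination, under the same rules as the C code: duplicates are removed within each resulting set but the number of sets is preserved, and the sets are ordered by decreasing size. The helper name `cross_expand` is purely illustrative:)

```python
# Illustrative sketch of how expand_grouping_sets combines the per-item
# expansions; not the patch's C code. item_expansions holds, for each
# GROUP BY item, the list of grouping sets it expands to.

from itertools import product

def cross_expand(item_expansions):
    result = []
    for combo in product(*item_expansions):
        merged = []
        for sublist in combo:
            for col in sublist:
                if col not in merged:   # de-duplicate within one set
                    merged.append(col)
        result.append(merged)
    result.sort(key=len, reverse=True)  # largest set first, like the qsort
    return result

# GROUP BY a, ROLLUP(b): {(a)} x {(b), ()} -> (a, b), (a)
print(cross_expand([[["a"]], [["b"], []]]))  # [['a', 'b'], ['a']]
```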
+/*
  * get_aggregate_argtypes
  *	Identify the specific datatypes passed to an aggregate call.
  *
diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c
index 4931dca..f53e452 100644
--- a/src/backend/parser/parse_clause.c
+++ b/src/backend/parser/parse_clause.c
@@ -1663,40 +1663,160 @@ findTargetlistEntrySQL99(ParseState *pstate, Node *node, List **tlist,
 	return target_result;
 }
 
+
 /*
- * transformGroupClause -
- *	  transform a GROUP BY clause
+ * Flatten out parenthesized sublists in grouping lists, and some cases
+ * of nested grouping sets.
  *
- * GROUP BY items will be added to the targetlist (as resjunk columns)
- * if not already present, so the targetlist must be passed by reference.
+ * Inside a grouping set (ROLLUP, CUBE, or GROUPING SETS), we expect the
+ * content to be nested no more than 2 deep: i.e. ROLLUP((a,b),(c,d)) is
+ * ok, but ROLLUP((a,(b,c)),d) is flattened to ((a,b,c),d), which we then
+ * normalize to ((a,b,c),(d)).
  *
- * This is also used for window PARTITION BY clauses (which act almost the
- * same, but are always interpreted per SQL99 rules).
+ * CUBE or ROLLUP can be nested inside GROUPING SETS (but not the reverse),
+ * and we leave that alone if we find it. But if we see GROUPING SETS inside
+ * GROUPING SETS, we can flatten and normalize as follows:
+ *   GROUPING SETS (a, (b,c), GROUPING SETS ((c,d),(e)), (f,g))
+ * becomes
+ *   GROUPING SETS ((a), (b,c), (c,d), (e), (f,g))
+ *
+ * This is per the spec's syntax transformations, but these are the only such
+ * transformations we do in parse analysis, so that queries retain the
+ * originally specified grouping set syntax for CUBE and ROLLUP as much as
+ * possible when deparsed. (Full expansion of the result into a list of
+ * grouping sets is left to the planner.)
+ *
+ * When we're done, the resulting list should contain only these possible
+ * elements:
+ *   - an expression
+ *   - a CUBE or ROLLUP with a list of expressions nested 2 deep
+ *   - a GROUPING SET containing any of:
+ *      - expression lists
+ *      - empty grouping sets
+ *      - CUBE or ROLLUP nodes with lists nested 2 deep
+ * The result is a new list, but it does not deep-copy the old nodes,
+ * except for GroupingSet nodes.
+ *
+ * As a side effect, flag whether the list has any GroupingSet nodes.
  */
-List *
-transformGroupClause(ParseState *pstate, List *grouplist,
-					 List **targetlist, List *sortClause,
-					 ParseExprKind exprKind, bool useSQL99)
+
+static Node *
+flatten_grouping_sets(Node *expr, bool toplevel, bool *hasGroupingSets)
 {
-	List	   *result = NIL;
-	ListCell   *gl;
+	if (expr == (Node *) NIL)
+		return (Node *) NIL;
 
-	foreach(gl, grouplist)
+	switch (expr->type)
 	{
-		Node	   *gexpr = (Node *) lfirst(gl);
-		TargetEntry *tle;
-		bool		found = false;
+		case T_RowExpr:
+			{
+				RowExpr *r = (RowExpr *) expr;
+				if (r->row_format == COERCE_IMPLICIT_CAST)
+					return flatten_grouping_sets((Node *) r->args,
+												 false, NULL);
+			}
+			break;
+		case T_GroupingSet:
+			{
+				GroupingSet *gset = (GroupingSet *) expr;
+				ListCell   *l2;
+				List	   *result_set = NIL;
 
-		if (useSQL99)
-			tle = findTargetlistEntrySQL99(pstate, gexpr,
-										   targetlist, exprKind);
-		else
-			tle = findTargetlistEntrySQL92(pstate, gexpr,
-										   targetlist, exprKind);
+				if (hasGroupingSets)
+					*hasGroupingSets = true;
 
-		/* Eliminate duplicates (GROUP BY x, x) */
-		if (targetIsInSortList(tle, InvalidOid, result))
-			continue;
+				/*
+				 * At the top level, we skip over all empty grouping sets; the
+				 * caller can supply the canonical GROUP BY () if nothing is left.
+				 */
+
+				if (toplevel && gset->kind == GROUPING_SET_EMPTY)
+					return (Node *) NIL;
+
+				foreach(l2, gset->content)
+				{
+					Node   *n2 = flatten_grouping_sets(lfirst(l2), false, NULL);
+
+					result_set = lappend(result_set, n2);
+				}
+
+				/*
+				 * At top level, keep the grouping set node; but if we're in a nested
+				 * grouping set, then we need to concat the flattened result into the
+				 * outer list if it's simply nested.
+				 */
+
+				if (toplevel || (gset->kind != GROUPING_SET_SETS))
+				{
+					return (Node *) makeGroupingSet(gset->kind, result_set, gset->location);
+				}
+				else
+					return (Node *) result_set;
+			}
+		case T_List:
+			{
+				List	   *result = NIL;
+				ListCell   *l;
+
+				foreach(l, (List *)expr)
+				{
+					Node   *n = flatten_grouping_sets(lfirst(l), toplevel, hasGroupingSets);
+					if (n != (Node *) NIL)
+					{
+						if (IsA(n, List))
+							result = list_concat(result, (List *) n);
+						else
+							result = lappend(result, n);
+					}
+				}
+
+				return (Node *) result;
+			}
+		default:
+			break;
+	}
+
+	return expr;
+}
+
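The normalization that flatten_grouping_sets applies to nested GROUPING SETS can be modeled in isolation. A hedged Python sketch using a toy representation (not the real Node tree): bare expressions become one-element sets, explicit sublists are kept, and inner GROUPING SETS are spliced into the outer list, matching the example in the comment above.

```python
class Sets:
    """Toy marker for a nested GROUPING SETS (...) item."""
    def __init__(self, *content):
        self.content = list(content)

def flatten_sets(items):
    out = []
    for item in items:
        if isinstance(item, Sets):       # GROUPING SETS inside GROUPING SETS
            out.extend(flatten_sets(item.content))
        elif isinstance(item, list):     # an explicit sublist like (b, c)
            out.append(item)
        else:                            # a bare expression a becomes (a)
            out.append([item])
    return out
```

So GROUPING SETS (a, (b,c), GROUPING SETS ((c,d),(e)), (f,g)) normalizes to the five sets (a), (b,c), (c,d), (e), (f,g).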
+static Index
+transformGroupClauseExpr(List **flatresult, Bitmapset *seen_local,
+						 ParseState *pstate, Node *gexpr,
+						 List **targetlist, List *sortClause,
+						 ParseExprKind exprKind, bool useSQL99, bool toplevel)
+{
+	TargetEntry *tle;
+	bool		found = false;
+
+	if (useSQL99)
+		tle = findTargetlistEntrySQL99(pstate, gexpr,
+									   targetlist, exprKind);
+	else
+		tle = findTargetlistEntrySQL92(pstate, gexpr,
+									   targetlist, exprKind);
+
+	if (tle->ressortgroupref > 0)
+	{
+		ListCell   *sl;
+
+		/*
+		 * Eliminate duplicates (GROUP BY x, x) but only at local level.
+		 * (Duplicates in grouping sets can affect the number of returned
+		 * rows, so can't be dropped indiscriminately.)
+		 *
+		 * Since we don't care about anything except the sortgroupref,
+		 * we can use a bitmapset rather than scanning lists.
+		 */
+		if (bms_is_member(tle->ressortgroupref, seen_local))
+			return 0;
+
+		/*
+		 * If we're already in the flat clause list, we don't need
+		 * to consider adding ourselves again.
+		 */
+		found = targetIsInSortList(tle, InvalidOid, *flatresult);
+		if (found)
+			return tle->ressortgroupref;
 
 		/*
 		 * If the GROUP BY tlist entry also appears in ORDER BY, copy operator
@@ -1708,35 +1828,257 @@ transformGroupClause(ParseState *pstate, List *grouplist,
 		 * sort step, and it allows the user to choose the equality semantics
 		 * used by GROUP BY, should she be working with a datatype that has
 		 * more than one equality operator.
+		 *
+		 * If we're in a grouping set, though, we force our requested ordering
+		 * to be NULLS LAST, because if we have any hope of using a sorted agg
+		 * for the job, we're going to be tacking on generated NULL values
+		 * after the corresponding groups. If the user demands nulls first,
+		 * another sort step is going to be inevitable, but that's the
+		 * planner's problem.
 		 */
-		if (tle->ressortgroupref > 0)
+
+		foreach(sl, sortClause)
 		{
-			ListCell   *sl;
+			SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
 
-			foreach(sl, sortClause)
+			if (sc->tleSortGroupRef == tle->ressortgroupref)
 			{
-				SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
+				SortGroupClause *grpc = copyObject(sc);
+				if (!toplevel)
+					grpc->nulls_first = false;
+				*flatresult = lappend(*flatresult, grpc);
+				found = true;
+				break;
+			}
+		}
+	}
 
-				if (sc->tleSortGroupRef == tle->ressortgroupref)
-				{
-					result = lappend(result, copyObject(sc));
-					found = true;
+	/*
+	 * If no match in ORDER BY, just add it to the result using default
+	 * sort/group semantics.
+	 */
+	if (!found)
+		*flatresult = addTargetToGroupList(pstate, tle,
+										   *flatresult, *targetlist,
+										   exprLocation(gexpr),
+										   true);
+
+	/*
+	 * _something_ must have assigned us a sortgroupref by now...
+	 */
+
+	return tle->ressortgroupref;
+}
+
+
+static List *
+transformGroupClauseList(List **flatresult,
+						 ParseState *pstate, List *list,
+						 List **targetlist, List *sortClause,
+						 ParseExprKind exprKind, bool useSQL99, bool toplevel)
+{
+	Bitmapset  *seen_local = NULL;
+	List	   *result = NIL;
+	ListCell   *gl;
+
+	foreach(gl, list)
+	{
+		Node        *gexpr = (Node *) lfirst(gl);
+
+		Index ref = transformGroupClauseExpr(flatresult,
+											 seen_local,
+											 pstate,
+											 gexpr,
+											 targetlist,
+											 sortClause,
+											 exprKind,
+											 useSQL99,
+											 toplevel);
+		if (ref > 0)
+		{
+			seen_local = bms_add_member(seen_local, ref);
+			result = lappend_int(result, ref);
+		}
+	}
+
+	return result;
+}
+
+static Node *
+transformGroupingSet(List **flatresult,
+					 ParseState *pstate, GroupingSet *gset,
+					 List **targetlist, List *sortClause,
+					 ParseExprKind exprKind, bool useSQL99, bool toplevel)
+{
+	ListCell   *gl;
+	List	   *content = NIL;
+
+	Assert(toplevel || gset->kind != GROUPING_SET_SETS);
+
+	foreach(gl, gset->content)
+	{
+		Node   *n = lfirst(gl);
+
+		if (IsA(n, List))
+		{
+			List *l = transformGroupClauseList(flatresult,
+											   pstate, (List *) n,
+											   targetlist, sortClause,
+											   exprKind, useSQL99, false);
+
+			content = lappend(content, makeGroupingSet(GROUPING_SET_SIMPLE,
+													   l,
+													   exprLocation(n)));
+		}
+		else if (IsA(n, GroupingSet))
+		{
+			GroupingSet *gset2 = (GroupingSet *) n;
+
+			content = lappend(content, transformGroupingSet(flatresult,
+															pstate, gset2,
+															targetlist, sortClause,
+															exprKind, useSQL99, false));
+		}
+		else
+		{
+			Index ref = transformGroupClauseExpr(flatresult,
+												 NULL,
+												 pstate,
+												 n,
+												 targetlist,
+												 sortClause,
+												 exprKind,
+												 useSQL99,
+												 false);
+
+			content = lappend(content, makeGroupingSet(GROUPING_SET_SIMPLE,
+													   list_make1_int(ref),
+													   exprLocation(n)));
+		}
+	}
+
+	/* Arbitrarily cap the size of CUBE, which has exponential growth */
+	if (gset->kind == GROUPING_SET_CUBE)
+	{
+		if (list_length(content) > 16)
+			ereport(ERROR,
+					(errcode(ERRCODE_TOO_MANY_COLUMNS),
+					 errmsg("CUBE is limited to 16 elements"),
+					 parser_errposition(pstate, gset->location)));
+	}
+
+	return (Node *) makeGroupingSet(gset->kind, content, gset->location);
+}
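The 16-element cap enforced above exists because CUBE expands to the powerset of its arguments: n elements produce 2**n grouping sets, so even at the cap a single CUBE yields 65536 sets. A small illustrative sketch of that expansion (the ordering here is ours, not necessarily what the planner produces):

```python
from itertools import combinations

def expand_cube(elems):
    # Every subset of elems, from the full grouping down to the
    # empty set () -- 2**len(elems) sets in total.
    n = len(elems)
    return [list(c)
            for r in range(n, -1, -1)
            for c in combinations(elems, r)]
```

For instance, expand_cube(['a', 'b']) yields the four sets (a,b), (a), (b), ().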
+
+
+/*
+ * transformGroupClause -
+ *	  transform a GROUP BY clause
+ *
+ * GROUP BY items will be added to the targetlist (as resjunk columns)
+ * if not already present, so the targetlist must be passed by reference.
+ *
+ * This is also used for window PARTITION BY clauses (which act almost the
+ * same, but are always interpreted per SQL99 rules).
+ *
+ * Grouping sets make this a lot more complex than it was. Our goal here is
+ * twofold: we build a flat list of SortGroupClause nodes referencing each
+ * distinct expression used for grouping, adding those expressions to the
+ * targetlist if needed; and at the same time we build the groupingSets tree,
+ * which stores only ressortgrouprefs as integer lists inside GroupingSet
+ * nodes (possibly nested, but limited in depth: a GROUPING_SET_SETS node can
+ * contain nested SIMPLE, CUBE or ROLLUP nodes, but not further SETS nodes -
+ * those we flatten out - while CUBE and ROLLUP contain only SIMPLE nodes).
+ *
+ * We skip much of the hard work if there are no grouping sets.
+ */
+List *
+transformGroupClause(ParseState *pstate, List *grouplist, List **groupingSets,
+					 List **targetlist, List *sortClause,
+					 ParseExprKind exprKind, bool useSQL99)
+{
+	List	   *result = NIL;
+	List	   *flat_grouplist;
+	List	   *gsets = NIL;
+	ListCell   *gl;
+	bool        hasGroupingSets = false;
+	Bitmapset  *seen_local = NULL;
+
+	/*
+	 * Recursively flatten implicit RowExprs and nested grouping sets.
+	 * (Technically the RowExpr flattening is only needed for GROUP BY,
+	 * per the spec's syntax rules for grouping sets, but we do it anyway.)
+	 */
+	flat_grouplist = (List *) flatten_grouping_sets((Node *) grouplist,
+													true,
+													&hasGroupingSets);
+
+	/*
+	 * If the list is now empty, but hasGroupingSets is true, it's because
+	 * we elided redundant empty grouping sets. Restore a single empty
+	 * grouping set to leave a canonical form: GROUP BY ()
+	 */
+
+	if (flat_grouplist == NIL && hasGroupingSets)
+	{
+		flat_grouplist = list_make1(makeGroupingSet(GROUPING_SET_EMPTY,
+													NIL,
+													exprLocation((Node *) grouplist)));
+	}
+
+	foreach(gl, flat_grouplist)
+	{
+		Node        *gexpr = (Node *) lfirst(gl);
+
+		if (IsA(gexpr, GroupingSet))
+		{
+			GroupingSet *gset = (GroupingSet *) gexpr;
+
+			switch (gset->kind)
+			{
+				case GROUPING_SET_EMPTY:
+					gsets = lappend(gsets, gset);
+					break;
+				case GROUPING_SET_SIMPLE:
+					/* can't happen */
+					Assert(false);
+					break;
+				case GROUPING_SET_SETS:
+				case GROUPING_SET_CUBE:
+				case GROUPING_SET_ROLLUP:
+					gsets = lappend(gsets,
+									transformGroupingSet(&result,
+														 pstate, gset,
+														 targetlist, sortClause,
+														 exprKind, useSQL99, true));
 					break;
-				}
 			}
 		}
+		else
+		{
+			Index ref = transformGroupClauseExpr(&result, seen_local,
+												 pstate, gexpr,
+												 targetlist, sortClause,
+												 exprKind, useSQL99, true);
 
-		/*
-		 * If no match in ORDER BY, just add it to the result using default
-		 * sort/group semantics.
-		 */
-		if (!found)
-			result = addTargetToGroupList(pstate, tle,
-										  result, *targetlist,
-										  exprLocation(gexpr),
-										  true);
+			if (ref > 0)
+			{
+				seen_local = bms_add_member(seen_local, ref);
+				if (hasGroupingSets)
+					gsets = lappend(gsets,
+									makeGroupingSet(GROUPING_SET_SIMPLE,
+													list_make1_int(ref),
+													exprLocation(gexpr)));
+			}
+		}
 	}
 
+	/* parser should prevent this */
+	Assert(gsets == NIL || groupingSets != NULL);
+
+	if (groupingSets)
+		*groupingSets = gsets;
+
 	return result;
 }
 
@@ -1841,6 +2183,7 @@ transformWindowDefinitions(ParseState *pstate,
 										  true /* force SQL99 rules */ );
 		partitionClause = transformGroupClause(pstate,
 											   windef->partitionClause,
+											   NULL,
 											   targetlist,
 											   orderClause,
 											   EXPR_KIND_WINDOW_PARTITION,
diff --git a/src/backend/parser/parse_expr.c b/src/backend/parser/parse_expr.c
index 4a8aaf6..740ae3a 100644
--- a/src/backend/parser/parse_expr.c
+++ b/src/backend/parser/parse_expr.c
@@ -32,6 +32,7 @@
 #include "parser/parse_relation.h"
 #include "parser/parse_target.h"
 #include "parser/parse_type.h"
+#include "parser/parse_agg.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
 #include "utils/xml.h"
@@ -166,6 +167,10 @@ transformExprRecurse(ParseState *pstate, Node *expr)
 										InvalidOid, InvalidOid, -1);
 			break;
 
+		case T_Grouping:
+			result = transformGroupingExpr(pstate, (Grouping *) expr);
+			break;
+
 		case T_TypeCast:
 			{
 				TypeCast   *tc = (TypeCast *) expr;
@@ -1483,6 +1488,8 @@ transformCaseExpr(ParseState *pstate, CaseExpr *c)
 	return (Node *) newc;
 }
 
+
+
 static Node *
 transformSubLink(ParseState *pstate, SubLink *sublink)
 {
diff --git a/src/backend/parser/parse_target.c b/src/backend/parser/parse_target.c
index 328e0c6..1e48346 100644
--- a/src/backend/parser/parse_target.c
+++ b/src/backend/parser/parse_target.c
@@ -1628,6 +1628,9 @@ FigureColnameInternal(Node *node, char **name)
 				}
 			}
 			break;
+		case T_Grouping:
+			*name = "grouping";
+			return 2;
 		case T_A_Indirection:
 			{
 				A_Indirection *ind = (A_Indirection *) node;
diff --git a/src/backend/rewrite/rewriteHandler.c b/src/backend/rewrite/rewriteHandler.c
index e6c5530..5c4e201 100644
--- a/src/backend/rewrite/rewriteHandler.c
+++ b/src/backend/rewrite/rewriteHandler.c
@@ -2063,7 +2063,7 @@ view_query_is_auto_updatable(Query *viewquery, bool check_cols)
 	if (viewquery->distinctClause != NIL)
 		return gettext_noop("Views containing DISTINCT are not automatically updatable.");
 
-	if (viewquery->groupClause != NIL)
+	if (viewquery->groupClause != NIL || viewquery->groupingSets)
 		return gettext_noop("Views containing GROUP BY are not automatically updatable.");
 
 	if (viewquery->havingQual != NULL)
diff --git a/src/backend/rewrite/rewriteManip.c b/src/backend/rewrite/rewriteManip.c
index fb20314..dd939cd 100644
--- a/src/backend/rewrite/rewriteManip.c
+++ b/src/backend/rewrite/rewriteManip.c
@@ -104,6 +104,12 @@ contain_aggs_of_level_walker(Node *node,
 		context->sublevels_up--;
 		return result;
 	}
+	if (IsA(node, Grouping))
+	{
+		if (((Grouping *) node)->agglevelsup == context->sublevels_up)
+			return true;
+	}
+
 	return expression_tree_walker(node, contain_aggs_of_level_walker,
 								  (void *) context);
 }
@@ -169,6 +175,16 @@ locate_agg_of_level_walker(Node *node,
 		context->sublevels_up--;
 		return result;
 	}
+	if (IsA(node, Grouping))
+	{
+		if (((Grouping *) node)->agglevelsup == context->sublevels_up &&
+			((Grouping *) node)->location >= 0)
+		{
+			context->agg_location = ((Grouping *) node)->location;
+			return true;		/* abort the tree traversal and return true */
+		}
+	}
+
 	return expression_tree_walker(node, locate_agg_of_level_walker,
 								  (void *) context);
 }
@@ -705,6 +721,14 @@ IncrementVarSublevelsUp_walker(Node *node,
 			agg->agglevelsup += context->delta_sublevels_up;
 		/* fall through to recurse into argument */
 	}
+	if (IsA(node, Grouping))
+	{
+		Grouping	   *grp = (Grouping *) node;
+
+		if (grp->agglevelsup >= context->min_sublevels_up)
+			grp->agglevelsup += context->delta_sublevels_up;
+		/* fall through to recurse into argument */
+	}
 	if (IsA(node, PlaceHolderVar))
 	{
 		PlaceHolderVar *phv = (PlaceHolderVar *) node;
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 7237e5d..a598470 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -360,9 +360,11 @@ static void get_target_list(List *targetList, deparse_context *context,
 static void get_setop_query(Node *setOp, Query *query,
 				deparse_context *context,
 				TupleDesc resultDesc);
-static Node *get_rule_sortgroupclause(SortGroupClause *srt, List *tlist,
+static Node *get_rule_sortgroupclause(Index ref, List *tlist,
 						 bool force_colno,
 						 deparse_context *context);
+static void get_rule_groupingset(GroupingSet *gset, List *targetlist,
+								 bool omit_parens, deparse_context *context);
 static void get_rule_orderby(List *orderList, List *targetList,
 				 bool force_colno, deparse_context *context);
 static void get_rule_windowclause(Query *query, deparse_context *context);
@@ -4535,7 +4537,7 @@ get_basic_select_query(Query *query, deparse_context *context,
 				SortGroupClause *srt = (SortGroupClause *) lfirst(l);
 
 				appendStringInfoString(buf, sep);
-				get_rule_sortgroupclause(srt, query->targetList,
+				get_rule_sortgroupclause(srt->tleSortGroupRef, query->targetList,
 										 false, context);
 				sep = ", ";
 			}
@@ -4560,19 +4562,35 @@ get_basic_select_query(Query *query, deparse_context *context,
 	}
 
 	/* Add the GROUP BY clause if given */
-	if (query->groupClause != NULL)
+	if (query->groupClause != NULL || query->groupingSets != NULL)
 	{
 		appendContextKeyword(context, " GROUP BY ",
 							 -PRETTYINDENT_STD, PRETTYINDENT_STD, 1);
-		sep = "";
-		foreach(l, query->groupClause)
+
+		if (query->groupingSets == NIL)
 		{
-			SortGroupClause *grp = (SortGroupClause *) lfirst(l);
+			sep = "";
+			foreach(l, query->groupClause)
+			{
+				SortGroupClause *grp = (SortGroupClause *) lfirst(l);
 
-			appendStringInfoString(buf, sep);
-			get_rule_sortgroupclause(grp, query->targetList,
-									 false, context);
-			sep = ", ";
+				appendStringInfoString(buf, sep);
+				get_rule_sortgroupclause(grp->tleSortGroupRef, query->targetList,
+										 false, context);
+				sep = ", ";
+			}
+		}
+		else
+		{
+			sep = "";
+			foreach(l, query->groupingSets)
+			{
+				GroupingSet *grp = lfirst(l);
+
+				appendStringInfoString(buf, sep);
+				get_rule_groupingset(grp, query->targetList, true, context);
+				sep = ", ";
+			}
 		}
 	}
 
@@ -4640,7 +4658,7 @@ get_target_list(List *targetList, deparse_context *context,
 		 * different from a whole-row Var).  We need to call get_variable
 		 * directly so that we can tell it to do the right thing.
 		 */
-		if (tle->expr && IsA(tle->expr, Var))
+		if (tle->expr && (IsA(tle->expr, Var) || IsA(tle->expr, GroupedVar)))
 		{
 			attname = get_variable((Var *) tle->expr, 0, true, context);
 		}
@@ -4859,14 +4877,14 @@ get_setop_query(Node *setOp, Query *query, deparse_context *context,
  * Also returns the expression tree, so caller need not find it again.
  */
 static Node *
-get_rule_sortgroupclause(SortGroupClause *srt, List *tlist, bool force_colno,
+get_rule_sortgroupclause(Index ref, List *tlist, bool force_colno,
 						 deparse_context *context)
 {
 	StringInfo	buf = context->buf;
 	TargetEntry *tle;
 	Node	   *expr;
 
-	tle = get_sortgroupclause_tle(srt, tlist);
+	tle = get_sortgroupref_tle(ref, tlist);
 	expr = (Node *) tle->expr;
 
 	/*
@@ -4891,6 +4909,66 @@ get_rule_sortgroupclause(SortGroupClause *srt, List *tlist, bool force_colno,
 }
 
 /*
+ * Display a GroupingSet
+ */
+static void
+get_rule_groupingset(GroupingSet *gset, List *targetlist,
+					 bool omit_parens, deparse_context *context)
+{
+	ListCell   *l;
+	StringInfo	buf = context->buf;
+	bool		omit_child_parens = true;
+	char	   *sep = "";
+
+	switch (gset->kind)
+	{
+		case GROUPING_SET_EMPTY:
+			appendStringInfoString(buf, "()");
+			return;
+
+		case GROUPING_SET_SIMPLE:
+			{
+				if (!omit_parens || list_length(gset->content) != 1)
+					appendStringInfoString(buf, "(");
+
+				foreach(l, gset->content)
+				{
+					Index ref = lfirst_int(l);
+
+					appendStringInfoString(buf, sep);
+					get_rule_sortgroupclause(ref, targetlist,
+											 false, context);
+					sep = ", ";
+				}
+
+				if (!omit_parens || list_length(gset->content) != 1)
+					appendStringInfoString(buf, ")");
+			}
+			return;
+
+		case GROUPING_SET_ROLLUP:
+			appendStringInfoString(buf, "ROLLUP(");
+			break;
+		case GROUPING_SET_CUBE:
+			appendStringInfoString(buf, "CUBE(");
+			break;
+		case GROUPING_SET_SETS:
+			appendStringInfoString(buf, "GROUPING SETS (");
+			omit_child_parens = false;
+			break;
+	}
+
+	foreach(l, gset->content)
+	{
+		appendStringInfoString(buf, sep);
+		get_rule_groupingset(lfirst(l), targetlist, omit_child_parens, context);
+		sep = ", ";
+	}
+
+	appendStringInfoString(buf, ")");
+}
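The deparse rules in get_rule_groupingset above reduce to: empty sets print as (), one-element simple sets drop their parens except directly inside GROUPING SETS (...), and ROLLUP/CUBE/GROUPING SETS recurse over their children. A stand-alone sketch using a toy (kind, content) tuple representation (the names here are ours):

```python
def deparse(kind, content, omit_parens=True):
    # content is a list of column names for SIMPLE, or a list of
    # (kind, content) children for ROLLUP/CUBE/SETS.
    if kind == 'EMPTY':
        return '()'
    if kind == 'SIMPLE':
        body = ', '.join(content)
        # Parens around a one-element set are dropped unless the
        # parent is GROUPING SETS, which keeps every child's parens.
        if omit_parens and len(content) == 1:
            return body
        return '(' + body + ')'
    keyword = {'ROLLUP': 'ROLLUP(', 'CUBE': 'CUBE(',
               'SETS': 'GROUPING SETS ('}[kind]
    child_omit = kind != 'SETS'
    return keyword + ', '.join(
        deparse(k, c, child_omit) for (k, c) in content) + ')'
```

For instance, a ROLLUP over (a) and (b, c) renders as ROLLUP(a, (b, c)), while the same children under GROUPING SETS keep their parens.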
+
+/*
  * Display an ORDER BY list.
  */
 static void
@@ -4910,7 +4988,7 @@ get_rule_orderby(List *orderList, List *targetList,
 		TypeCacheEntry *typentry;
 
 		appendStringInfoString(buf, sep);
-		sortexpr = get_rule_sortgroupclause(srt, targetList,
+		sortexpr = get_rule_sortgroupclause(srt->tleSortGroupRef, targetList,
 											force_colno, context);
 		sortcoltype = exprType(sortexpr);
 		/* See whether operator is default < or > for datatype */
@@ -5010,7 +5088,7 @@ get_rule_windowspec(WindowClause *wc, List *targetList,
 			SortGroupClause *grp = (SortGroupClause *) lfirst(l);
 
 			appendStringInfoString(buf, sep);
-			get_rule_sortgroupclause(grp, targetList,
+			get_rule_sortgroupclause(grp->tleSortGroupRef, targetList,
 									 false, context);
 			sep = ", ";
 		}
@@ -6684,6 +6762,10 @@ get_rule_expr(Node *node, deparse_context *context,
 			(void) get_variable((Var *) node, 0, false, context);
 			break;
 
+		case T_GroupedVar:
+			(void) get_variable((Var *) node, 0, false, context);
+			break;
+
 		case T_Const:
 			get_const_expr((Const *) node, context, 0);
 			break;
@@ -7580,6 +7662,16 @@ get_rule_expr(Node *node, deparse_context *context,
 			}
 			break;
 
+		case T_Grouping:
+			{
+				Grouping *gexpr = (Grouping *) node;
+
+				appendStringInfoString(buf, "GROUPING(");
+				get_rule_expr((Node *) gexpr->args, context, true);
+				appendStringInfoChar(buf, ')');
+			}
+			break;
+
 		case T_List:
 			{
 				char	   *sep;
diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index e932ccf..c769e83 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -3158,6 +3158,8 @@ add_unique_group_var(PlannerInfo *root, List *varinfos,
  *	groupExprs - list of expressions being grouped by
  *	input_rows - number of rows estimated to arrive at the group/unique
  *		filter step
+ *	pgset - NULL, or a List** pointing to a grouping set to filter the
+ *		groupExprs against
  *
  * Given the lack of any cross-correlation statistics in the system, it's
  * impossible to do anything really trustworthy with GROUP BY conditions
@@ -3205,11 +3207,13 @@ add_unique_group_var(PlannerInfo *root, List *varinfos,
  * but we don't have the info to do better).
  */
 double
-estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
+estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows,
+					List **pgset)
 {
 	List	   *varinfos = NIL;
 	double		numdistinct;
 	ListCell   *l;
+	int			i;
 
 	/*
 	 * We don't ever want to return an estimate of zero groups, as that tends
@@ -3224,7 +3228,7 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 	 * for normal cases with GROUP BY or DISTINCT, but it is possible for
 	 * corner cases with set operations.)
 	 */
-	if (groupExprs == NIL)
+	if (groupExprs == NIL || (pgset && list_length(*pgset) < 1))
 		return 1.0;
 
 	/*
@@ -3236,6 +3240,7 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 	 */
 	numdistinct = 1.0;
 
+	i = 0;
 	foreach(l, groupExprs)
 	{
 		Node	   *groupexpr = (Node *) lfirst(l);
@@ -3243,6 +3248,10 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 		List	   *varshere;
 		ListCell   *l2;
 
+		/* is expression in this grouping set? */
+		if (pgset && !list_member_int(*pgset, i++))
+			continue;
+
 		/* Short-circuit for expressions returning boolean */
 		if (exprType(groupexpr) == BOOLOID)
 		{
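The pgset parameter threaded through estimate_num_groups above acts as a positional filter: only expressions whose index appears in the grouping set contribute to the estimate, and an empty set means exactly one group. A minimal model of that filtering step (the helper name is hypothetical):

```python
def select_grouped_exprs(group_exprs, pgset=None):
    # No grouping expressions, or an explicitly empty grouping set:
    # the caller treats this as "exactly one group".
    if not group_exprs or (pgset is not None and len(pgset) < 1):
        return []
    # No grouping set supplied: estimate over all expressions.
    if pgset is None:
        return list(group_exprs)
    # Keep only the expressions whose position is in the grouping set.
    return [e for i, e in enumerate(group_exprs) if i in pgset]
```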
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index b271f21..ee1fe74 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -130,6 +130,8 @@ typedef struct ExprContext
 	Datum	   *ecxt_aggvalues; /* precomputed values for aggs/windowfuncs */
 	bool	   *ecxt_aggnulls;	/* null flags for aggs/windowfuncs */
 
+	Bitmapset  *grouped_cols;   /* which columns exist in current grouping set */
+
 	/* Value to substitute for CaseTestExpr nodes in expression */
 	Datum		caseValue_datum;
 	bool		caseValue_isNull;
@@ -911,6 +913,16 @@ typedef struct MinMaxExprState
 } MinMaxExprState;
 
 /* ----------------
+ *		GroupingState node
+ * ----------------
+ */
+typedef struct GroupingState
+{
+	ExprState	xprstate;
+	List        *clauses;
+} GroupingState;
+
+/* ----------------
  *		XmlExprState node
  * ----------------
  */
@@ -1701,19 +1713,26 @@ typedef struct GroupState
 /* these structs are private in nodeAgg.c: */
 typedef struct AggStatePerAggData *AggStatePerAgg;
 typedef struct AggStatePerGroupData *AggStatePerGroup;
+typedef struct AggStatePerGroupingSetData *AggStatePerGroupingSet;
 
 typedef struct AggState
 {
 	ScanState	ss;				/* its first field is NodeTag */
 	List	   *aggs;			/* all Aggref nodes in targetlist & quals */
 	int			numaggs;		/* length of list (could be zero!) */
+	int			numsets;		/* number of grouping sets (or 0) */
 	FmgrInfo   *eqfunctions;	/* per-grouping-field equality fns */
 	FmgrInfo   *hashfunctions;	/* per-grouping-field hash fns */
 	AggStatePerAgg peragg;		/* per-Aggref information */
-	MemoryContext aggcontext;	/* memory context for long-lived data */
+	ExprContext **aggcontext;	/* econtexts for long-lived data */
 	ExprContext *tmpcontext;	/* econtext for input expressions */
 	AggStatePerAgg curperagg;	/* identifies currently active aggregate */
+	bool        input_done;     /* indicates end of input */
 	bool		agg_done;		/* indicates completion of Agg scan */
+	int			projected_set;	/* the last grouping set projected */
+	int			current_set;	/* the grouping set now being evaluated */
+	Bitmapset **grouped_cols;   /* column groupings for rollup */
+	int        *gset_lengths;	/* lengths of grouping sets */
 	/* these fields are used in AGG_PLAIN and AGG_SORTED modes: */
 	AggStatePerGroup pergroup;	/* per-Aggref-per-group working state */
 	HeapTuple	grp_firstTuple; /* copy of first tuple of current group */
diff --git a/src/include/nodes/makefuncs.h b/src/include/nodes/makefuncs.h
index e108b85..bd3b2a5 100644
--- a/src/include/nodes/makefuncs.h
+++ b/src/include/nodes/makefuncs.h
@@ -81,4 +81,6 @@ extern DefElem *makeDefElem(char *name, Node *arg);
 extern DefElem *makeDefElemExtended(char *nameSpace, char *name, Node *arg,
 					DefElemAction defaction);
 
+extern GroupingSet *makeGroupingSet(GroupingSetKind kind, List *content, int location);
+
 #endif   /* MAKEFUNC_H */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 067c768..a753809 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -115,6 +115,7 @@ typedef enum NodeTag
 	T_SortState,
 	T_GroupState,
 	T_AggState,
+	T_GroupingState,
 	T_WindowAggState,
 	T_UniqueState,
 	T_HashState,
@@ -171,6 +172,9 @@ typedef enum NodeTag
 	T_JoinExpr,
 	T_FromExpr,
 	T_IntoClause,
+	T_GroupedVar,
+	T_Grouping,
+	T_GroupingSet,
 
 	/*
 	 * TAGS FOR EXPRESSION STATE NODES (execnodes.h)
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 8364bef..da33155 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -134,6 +134,8 @@ typedef struct Query
 
 	List	   *groupClause;	/* a list of SortGroupClause's */
 
+	List	   *groupingSets;	/* a list of grouping sets if present */
+
 	Node	   *havingQual;		/* qualifications applied to groups */
 
 	List	   *windowClause;	/* a list of WindowClause's */
diff --git a/src/include/nodes/pg_list.h b/src/include/nodes/pg_list.h
index c545115..45eacda 100644
--- a/src/include/nodes/pg_list.h
+++ b/src/include/nodes/pg_list.h
@@ -229,8 +229,9 @@ extern List *list_union_int(const List *list1, const List *list2);
 extern List *list_union_oid(const List *list1, const List *list2);
 
 extern List *list_intersection(const List *list1, const List *list2);
+extern List *list_intersection_int(const List *list1, const List *list2);
 
-/* currently, there's no need for list_intersection_int etc */
+/* currently, there's no need for list_intersection_ptr etc */
 
 extern List *list_difference(const List *list1, const List *list2);
 extern List *list_difference_ptr(const List *list1, const List *list2);
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index 3b9c683..077ae9f 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -631,6 +631,7 @@ typedef struct Agg
 	AttrNumber *grpColIdx;		/* their indexes in the target list */
 	Oid		   *grpOperators;	/* equality operators to compare with */
 	long		numGroups;		/* estimated number of groups in input */
+	List	   *groupingSets;	/* grouping sets to use */
 } Agg;
 
 /* ----------------
diff --git a/src/include/nodes/primnodes.h b/src/include/nodes/primnodes.h
index 6d9f3d9..4dd775e 100644
--- a/src/include/nodes/primnodes.h
+++ b/src/include/nodes/primnodes.h
@@ -159,6 +159,26 @@ typedef struct Var
 	int			location;		/* token location, or -1 if unknown */
 } Var;
 
+/* GroupedVar - expression node representing a grouping-set variable.
+ * It is structurally identical to Var; it is the logical representation
+ * of a grouping set column, and is also used when projecting rows
+ * during execution of a query that has grouping sets.
+ */
+
+typedef Var GroupedVar;
+
+/*
+ * Grouping
+ */
+typedef struct Grouping
+{
+	Expr xpr;
+	List *args;
+	List *refs;
+	int location;
+	int agglevelsup;
+} Grouping;
+
 /*
  * Const
  */
@@ -1147,6 +1167,32 @@ typedef struct CurrentOfExpr
 	int			cursor_param;	/* refcursor parameter number, or 0 */
 } CurrentOfExpr;
 
+/*
+ * Node representing substructure in GROUPING SETS
+ *
+ * This is not actually executable, but it's used in the raw parsetree
+ * representation of GROUP BY, and in the groupingSets field of Query, to
+ * preserve the original structure of rollup/cube clauses for readability
+ * rather than reducing everything to grouping sets.
+ */
+
+typedef enum
+{
+	GROUPING_SET_EMPTY,
+	GROUPING_SET_SIMPLE,
+	GROUPING_SET_ROLLUP,
+	GROUPING_SET_CUBE,
+	GROUPING_SET_SETS
+} GroupingSetKind;
+
+typedef struct GroupingSet
+{
+	Expr		xpr;
+	GroupingSetKind kind;
+	List	   *content;
+	int			location;
+} GroupingSet;
+
 /*--------------------
  * TargetEntry -
  *	   a target entry (used in query target lists)
diff --git a/src/include/optimizer/planmain.h b/src/include/optimizer/planmain.h
index 4504250..64f3aa3 100644
--- a/src/include/optimizer/planmain.h
+++ b/src/include/optimizer/planmain.h
@@ -58,6 +58,7 @@ extern Sort *make_sort_from_groupcols(PlannerInfo *root, List *groupcls,
 extern Agg *make_agg(PlannerInfo *root, List *tlist, List *qual,
 		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
 		 int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators,
+		 List *groupingSets,
 		 long numGroups,
 		 Plan *lefttree);
 extern WindowAgg *make_windowagg(PlannerInfo *root, List *tlist,
diff --git a/src/include/optimizer/tlist.h b/src/include/optimizer/tlist.h
index 1ebb635..c8b1c93 100644
--- a/src/include/optimizer/tlist.h
+++ b/src/include/optimizer/tlist.h
@@ -43,6 +43,9 @@ extern Node *get_sortgroupclause_expr(SortGroupClause *sgClause,
 extern List *get_sortgrouplist_exprs(List *sgClauses,
 						List *targetList);
 
+extern SortGroupClause *get_sortgroupref_clause(Index sortref,
+					 List *clauses);
+
 extern Oid *extract_grouping_ops(List *groupClause);
 extern AttrNumber *extract_grouping_cols(List *groupClause, List *tlist);
 extern bool grouping_is_sortable(List *groupClause);
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index b52e507..98dcea7 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -98,6 +98,7 @@ PG_KEYWORD("cost", COST, UNRESERVED_KEYWORD)
 PG_KEYWORD("create", CREATE, RESERVED_KEYWORD)
 PG_KEYWORD("cross", CROSS, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("csv", CSV, UNRESERVED_KEYWORD)
+PG_KEYWORD("cube", CUBE, COL_NAME_KEYWORD)
 PG_KEYWORD("current", CURRENT_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("current_catalog", CURRENT_CATALOG, RESERVED_KEYWORD)
 PG_KEYWORD("current_date", CURRENT_DATE, RESERVED_KEYWORD)
@@ -173,6 +174,7 @@ PG_KEYWORD("grant", GRANT, RESERVED_KEYWORD)
 PG_KEYWORD("granted", GRANTED, UNRESERVED_KEYWORD)
 PG_KEYWORD("greatest", GREATEST, COL_NAME_KEYWORD)
 PG_KEYWORD("group", GROUP_P, RESERVED_KEYWORD)
+PG_KEYWORD("grouping", GROUPING, RESERVED_KEYWORD)
 PG_KEYWORD("handler", HANDLER, UNRESERVED_KEYWORD)
 PG_KEYWORD("having", HAVING, RESERVED_KEYWORD)
 PG_KEYWORD("header", HEADER_P, UNRESERVED_KEYWORD)
@@ -321,6 +323,7 @@ PG_KEYWORD("revoke", REVOKE, UNRESERVED_KEYWORD)
 PG_KEYWORD("right", RIGHT, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("role", ROLE, UNRESERVED_KEYWORD)
 PG_KEYWORD("rollback", ROLLBACK, UNRESERVED_KEYWORD)
+PG_KEYWORD("rollup", ROLLUP, COL_NAME_KEYWORD)
 PG_KEYWORD("row", ROW, COL_NAME_KEYWORD)
 PG_KEYWORD("rows", ROWS, UNRESERVED_KEYWORD)
 PG_KEYWORD("rule", RULE, UNRESERVED_KEYWORD)
@@ -339,6 +342,7 @@ PG_KEYWORD("session", SESSION, UNRESERVED_KEYWORD)
 PG_KEYWORD("session_user", SESSION_USER, RESERVED_KEYWORD)
 PG_KEYWORD("set", SET, UNRESERVED_KEYWORD)
 PG_KEYWORD("setof", SETOF, COL_NAME_KEYWORD)
+PG_KEYWORD("sets", SETS, UNRESERVED_KEYWORD)
 PG_KEYWORD("share", SHARE, UNRESERVED_KEYWORD)
 PG_KEYWORD("show", SHOW, UNRESERVED_KEYWORD)
 PG_KEYWORD("similar", SIMILAR, TYPE_FUNC_NAME_KEYWORD)
diff --git a/src/include/parser/parse_agg.h b/src/include/parser/parse_agg.h
index 3f55ec7..711755b 100644
--- a/src/include/parser/parse_agg.h
+++ b/src/include/parser/parse_agg.h
@@ -18,11 +18,16 @@
 extern void transformAggregateCall(ParseState *pstate, Aggref *agg,
 					   List *args, List *aggorder,
 					   bool agg_distinct);
+
+extern Node *transformGroupingExpr(ParseState *pstate, Grouping *g);
+
 extern void transformWindowFuncCall(ParseState *pstate, WindowFunc *wfunc,
 						WindowDef *windef);
 
 extern void parseCheckAggregates(ParseState *pstate, Query *qry);
 
+extern List *expand_grouping_sets(List *groupingSets);
+
 extern int	get_aggregate_argtypes(Aggref *aggref, Oid *inputTypes);
 
 extern Oid resolve_aggregate_transtype(Oid aggfuncid,
diff --git a/src/include/parser/parse_clause.h b/src/include/parser/parse_clause.h
index e9e7cdc..58d88f0 100644
--- a/src/include/parser/parse_clause.h
+++ b/src/include/parser/parse_clause.h
@@ -27,6 +27,7 @@ extern Node *transformWhereClause(ParseState *pstate, Node *clause,
 extern Node *transformLimitClause(ParseState *pstate, Node *clause,
 					 ParseExprKind exprKind, const char *constructName);
 extern List *transformGroupClause(ParseState *pstate, List *grouplist,
+								  List **groupingSets,
 					 List **targetlist, List *sortClause,
 					 ParseExprKind exprKind, bool useSQL99);
 extern List *transformSortClause(ParseState *pstate, List *orderlist,
diff --git a/src/include/utils/selfuncs.h b/src/include/utils/selfuncs.h
index 0f662ec..9d9c9b3 100644
--- a/src/include/utils/selfuncs.h
+++ b/src/include/utils/selfuncs.h
@@ -185,7 +185,7 @@ extern void mergejoinscansel(PlannerInfo *root, Node *clause,
 				 Selectivity *rightstart, Selectivity *rightend);
 
 extern double estimate_num_groups(PlannerInfo *root, List *groupExprs,
-					double input_rows);
+								  double input_rows, List **pgset);
 
 extern Selectivity estimate_hash_bucketsize(PlannerInfo *root, Node *hashkey,
 						 double nbuckets);
diff --git a/src/test/regress/expected/groupingsets.out b/src/test/regress/expected/groupingsets.out
new file mode 100644
index 0000000..bfbceb8
--- /dev/null
+++ b/src/test/regress/expected/groupingsets.out
@@ -0,0 +1,265 @@
+select a, b from (values (1,2)) v(a,b) group by rollup (a,b);
+ a | b 
+---+---
+ 1 | 2
+ 1 |  
+   |  
+(3 rows)
+
+select a, sum(b) from (values (1,10),(1,20),(2,40)) v(a,b) group by rollup (a);
+ a | sum 
+---+-----
+ 1 |  30
+ 2 |  40
+   |  70
+(3 rows)
+
+select a, b, sum(c) from (values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),(2,3,15),(3,3,16),(3,4,17),(4,1,18),(4,1,19)) v(a,b,c) group by rollup (a,b); 
+ a | b | sum 
+---+---+-----
+ 1 | 1 |  21
+ 1 | 2 |  25
+ 1 | 3 |  14
+ 1 |   |  60
+ 2 | 3 |  15
+ 2 |   |  15
+ 3 | 3 |  16
+ 3 | 4 |  17
+ 3 |   |  33
+ 4 | 1 |  37
+ 4 |   |  37
+   |   | 145
+(12 rows)
+
+select (select grouping(a,b) from (values (1)) v2(b)) from (values (1)) v1(a) group by a;
+ERROR:  Arguments to GROUPING must be grouping expressions of the associated query level
+LINE 1: select (select grouping(a,b) from (values (1)) v2(b)) from (...
+                       ^
+select grouping(p), percentile_disc(p) within group (order by x::float8), array_agg(p)
+from generate_series(1,5) x,
+     (values (0::float8),(0.1),(0.25),(0.4),(0.5),(0.6),(0.75),(0.9),(1)) v(p)
+group by rollup (p) order by p;
+ grouping | percentile_disc |                                                                                  array_agg                                                                                  
+----------+-----------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+        0 |               1 | {0,0,0,0,0}
+        0 |               1 | {0.1,0.1,0.1,0.1,0.1}
+        0 |               2 | {0.25,0.25,0.25,0.25,0.25}
+        0 |               2 | {0.4,0.4,0.4,0.4,0.4}
+        0 |               3 | {0.5,0.5,0.5,0.5,0.5}
+        0 |               3 | {0.6,0.6,0.6,0.6,0.6}
+        0 |               4 | {0.75,0.75,0.75,0.75,0.75}
+        0 |               5 | {0.9,0.9,0.9,0.9,0.9}
+        0 |               5 | {1,1,1,1,1}
+        1 |               5 | {0,0,0,0,0,0.1,0.1,0.1,0.1,0.1,0.25,0.25,0.25,0.25,0.25,0.4,0.4,0.4,0.4,0.4,0.5,0.5,0.5,0.5,0.5,0.6,0.6,0.6,0.6,0.6,0.75,0.75,0.75,0.75,0.75,0.9,0.9,0.9,0.9,0.9,1,1,1,1,1}
+(10 rows)
+
+select a, array_agg(b) from (values (1,10),(1,20),(2,40)) v(a,b) group by rollup (a);
+ a | array_agg  
+---+------------
+ 1 | {10,20}
+ 2 | {40}
+   | {10,20,40}
+(3 rows)
+
+select grouping(a), array_agg(b) from (values (1,10),(1,20),(2,40)) v(a,b) group by rollup (a);
+ grouping | array_agg  
+----------+------------
+        0 | {10,20}
+        0 | {40}
+        1 | {10,20,40}
+(3 rows)
+
+select a, sum(b) from aggtest v(a,b) group by rollup (a);
+  a  |   sum   
+-----+---------
+   0 | 0.09561
+  42 |  324.78
+  56 |     7.8
+ 100 |  99.097
+     | 431.773
+(5 rows)
+
+select grouping(a), sum(b) from aggtest v(a,b) group by rollup (a);
+ grouping |   sum   
+----------+---------
+        0 | 0.09561
+        0 |  324.78
+        0 |     7.8
+        0 |  99.097
+        1 | 431.773
+(5 rows)
+
+select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by a,b;
+ grouping 
+----------
+        0
+(1 row)
+
+SELECT four, ten, SUM(SUM(four)) OVER (PARTITION BY four), AVG(ten) FROM tenk1
+GROUP BY ROLLUP(four, ten) ORDER BY four, ten;
+ four | ten |  sum  |          avg           
+------+-----+-------+------------------------
+    0 |   0 |     0 | 0.00000000000000000000
+    0 |   2 |     0 |     2.0000000000000000
+    0 |   4 |     0 |     4.0000000000000000
+    0 |   6 |     0 |     6.0000000000000000
+    0 |   8 |     0 |     8.0000000000000000
+    0 |     |     0 |     4.0000000000000000
+    1 |   1 |  5000 | 1.00000000000000000000
+    1 |   3 |  5000 |     3.0000000000000000
+    1 |   5 |  5000 |     5.0000000000000000
+    1 |   7 |  5000 |     7.0000000000000000
+    1 |   9 |  5000 |     9.0000000000000000
+    1 |     |  5000 |     5.0000000000000000
+    2 |   0 | 10000 | 0.00000000000000000000
+    2 |   2 | 10000 |     2.0000000000000000
+    2 |   4 | 10000 |     4.0000000000000000
+    2 |   6 | 10000 |     6.0000000000000000
+    2 |   8 | 10000 |     8.0000000000000000
+    2 |     | 10000 |     4.0000000000000000
+    3 |   1 | 15000 | 1.00000000000000000000
+    3 |   3 | 15000 |     3.0000000000000000
+    3 |   5 | 15000 |     5.0000000000000000
+    3 |   7 | 15000 |     7.0000000000000000
+    3 |   9 | 15000 |     9.0000000000000000
+    3 |     | 15000 |     5.0000000000000000
+      |     | 15000 |     4.5000000000000000
+(25 rows)
+
+select a, b from (values (1,2),(2,3)) v(a,b) group by grouping sets((a,b),());
+ a | b 
+---+---
+ 1 | 2
+ 2 | 3
+   |  
+(3 rows)
+
+select a, b from (values (1,2),(2,3)) v(a,b) group by rollup((a,b));
+ a | b 
+---+---
+ 1 | 2
+ 2 | 3
+   |  
+(3 rows)
+
+select a, b, sum(c) from (values (1,1,10,5),(1,1,11,5),(1,2,12,5),(1,2,13,5),(1,3,14,5),(2,3,15,5),(3,3,16,5),(3,4,17,5),(4,1,18,5),(4,1,19,5)) v(a,b,c,d) group by rollup ((a,b));
+ a | b | sum 
+---+---+-----
+ 1 | 1 |  21
+ 1 | 2 |  25
+ 1 | 3 |  14
+ 2 | 3 |  15
+ 3 | 3 |  16
+ 3 | 4 |  17
+ 4 | 1 |  37
+   |   | 145
+(8 rows)
+
+create temp view tv2(a,b,c,d,e,f,g) as select a[1], a[2], a[3], a[4], a[5], a[6], generate_series(1,3) from (select (array[1,1,1,1,1,1])[1:6-i] || (array[2,2,2,2,2,2])[7-i:6] as a from generate_series(0,6) i) s;
+select a,b, sum(g) from tv2 group by grouping sets ((a,b,c),(a,b));
+ a | b | sum 
+---+---+-----
+ 1 | 1 |  24
+ 1 | 1 |   6
+ 1 | 1 |  30
+ 1 | 2 |   6
+ 1 | 2 |   6
+ 2 | 2 |   6
+ 2 | 2 |   6
+(7 rows)
+
+SELECT grouping(onek.four),grouping(tenk1.four) FROM onek,tenk1 GROUP BY ROLLUP(onek.four,tenk1.four);
+ grouping | grouping 
+----------+----------
+        0 |        0
+        0 |        0
+        0 |        0
+        0 |        0
+        0 |        1
+        0 |        0
+        0 |        0
+        0 |        0
+        0 |        0
+        0 |        1
+        0 |        0
+        0 |        0
+        0 |        0
+        0 |        0
+        0 |        1
+        0 |        0
+        0 |        0
+        0 |        0
+        0 |        0
+        0 |        1
+        1 |        1
+(21 rows)
+
+CREATE TEMP TABLE testgs_emptytable(a int,b int,c int);
+SELECT sum(a) FROM testgs_emptytable GROUP BY ROLLUP(a,b);
+ sum 
+-----
+    
+(1 row)
+
+SELECT grouping(four), ten FROM tenk1
+GROUP BY ROLLUP(four, ten) ORDER BY four, ten;
+ grouping | ten 
+----------+-----
+        0 |   0
+        0 |   2
+        0 |   4
+        0 |   6
+        0 |   8
+        0 |    
+        0 |   1
+        0 |   3
+        0 |   5
+        0 |   7
+        0 |   9
+        0 |    
+        0 |   0
+        0 |   2
+        0 |   4
+        0 |   6
+        0 |   8
+        0 |    
+        0 |   1
+        0 |   3
+        0 |   5
+        0 |   7
+        0 |   9
+        0 |    
+        1 |    
+(25 rows)
+
+SELECT grouping(four), ten FROM tenk1
+GROUP BY ROLLUP(four, ten) ORDER BY ten;
+ grouping | ten 
+----------+-----
+        0 |   0
+        0 |   0
+        0 |   1
+        0 |   1
+        0 |   2
+        0 |   2
+        0 |   3
+        0 |   3
+        0 |   4
+        0 |   4
+        0 |   5
+        0 |   5
+        0 |   6
+        0 |   6
+        0 |   7
+        0 |   7
+        0 |   8
+        0 |   8
+        0 |   9
+        0 |   9
+        0 |    
+        0 |    
+        0 |    
+        0 |    
+        1 |    
+(25 rows)
+
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index c0416f4..b15119e 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -83,7 +83,7 @@ test: select_into select_distinct select_distinct_on select_implicit select_havi
 # ----------
 # Another group of parallel tests
 # ----------
-test: privileges security_label collate matview lock replica_identity
+test: privileges security_label collate matview lock replica_identity groupingsets
 
 # ----------
 # Another group of parallel tests
diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule
index 16a1905..5e64468 100644
--- a/src/test/regress/serial_schedule
+++ b/src/test/regress/serial_schedule
@@ -84,6 +84,7 @@ test: union
 test: case
 test: join
 test: aggregates
+test: groupingsets
 test: transactions
 ignore: random
 test: random
diff --git a/src/test/regress/sql/groupingsets.sql b/src/test/regress/sql/groupingsets.sql
new file mode 100644
index 0000000..c659c8a
--- /dev/null
+++ b/src/test/regress/sql/groupingsets.sql
@@ -0,0 +1,49 @@
+select a, b from (values (1,2)) v(a,b) group by rollup (a,b);
+
+select a, sum(b) from (values (1,10),(1,20),(2,40)) v(a,b) group by rollup (a);
+
+select a, b, sum(c) from (values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),(2,3,15),(3,3,16),(3,4,17),(4,1,18),(4,1,19)) v(a,b,c) group by rollup (a,b); 
+
+select (select grouping(a,b) from (values (1)) v2(b)) from (values (1)) v1(a) group by a;
+
+select grouping(p), percentile_disc(p) within group (order by x::float8), array_agg(p)
+from generate_series(1,5) x,
+     (values (0::float8),(0.1),(0.25),(0.4),(0.5),(0.6),(0.75),(0.9),(1)) v(p)
+group by rollup (p) order by p;
+
+select a, array_agg(b) from (values (1,10),(1,20),(2,40)) v(a,b) group by rollup (a);
+
+select grouping(a), array_agg(b) from (values (1,10),(1,20),(2,40)) v(a,b) group by rollup (a);
+
+select a, sum(b) from aggtest v(a,b) group by rollup (a);
+
+select grouping(a), sum(b) from aggtest v(a,b) group by rollup (a);
+
+select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by a,b;
+
+SELECT four, ten, SUM(SUM(four)) OVER (PARTITION BY four), AVG(ten) FROM tenk1
+GROUP BY ROLLUP(four, ten) ORDER BY four, ten;
+
+select a, b from (values (1,2),(2,3)) v(a,b) group by grouping sets((a,b),());
+
+select a, b from (values (1,2),(2,3)) v(a,b) group by rollup((a,b));
+
+select a, b, sum(c) from (values (1,1,10,5),(1,1,11,5),(1,2,12,5),(1,2,13,5),(1,3,14,5),(2,3,15,5),(3,3,16,5),(3,4,17,5),(4,1,18,5),(4,1,19,5)) v(a,b,c,d) group by rollup ((a,b));
+
+create temp view tv2(a,b,c,d,e,f,g) as select a[1], a[2], a[3], a[4], a[5], a[6], generate_series(1,3) from (select (array[1,1,1,1,1,1])[1:6-i] || (array[2,2,2,2,2,2])[7-i:6] as a from generate_series(0,6) i) s;
+
+select a,b, sum(g) from tv2 group by grouping sets ((a,b,c),(a,b));
+
+SELECT grouping(onek.four),grouping(tenk1.four) FROM onek,tenk1 GROUP BY ROLLUP(onek.four,tenk1.four);
+
+CREATE TEMP TABLE testgs_emptytable(a int,b int,c int);
+
+SELECT sum(a) FROM testgs_emptytable GROUP BY ROLLUP(a,b);
+
+SELECT grouping(four), ten FROM tenk1
+GROUP BY ROLLUP(four, ten) ORDER BY four, ten;
+
+SELECT grouping(four), ten FROM tenk1
+GROUP BY ROLLUP(four, ten) ORDER BY ten;
+
+
#2Atri Sharma
atri.jiit@gmail.com
In reply to: Atri Sharma (#1)
1 attachment(s)
Re: WIP Patch for GROUPING SETS phase 1

On Thu, Aug 14, 2014 at 12:07 AM, Atri Sharma <atri.jiit@gmail.com> wrote:

This is phase 1 (of either 2 or 3) of implementation of the standard GROUPING SETS feature, done by Andrew Gierth and myself.

Unlike previous attempts at this feature, we make no attempt to do any serious work in the parser; we perform some minor syntactic simplifications described in the spec, such as removing excess parens, but the original query structure is preserved in views and so on.

So far, we have done most of the actual work in the executor, but further phases will concentrate on the planner. We have not yet tackled the hard problem of generating plans that require multiple passes over the same input data; see below regarding design issues.

What works so far:

- all the standard syntax is accepted (but many combinations are not plannable yet)
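
To illustrate what "the standard syntax" means here: the spec defines ROLLUP as expanding to the prefixes of its column list and CUBE to the full powerset of its columns. A minimal sketch of that expansion (the function names are ours, not the patch's; the patch's `expand_grouping_sets` works on parse nodes, not strings):

```python
from itertools import combinations

def expand_rollup(cols):
    """ROLLUP(a, b, ...) -> all prefixes, from longest down to ()."""
    return [tuple(cols[:i]) for i in range(len(cols), -1, -1)]

def expand_cube(cols):
    """CUBE(a, b, ...) -> all 2^n subsets, larger sets first."""
    return [s for r in range(len(cols), -1, -1)
            for s in combinations(cols, r)]

expand_rollup(["a", "b"])  # [("a", "b"), ("a",), ()]
expand_cube(["a", "b"])    # [("a", "b"), ("a",), ("b",), ()]
```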

- while the spec only allows column references in GROUP BY, we continue to allow arbitrary expressions

- grouping sets which can be computed in a single pass over sorted data (i.e. anything that can be reduced to simple columns plus one ROLLUP clause, regardless of how it was specified in the query), are implemented as part of the existing GroupAggregate executor node
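
A rough sketch of the "reducible to one ROLLUP" condition (our own approximation, not the patch's actual planner test): a collection of grouping sets can be computed in a single pass over sorted input when the sets form a chain under inclusion, i.e. each set is a superset of the next smaller one:

```python
def single_pass_ok(sets):
    """True when the grouping sets form an inclusion chain, so they
    reduce to simple columns plus one ROLLUP over sorted input."""
    ordered = sorted((frozenset(s) for s in sets), key=len, reverse=True)
    return all(a >= b for a, b in zip(ordered, ordered[1:]))

single_pass_ok([("a", "b"), ("a",), ()])          # ROLLUP(a,b): True
single_pass_ok([("a", "b"), ("a",), ("b",), ()])  # CUBE(a,b): False
```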

- all kinds of aggregate functions, including ordered set functions and user-defined aggregates, are supported in conjunction with grouping sets (no API changes, other than one caveat about fn_extra)

- the GROUPING() operation defined in the spec is implemented, including support for multiple args, and supports arbitrary expressions as an extension to the spec
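
For reference, the spec defines GROUPING(e1, ..., en) as an integer bitmask in which the leftmost argument supplies the most significant bit, and a bit is 1 when the corresponding expression is not grouped in the current result row. A minimal sketch of that rule (illustrative only, not patch code):

```python
def grouping_value(args, current_set):
    """Bit i (from the left) is 1 when args[i] is absent from the
    grouping set that produced the current result row."""
    result = 0
    for expr in args:
        result = (result << 1) | (0 if expr in current_set else 1)
    return result

grouping_value(["a"], {"a"})       # 0: 'a' is grouped in this row
grouping_value(["a"], set())       # 1: the rollup total row
grouping_value(["a", "b"], set())  # 3: neither column grouped
```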

Changes/incompatibilities:

- the big compatibility issue: CUBE and ROLLUP are now partially reserved (col_name_keyword), which breaks contrib/cube. A separate patch for contrib/ is attached that renames the cube type to "cube"; a new name really needs to be chosen.

- GROUPING is now a fully reserved word, and SETS is an unreserved keyword

- GROUP BY (a,b) now means GROUP BY a,b (as required by spec). GROUP BY ROW(a,b) still has the old meaning.

- GROUP BY () is now supported too.

- fn_extra for aggregate calls is per-call-site and NOT per-transition-value - the same fn_extra will be used for interleaved calls to the transition function with different transition values. fn_extra, if used at all, should be used only for per-call-site info such as data types, as clarified in the 9.4beta changes to the ordered set function implementation.

Future work:

We envisage that handling of arbitrary grouping sets will be best done by having the planner generate an Append of multiple aggregation paths, presumably with some way of moving the original input path to a CTE. We have not really explored yet how hard this will be; suggestions are welcome.

In the executor, it is obviously possible to extend HashAggregate to handle arbitrary collections of grouping sets, but even if the memory usage issue were solved, this would leave the question of what to do with non-hashable data types, so it seems that the planner work probably can't be avoided.

A new name needs to be found for the "cube" data type.

At this point we are more interested in design review rather than necessarily committing this patch in its current state. However, committing it may make future work easier; we leave that question open.

Sorry, I forgot to attach the patch fixing contrib/cube, which breaks now that
we reserve the "cube" keyword. Please find it attached.

Regards,

Atri

Attachments:

cube_contribfixes.patch (text/x-diff; charset=US-ASCII)
diff --git a/contrib/cube/cube--1.0.sql b/contrib/cube/cube--1.0.sql
index 0307811..1b563cc 100644
--- a/contrib/cube/cube--1.0.sql
+++ b/contrib/cube/cube--1.0.sql
@@ -1,36 +1,36 @@
 /* contrib/cube/cube--1.0.sql */
 
 -- complain if script is sourced in psql, rather than via CREATE EXTENSION
-\echo Use "CREATE EXTENSION cube" to load this file. \quit
+\echo Use 'CREATE EXTENSION "cube"' to load this file. \quit
 
 -- Create the user-defined type for N-dimensional boxes
 
 CREATE FUNCTION cube_in(cstring)
-RETURNS cube
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(float8[], float8[]) RETURNS cube
+CREATE FUNCTION "cube"(float8[], float8[]) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_a_f8_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(float8[]) RETURNS cube
+CREATE FUNCTION "cube"(float8[]) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_a_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_out(cube)
+CREATE FUNCTION cube_out("cube")
 RETURNS cstring
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE TYPE cube (
+CREATE TYPE "cube" (
 	INTERNALLENGTH = variable,
 	INPUT = cube_in,
 	OUTPUT = cube_out,
 	ALIGNMENT = double
 );
 
-COMMENT ON TYPE cube IS 'multi-dimensional cube ''(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)''';
+COMMENT ON TYPE "cube" IS 'multi-dimensional cube ''(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)''';
 
 --
 -- External C-functions for R-tree methods
@@ -38,89 +38,89 @@ COMMENT ON TYPE cube IS 'multi-dimensional cube ''(FLOAT-1, FLOAT-2, ..., FLOAT-
 
 -- Comparison methods
 
-CREATE FUNCTION cube_eq(cube, cube)
+CREATE FUNCTION cube_eq("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_eq(cube, cube) IS 'same as';
+COMMENT ON FUNCTION cube_eq("cube", "cube") IS 'same as';
 
-CREATE FUNCTION cube_ne(cube, cube)
+CREATE FUNCTION cube_ne("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_ne(cube, cube) IS 'different';
+COMMENT ON FUNCTION cube_ne("cube", "cube") IS 'different';
 
-CREATE FUNCTION cube_lt(cube, cube)
+CREATE FUNCTION cube_lt("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_lt(cube, cube) IS 'lower than';
+COMMENT ON FUNCTION cube_lt("cube", "cube") IS 'lower than';
 
-CREATE FUNCTION cube_gt(cube, cube)
+CREATE FUNCTION cube_gt("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_gt(cube, cube) IS 'greater than';
+COMMENT ON FUNCTION cube_gt("cube", "cube") IS 'greater than';
 
-CREATE FUNCTION cube_le(cube, cube)
+CREATE FUNCTION cube_le("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_le(cube, cube) IS 'lower than or equal to';
+COMMENT ON FUNCTION cube_le("cube", "cube") IS 'lower than or equal to';
 
-CREATE FUNCTION cube_ge(cube, cube)
+CREATE FUNCTION cube_ge("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_ge(cube, cube) IS 'greater than or equal to';
+COMMENT ON FUNCTION cube_ge("cube", "cube") IS 'greater than or equal to';
 
-CREATE FUNCTION cube_cmp(cube, cube)
+CREATE FUNCTION cube_cmp("cube", "cube")
 RETURNS int4
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_cmp(cube, cube) IS 'btree comparison function';
+COMMENT ON FUNCTION cube_cmp("cube", "cube") IS 'btree comparison function';
 
-CREATE FUNCTION cube_contains(cube, cube)
+CREATE FUNCTION cube_contains("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_contains(cube, cube) IS 'contains';
+COMMENT ON FUNCTION cube_contains("cube", "cube") IS 'contains';
 
-CREATE FUNCTION cube_contained(cube, cube)
+CREATE FUNCTION cube_contained("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_contained(cube, cube) IS 'contained in';
+COMMENT ON FUNCTION cube_contained("cube", "cube") IS 'contained in';
 
-CREATE FUNCTION cube_overlap(cube, cube)
+CREATE FUNCTION cube_overlap("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_overlap(cube, cube) IS 'overlaps';
+COMMENT ON FUNCTION cube_overlap("cube", "cube") IS 'overlaps';
 
 -- support routines for indexing
 
-CREATE FUNCTION cube_union(cube, cube)
-RETURNS cube
+CREATE FUNCTION cube_union("cube", "cube")
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_inter(cube, cube)
-RETURNS cube
+CREATE FUNCTION cube_inter("cube", "cube")
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_size(cube)
+CREATE FUNCTION cube_size("cube")
 RETURNS float8
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
@@ -128,62 +128,62 @@ LANGUAGE C IMMUTABLE STRICT;
 
 -- Misc N-dimensional functions
 
-CREATE FUNCTION cube_subset(cube, int4[])
-RETURNS cube
+CREATE FUNCTION cube_subset("cube", int4[])
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
 -- proximity routines
 
-CREATE FUNCTION cube_distance(cube, cube)
+CREATE FUNCTION cube_distance("cube", "cube")
 RETURNS float8
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
 -- Extracting elements functions
 
-CREATE FUNCTION cube_dim(cube)
+CREATE FUNCTION cube_dim("cube")
 RETURNS int4
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_ll_coord(cube, int4)
+CREATE FUNCTION cube_ll_coord("cube", int4)
 RETURNS float8
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_ur_coord(cube, int4)
+CREATE FUNCTION cube_ur_coord("cube", int4)
 RETURNS float8
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(float8) RETURNS cube
+CREATE FUNCTION "cube"(float8) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(float8, float8) RETURNS cube
+CREATE FUNCTION "cube"(float8, float8) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_f8_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(cube, float8) RETURNS cube
+CREATE FUNCTION "cube"("cube", float8) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_c_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(cube, float8, float8) RETURNS cube
+CREATE FUNCTION "cube"("cube", float8, float8) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_c_f8_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
 -- Test if cube is also a point
 
-CREATE FUNCTION cube_is_point(cube)
+CREATE FUNCTION cube_is_point("cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
 -- Increasing the size of a cube by a radius in at least n dimensions
 
-CREATE FUNCTION cube_enlarge(cube, float8, int4)
-RETURNS cube
+CREATE FUNCTION cube_enlarge("cube", float8, int4)
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
@@ -192,76 +192,76 @@ LANGUAGE C IMMUTABLE STRICT;
 --
 
 CREATE OPERATOR < (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_lt,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_lt,
 	COMMUTATOR = '>', NEGATOR = '>=',
 	RESTRICT = scalarltsel, JOIN = scalarltjoinsel
 );
 
 CREATE OPERATOR > (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_gt,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_gt,
 	COMMUTATOR = '<', NEGATOR = '<=',
 	RESTRICT = scalargtsel, JOIN = scalargtjoinsel
 );
 
 CREATE OPERATOR <= (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_le,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_le,
 	COMMUTATOR = '>=', NEGATOR = '>',
 	RESTRICT = scalarltsel, JOIN = scalarltjoinsel
 );
 
 CREATE OPERATOR >= (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_ge,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_ge,
 	COMMUTATOR = '<=', NEGATOR = '<',
 	RESTRICT = scalargtsel, JOIN = scalargtjoinsel
 );
 
 CREATE OPERATOR && (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_overlap,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_overlap,
 	COMMUTATOR = '&&',
 	RESTRICT = areasel, JOIN = areajoinsel
 );
 
 CREATE OPERATOR = (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_eq,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_eq,
 	COMMUTATOR = '=', NEGATOR = '<>',
 	RESTRICT = eqsel, JOIN = eqjoinsel,
 	MERGES
 );
 
 CREATE OPERATOR <> (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_ne,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_ne,
 	COMMUTATOR = '<>', NEGATOR = '=',
 	RESTRICT = neqsel, JOIN = neqjoinsel
 );
 
 CREATE OPERATOR @> (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_contains,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_contains,
 	COMMUTATOR = '<@',
 	RESTRICT = contsel, JOIN = contjoinsel
 );
 
 CREATE OPERATOR <@ (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_contained,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_contained,
 	COMMUTATOR = '@>',
 	RESTRICT = contsel, JOIN = contjoinsel
 );
 
 -- these are obsolete/deprecated:
 CREATE OPERATOR @ (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_contains,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_contains,
 	COMMUTATOR = '~',
 	RESTRICT = contsel, JOIN = contjoinsel
 );
 
 CREATE OPERATOR ~ (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_contained,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_contained,
 	COMMUTATOR = '@',
 	RESTRICT = contsel, JOIN = contjoinsel
 );
 
 
 -- define the GiST support methods
-CREATE FUNCTION g_cube_consistent(internal,cube,int,oid,internal)
+CREATE FUNCTION g_cube_consistent(internal,"cube",int,oid,internal)
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
@@ -287,11 +287,11 @@ AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
 CREATE FUNCTION g_cube_union(internal, internal)
-RETURNS cube
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION g_cube_same(cube, cube, internal)
+CREATE FUNCTION g_cube_same("cube", "cube", internal)
 RETURNS internal
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
@@ -300,26 +300,26 @@ LANGUAGE C IMMUTABLE STRICT;
 -- Create the operator classes for indexing
 
 CREATE OPERATOR CLASS cube_ops
-    DEFAULT FOR TYPE cube USING btree AS
+    DEFAULT FOR TYPE "cube" USING btree AS
         OPERATOR        1       < ,
         OPERATOR        2       <= ,
         OPERATOR        3       = ,
         OPERATOR        4       >= ,
         OPERATOR        5       > ,
-        FUNCTION        1       cube_cmp(cube, cube);
+        FUNCTION        1       cube_cmp("cube", "cube");
 
 CREATE OPERATOR CLASS gist_cube_ops
-    DEFAULT FOR TYPE cube USING gist AS
+    DEFAULT FOR TYPE "cube" USING gist AS
 	OPERATOR	3	&& ,
 	OPERATOR	6	= ,
 	OPERATOR	7	@> ,
 	OPERATOR	8	<@ ,
 	OPERATOR	13	@ ,
 	OPERATOR	14	~ ,
-	FUNCTION	1	g_cube_consistent (internal, cube, int, oid, internal),
+	FUNCTION	1	g_cube_consistent (internal, "cube", int, oid, internal),
 	FUNCTION	2	g_cube_union (internal, internal),
 	FUNCTION	3	g_cube_compress (internal),
 	FUNCTION	4	g_cube_decompress (internal),
 	FUNCTION	5	g_cube_penalty (internal, internal, internal),
 	FUNCTION	6	g_cube_picksplit (internal, internal),
-	FUNCTION	7	g_cube_same (cube, cube, internal);
+	FUNCTION	7	g_cube_same ("cube", "cube", internal);
diff --git a/contrib/cube/cube--unpackaged--1.0.sql b/contrib/cube/cube--unpackaged--1.0.sql
index 6859682..eacffce 100644
--- a/contrib/cube/cube--unpackaged--1.0.sql
+++ b/contrib/cube/cube--unpackaged--1.0.sql
@@ -1,56 +1,56 @@
 /* contrib/cube/cube--unpackaged--1.0.sql */
 
 -- complain if script is sourced in psql, rather than via CREATE EXTENSION
-\echo Use "CREATE EXTENSION cube" to load this file. \quit
+\echo Use 'CREATE EXTENSION "cube"' to load this file. \quit
 
-ALTER EXTENSION cube ADD type cube;
-ALTER EXTENSION cube ADD function cube_in(cstring);
-ALTER EXTENSION cube ADD function cube(double precision[],double precision[]);
-ALTER EXTENSION cube ADD function cube(double precision[]);
-ALTER EXTENSION cube ADD function cube_out(cube);
-ALTER EXTENSION cube ADD function cube_eq(cube,cube);
-ALTER EXTENSION cube ADD function cube_ne(cube,cube);
-ALTER EXTENSION cube ADD function cube_lt(cube,cube);
-ALTER EXTENSION cube ADD function cube_gt(cube,cube);
-ALTER EXTENSION cube ADD function cube_le(cube,cube);
-ALTER EXTENSION cube ADD function cube_ge(cube,cube);
-ALTER EXTENSION cube ADD function cube_cmp(cube,cube);
-ALTER EXTENSION cube ADD function cube_contains(cube,cube);
-ALTER EXTENSION cube ADD function cube_contained(cube,cube);
-ALTER EXTENSION cube ADD function cube_overlap(cube,cube);
-ALTER EXTENSION cube ADD function cube_union(cube,cube);
-ALTER EXTENSION cube ADD function cube_inter(cube,cube);
-ALTER EXTENSION cube ADD function cube_size(cube);
-ALTER EXTENSION cube ADD function cube_subset(cube,integer[]);
-ALTER EXTENSION cube ADD function cube_distance(cube,cube);
-ALTER EXTENSION cube ADD function cube_dim(cube);
-ALTER EXTENSION cube ADD function cube_ll_coord(cube,integer);
-ALTER EXTENSION cube ADD function cube_ur_coord(cube,integer);
-ALTER EXTENSION cube ADD function cube(double precision);
-ALTER EXTENSION cube ADD function cube(double precision,double precision);
-ALTER EXTENSION cube ADD function cube(cube,double precision);
-ALTER EXTENSION cube ADD function cube(cube,double precision,double precision);
-ALTER EXTENSION cube ADD function cube_is_point(cube);
-ALTER EXTENSION cube ADD function cube_enlarge(cube,double precision,integer);
-ALTER EXTENSION cube ADD operator >(cube,cube);
-ALTER EXTENSION cube ADD operator >=(cube,cube);
-ALTER EXTENSION cube ADD operator <(cube,cube);
-ALTER EXTENSION cube ADD operator <=(cube,cube);
-ALTER EXTENSION cube ADD operator &&(cube,cube);
-ALTER EXTENSION cube ADD operator <>(cube,cube);
-ALTER EXTENSION cube ADD operator =(cube,cube);
-ALTER EXTENSION cube ADD operator <@(cube,cube);
-ALTER EXTENSION cube ADD operator @>(cube,cube);
-ALTER EXTENSION cube ADD operator ~(cube,cube);
-ALTER EXTENSION cube ADD operator @(cube,cube);
-ALTER EXTENSION cube ADD function g_cube_consistent(internal,cube,integer,oid,internal);
-ALTER EXTENSION cube ADD function g_cube_compress(internal);
-ALTER EXTENSION cube ADD function g_cube_decompress(internal);
-ALTER EXTENSION cube ADD function g_cube_penalty(internal,internal,internal);
-ALTER EXTENSION cube ADD function g_cube_picksplit(internal,internal);
-ALTER EXTENSION cube ADD function g_cube_union(internal,internal);
-ALTER EXTENSION cube ADD function g_cube_same(cube,cube,internal);
-ALTER EXTENSION cube ADD operator family cube_ops using btree;
-ALTER EXTENSION cube ADD operator class cube_ops using btree;
-ALTER EXTENSION cube ADD operator family gist_cube_ops using gist;
-ALTER EXTENSION cube ADD operator class gist_cube_ops using gist;
+ALTER EXTENSION "cube" ADD type "cube";
+ALTER EXTENSION "cube" ADD function cube_in(cstring);
+ALTER EXTENSION "cube" ADD function "cube"(double precision[],double precision[]);
+ALTER EXTENSION "cube" ADD function "cube"(double precision[]);
+ALTER EXTENSION "cube" ADD function cube_out("cube");
+ALTER EXTENSION "cube" ADD function cube_eq("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_ne("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_lt("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_gt("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_le("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_ge("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_cmp("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_contains("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_contained("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_overlap("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_union("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_inter("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_size("cube");
+ALTER EXTENSION "cube" ADD function cube_subset("cube",integer[]);
+ALTER EXTENSION "cube" ADD function cube_distance("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_dim("cube");
+ALTER EXTENSION "cube" ADD function cube_ll_coord("cube",integer);
+ALTER EXTENSION "cube" ADD function cube_ur_coord("cube",integer);
+ALTER EXTENSION "cube" ADD function "cube"(double precision);
+ALTER EXTENSION "cube" ADD function "cube"(double precision,double precision);
+ALTER EXTENSION "cube" ADD function "cube"("cube",double precision);
+ALTER EXTENSION "cube" ADD function "cube"("cube",double precision,double precision);
+ALTER EXTENSION "cube" ADD function cube_is_point("cube");
+ALTER EXTENSION "cube" ADD function cube_enlarge("cube",double precision,integer);
+ALTER EXTENSION "cube" ADD operator >("cube","cube");
+ALTER EXTENSION "cube" ADD operator >=("cube","cube");
+ALTER EXTENSION "cube" ADD operator <("cube","cube");
+ALTER EXTENSION "cube" ADD operator <=("cube","cube");
+ALTER EXTENSION "cube" ADD operator &&("cube","cube");
+ALTER EXTENSION "cube" ADD operator <>("cube","cube");
+ALTER EXTENSION "cube" ADD operator =("cube","cube");
+ALTER EXTENSION "cube" ADD operator <@("cube","cube");
+ALTER EXTENSION "cube" ADD operator @>("cube","cube");
+ALTER EXTENSION "cube" ADD operator ~("cube","cube");
+ALTER EXTENSION "cube" ADD operator @("cube","cube");
+ALTER EXTENSION "cube" ADD function g_cube_consistent(internal,"cube",integer,oid,internal);
+ALTER EXTENSION "cube" ADD function g_cube_compress(internal);
+ALTER EXTENSION "cube" ADD function g_cube_decompress(internal);
+ALTER EXTENSION "cube" ADD function g_cube_penalty(internal,internal,internal);
+ALTER EXTENSION "cube" ADD function g_cube_picksplit(internal,internal);
+ALTER EXTENSION "cube" ADD function g_cube_union(internal,internal);
+ALTER EXTENSION "cube" ADD function g_cube_same("cube","cube",internal);
+ALTER EXTENSION "cube" ADD operator family cube_ops using btree;
+ALTER EXTENSION "cube" ADD operator class cube_ops using btree;
+ALTER EXTENSION "cube" ADD operator family gist_cube_ops using gist;
+ALTER EXTENSION "cube" ADD operator class gist_cube_ops using gist;
diff --git a/contrib/cube/expected/cube.out b/contrib/cube/expected/cube.out
index ca9555e..9422218 100644
--- a/contrib/cube/expected/cube.out
+++ b/contrib/cube/expected/cube.out
@@ -1,552 +1,552 @@
 --
 --  Test cube datatype
 --
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 --
 -- testing the input and output functions
 --
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1'::cube AS cube;
+SELECT '-1'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1.'::cube AS cube;
+SELECT '1.'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.'::cube AS cube;
+SELECT '-1.'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '.1'::cube AS cube;
+SELECT '.1'::"cube" AS "cube";
  cube  
 -------
  (0.1)
 (1 row)
 
-SELECT '-.1'::cube AS cube;
+SELECT '-.1'::"cube" AS "cube";
   cube  
 --------
  (-0.1)
 (1 row)
 
-SELECT '1.0'::cube AS cube;
+SELECT '1.0'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.0'::cube AS cube;
+SELECT '-1.0'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1e27'::cube AS cube;
+SELECT '1e27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1e27'::cube AS cube;
+SELECT '-1e27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1.0e27'::cube AS cube;
+SELECT '1.0e27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1.0e27'::cube AS cube;
+SELECT '-1.0e27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1e+27'::cube AS cube;
+SELECT '1e+27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1e+27'::cube AS cube;
+SELECT '-1e+27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1.0e+27'::cube AS cube;
+SELECT '1.0e+27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1.0e+27'::cube AS cube;
+SELECT '-1.0e+27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1e-7'::cube AS cube;
+SELECT '1e-7'::"cube" AS "cube";
   cube   
 ---------
  (1e-07)
 (1 row)
 
-SELECT '-1e-7'::cube AS cube;
+SELECT '-1e-7'::"cube" AS "cube";
    cube   
 ----------
  (-1e-07)
 (1 row)
 
-SELECT '1.0e-7'::cube AS cube;
+SELECT '1.0e-7'::"cube" AS "cube";
   cube   
 ---------
  (1e-07)
 (1 row)
 
-SELECT '-1.0e-7'::cube AS cube;
+SELECT '-1.0e-7'::"cube" AS "cube";
    cube   
 ----------
  (-1e-07)
 (1 row)
 
-SELECT '1e-700'::cube AS cube;
+SELECT '1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '-1e-700'::cube AS cube;
+SELECT '-1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '1234567890123456'::cube AS cube;
+SELECT '1234567890123456'::"cube" AS "cube";
           cube          
 ------------------------
  (1.23456789012346e+15)
 (1 row)
 
-SELECT '+1234567890123456'::cube AS cube;
+SELECT '+1234567890123456'::"cube" AS "cube";
           cube          
 ------------------------
  (1.23456789012346e+15)
 (1 row)
 
-SELECT '-1234567890123456'::cube AS cube;
+SELECT '-1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (-1.23456789012346e+15)
 (1 row)
 
-SELECT '.1234567890123456'::cube AS cube;
+SELECT '.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '+.1234567890123456'::cube AS cube;
+SELECT '+.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '-.1234567890123456'::"cube" AS "cube";
          cube         
 ----------------------
  (-0.123456789012346)
 (1 row)
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '(1,2)'::cube AS cube;
+SELECT '(1,2)'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '1,2,3,4,5'::cube AS cube;
+SELECT '1,2,3,4,5'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '(0),(1)'::cube AS cube;
+SELECT '(0),(1)'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '[(0),(0)]'::cube AS cube;
+SELECT '[(0),(0)]'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '[(0),(1)]'::cube AS cube;
+SELECT '[(0),(1)]'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
+SELECT ''::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT ''::cube AS cube;
+LINE 1: SELECT ''::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT 'ABC'::cube AS cube;
+SELECT 'ABC'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT 'ABC'::cube AS cube;
+LINE 1: SELECT 'ABC'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "A"
-SELECT '()'::cube AS cube;
+SELECT '()'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '()'::cube AS cube;
+LINE 1: SELECT '()'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[]'::cube AS cube;
+SELECT '[]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[]'::cube AS cube;
+LINE 1: SELECT '[]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[()]'::cube AS cube;
+SELECT '[()]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[()]'::cube AS cube;
+LINE 1: SELECT '[()]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[(1)]'::cube AS cube;
+SELECT '[(1)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1)]'::cube AS cube;
+LINE 1: SELECT '[(1)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),]'::cube AS cube;
+SELECT '[(1),]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),]'::cube AS cube;
+LINE 1: SELECT '[(1),]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),2]'::cube AS cube;
+SELECT '[(1),2]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),2]'::cube AS cube;
+LINE 1: SELECT '[(1),2]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "2"
-SELECT '[(1),(2),(3)]'::cube AS cube;
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2),(3)]'::cube AS cube;
+LINE 1: SELECT '[(1),(2),(3)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '1,'::cube AS cube;
+SELECT '1,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,'::cube AS cube;
+LINE 1: SELECT '1,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,2,'::cube AS cube;
+SELECT '1,2,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2,'::cube AS cube;
+LINE 1: SELECT '1,2,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,,2'::cube AS cube;
+SELECT '1,,2'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,,2'::cube AS cube;
+LINE 1: SELECT '1,,2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,)'::cube AS cube;
+SELECT '(1,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,)'::cube AS cube;
+LINE 1: SELECT '(1,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,2,)'::cube AS cube;
+SELECT '(1,2,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,)'::cube AS cube;
+LINE 1: SELECT '(1,2,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,,2)'::cube AS cube;
+SELECT '(1,,2)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,,2)'::cube AS cube;
+LINE 1: SELECT '(1,,2)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2)],'::cube AS cube;
+LINE 1: SELECT '[(1),(2)],'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2,3),(2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2),(1,2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1),(2),'::cube AS cube; -- 2
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
 ERROR:  bad cube representation
-LINE 1: SELECT '(1),(2),'::cube AS cube;
+LINE 1: SELECT '(1),(2),'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3),(2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2,3),(2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2),(1,2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2),(1,2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)ab'::cube AS cube;
+LINE 1: SELECT '(1,2,3)ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2,3)a'::cube AS cube; -- 5
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)a'::cube AS cube;
+LINE 1: SELECT '(1,2,3)a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2)('::cube AS cube; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2)('::cube AS cube;
+LINE 1: SELECT '(1,2)('::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "("
-SELECT '1,2ab'::cube AS cube; -- 6
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2ab'::cube AS cube;
+LINE 1: SELECT '1,2ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1 e7'::cube AS cube; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1 e7'::cube AS cube;
+LINE 1: SELECT '1 e7'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "e"
-SELECT '1,2a'::cube AS cube; -- 7
+SELECT '1,2a'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2a'::cube AS cube;
+LINE 1: SELECT '1,2a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1..2'::cube AS cube;
+LINE 1: SELECT '1..2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ".2"
 --
 -- Testing building cubes from float8 values
 --
-SELECT cube(0::float8);
+SELECT "cube"(0::float8);
  cube 
 ------
  (0)
 (1 row)
 
-SELECT cube(1::float8);
+SELECT "cube"(1::float8);
  cube 
 ------
  (1)
 (1 row)
 
-SELECT cube(1,2);
+SELECT "cube"(1,2);
   cube   
 ---------
  (1),(2)
 (1 row)
 
-SELECT cube(cube(1,2),3);
+SELECT "cube"("cube"(1,2),3);
      cube      
 ---------------
  (1, 3),(2, 3)
 (1 row)
 
-SELECT cube(cube(1,2),3,4);
+SELECT "cube"("cube"(1,2),3,4);
      cube      
 ---------------
  (1, 3),(2, 4)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 5)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 6)
 (1 row)
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
  cube 
 ------
  (0)
 (1 row)
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
         cube         
 ---------------------
  (0, 1, 2),(3, 4, 5)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
 ERROR:  UR and LL arrays must be of same length
-SELECT cube(NULL::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
  cube 
 ------
  
 (1 row)
 
-SELECT cube('{0,1,2}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
         cube_subset        
 ---------------------------
  (5, 3, 1, 1),(8, 7, 6, 6)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
  cube_subset  
 --------------
  (5, 3, 1, 1)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
      cube     
 --------------
  (5, 6, 7, 8)
 (1 row)
 
-SELECT cube(1.37); -- cube_f8
+SELECT "cube"(1.37); -- cube_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(cube(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(1, 24)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 24)
@@ -555,12 +555,12 @@ SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
 DETAIL:  A cube cannot have more than 100 dimensions.
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
@@ -570,37 +570,37 @@ DETAIL:  A cube cannot have more than 100 dimensions.
 --
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -609,97 +609,97 @@ SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1'::cube   < '2'::cube AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1,1'::cube > '1,2'::cube AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,1'::cube < '1,2'::cube AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -707,235 +707,235 @@ SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
 -- "contained in" (the left operand is the cube entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contains" (the left operand is the cube that entirely encloses the
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
@@ -943,77 +943,77 @@ SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
  cube_distance 
 ---------------
              4
 (1 row)
 
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
  cube_distance 
 ---------------
            0.5
 (1 row)
 
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
  cube_distance 
 ---------------
              0
 (1 row)
 
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
  cube_distance 
 ---------------
            190
 (1 row)
 
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
   cube_distance   
 ------------------
  140.762210837994
 (1 row)
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
+SELECT "cube"('(1,1.2)'::text);
    cube   
 ----------
  (1, 1.2)
 (1 row)
 
-SELECT cube(NULL);
+SELECT "cube"(NULL);
  cube 
 ------
  
 (1 row)
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
+SELECT cube_dim('(0)'::"cube");
  cube_dim 
 ----------
         1
 (1 row)
 
-SELECT cube_dim('(0,0)'::cube);
+SELECT cube_dim('(0,0)'::"cube");
  cube_dim 
 ----------
         2
 (1 row)
 
-SELECT cube_dim('(0,0,0)'::cube);
+SELECT cube_dim('(0,0,0)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
  cube_dim 
 ----------
         5
@@ -1021,55 +1021,55 @@ SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
 
 -- Test of cube_ll_coord function (retrieves LL coodinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ll_coord 
 ---------------
             -1
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ll_coord 
 ---------------
             -2
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
  cube_ll_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
  cube_ll_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
  cube_ll_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
  cube_ll_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
@@ -1077,55 +1077,55 @@ SELECT cube_ll_coord('(42,137)'::cube, 3);
 
 -- Test of cube_ur_coord function (retrieves UR coodinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
  cube_ur_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
  cube_ur_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
@@ -1133,37 +1133,37 @@ SELECT cube_ur_coord('(42,137)'::cube, 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
+SELECT cube_is_point('(0)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
  cube_is_point 
 ---------------
  f
@@ -1171,121 +1171,121 @@ SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 2);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
  cube_enlarge 
 --------------
  (-2),(2)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 2);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-1, -1),(1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
          cube_enlarge          
 -------------------------------
  (-3, -1, -1, -1),(3, 1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 2);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-4, -3),(3, 8)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
    cube_enlarge   
 ------------------
  (-6, -5),(5, 10)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
   cube_enlarge   
 -----------------
  (-2, -1),(1, 6)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
     cube_enlarge     
 ---------------------
  (-0.5, 1),(-0.5, 4)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
@@ -1293,31 +1293,31 @@ SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
       cube_union      
 ----------------------
  (1, 2, 0),(8, 9, 10)
 (1 row)
 
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
         cube_union         
 ---------------------------
  (1, 2, 0, 0),(4, 2, 0, 0)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
   cube_union   
 ---------------
  (1, 2),(4, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
  cube_union 
 ------------
  (1, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
  cube_union 
 ------------
  (1, 2, 0)
@@ -1325,43 +1325,43 @@ SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
    cube_inter    
 -----------------
  (3, 4),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
   cube_inter   
 ---------------
  (3, 4),(6, 5)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
     cube_inter     
 -------------------
  (13, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
     cube_inter    
 ------------------
  (3, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
  cube_inter 
 ------------
  (10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
  cube_inter 
 ------------
  (1, 2, 3)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
      cube_inter      
 ---------------------
  (5, 6, 3),(1, 2, 3)
@@ -1369,13 +1369,13 @@ SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
  cube_size 
 -----------
         88
 (1 row)
 
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(42,137)'::"cube");
  cube_size 
 -----------
          0
@@ -1383,7 +1383,7 @@ SELECT cube_size('(42,137)'::cube);
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 \copy test_cube from 'data/test_cube.data'
 CREATE INDEX test_cube_ix ON test_cube USING gist (c);
 SELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' ORDER BY c;
diff --git a/contrib/cube/expected/cube_1.out b/contrib/cube/expected/cube_1.out
index c07d61d..4f47c54 100644
--- a/contrib/cube/expected/cube_1.out
+++ b/contrib/cube/expected/cube_1.out
@@ -1,552 +1,552 @@
 --
 --  Test cube datatype
 --
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 --
 -- testing the input and output functions
 --
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1'::cube AS cube;
+SELECT '-1'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1.'::cube AS cube;
+SELECT '1.'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.'::cube AS cube;
+SELECT '-1.'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '.1'::cube AS cube;
+SELECT '.1'::"cube" AS "cube";
  cube  
 -------
  (0.1)
 (1 row)
 
-SELECT '-.1'::cube AS cube;
+SELECT '-.1'::"cube" AS "cube";
   cube  
 --------
  (-0.1)
 (1 row)
 
-SELECT '1.0'::cube AS cube;
+SELECT '1.0'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.0'::cube AS cube;
+SELECT '-1.0'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1e27'::cube AS cube;
+SELECT '1e27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1e27'::cube AS cube;
+SELECT '-1e27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1.0e27'::cube AS cube;
+SELECT '1.0e27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1.0e27'::cube AS cube;
+SELECT '-1.0e27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1e+27'::cube AS cube;
+SELECT '1e+27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1e+27'::cube AS cube;
+SELECT '-1e+27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1.0e+27'::cube AS cube;
+SELECT '1.0e+27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1.0e+27'::cube AS cube;
+SELECT '-1.0e+27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1e-7'::cube AS cube;
+SELECT '1e-7'::"cube" AS "cube";
   cube   
 ---------
  (1e-07)
 (1 row)
 
-SELECT '-1e-7'::cube AS cube;
+SELECT '-1e-7'::"cube" AS "cube";
    cube   
 ----------
  (-1e-07)
 (1 row)
 
-SELECT '1.0e-7'::cube AS cube;
+SELECT '1.0e-7'::"cube" AS "cube";
   cube   
 ---------
  (1e-07)
 (1 row)
 
-SELECT '-1.0e-7'::cube AS cube;
+SELECT '-1.0e-7'::"cube" AS "cube";
    cube   
 ----------
  (-1e-07)
 (1 row)
 
-SELECT '1e-700'::cube AS cube;
+SELECT '1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '-1e-700'::cube AS cube;
+SELECT '-1e-700'::"cube" AS "cube";
  cube 
 ------
  (-0)
 (1 row)
 
-SELECT '1234567890123456'::cube AS cube;
+SELECT '1234567890123456'::"cube" AS "cube";
           cube          
 ------------------------
  (1.23456789012346e+15)
 (1 row)
 
-SELECT '+1234567890123456'::cube AS cube;
+SELECT '+1234567890123456'::"cube" AS "cube";
           cube          
 ------------------------
  (1.23456789012346e+15)
 (1 row)
 
-SELECT '-1234567890123456'::cube AS cube;
+SELECT '-1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (-1.23456789012346e+15)
 (1 row)
 
-SELECT '.1234567890123456'::cube AS cube;
+SELECT '.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '+.1234567890123456'::cube AS cube;
+SELECT '+.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '-.1234567890123456'::"cube" AS "cube";
          cube         
 ----------------------
  (-0.123456789012346)
 (1 row)
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '(1,2)'::cube AS cube;
+SELECT '(1,2)'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '1,2,3,4,5'::cube AS cube;
+SELECT '1,2,3,4,5'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '(0),(1)'::cube AS cube;
+SELECT '(0),(1)'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '[(0),(0)]'::cube AS cube;
+SELECT '[(0),(0)]'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '[(0),(1)]'::cube AS cube;
+SELECT '[(0),(1)]'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
+SELECT ''::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT ''::cube AS cube;
+LINE 1: SELECT ''::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT 'ABC'::cube AS cube;
+SELECT 'ABC'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT 'ABC'::cube AS cube;
+LINE 1: SELECT 'ABC'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "A"
-SELECT '()'::cube AS cube;
+SELECT '()'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '()'::cube AS cube;
+LINE 1: SELECT '()'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[]'::cube AS cube;
+SELECT '[]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[]'::cube AS cube;
+LINE 1: SELECT '[]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[()]'::cube AS cube;
+SELECT '[()]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[()]'::cube AS cube;
+LINE 1: SELECT '[()]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[(1)]'::cube AS cube;
+SELECT '[(1)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1)]'::cube AS cube;
+LINE 1: SELECT '[(1)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),]'::cube AS cube;
+SELECT '[(1),]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),]'::cube AS cube;
+LINE 1: SELECT '[(1),]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),2]'::cube AS cube;
+SELECT '[(1),2]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),2]'::cube AS cube;
+LINE 1: SELECT '[(1),2]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "2"
-SELECT '[(1),(2),(3)]'::cube AS cube;
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2),(3)]'::cube AS cube;
+LINE 1: SELECT '[(1),(2),(3)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '1,'::cube AS cube;
+SELECT '1,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,'::cube AS cube;
+LINE 1: SELECT '1,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,2,'::cube AS cube;
+SELECT '1,2,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2,'::cube AS cube;
+LINE 1: SELECT '1,2,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,,2'::cube AS cube;
+SELECT '1,,2'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,,2'::cube AS cube;
+LINE 1: SELECT '1,,2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,)'::cube AS cube;
+SELECT '(1,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,)'::cube AS cube;
+LINE 1: SELECT '(1,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,2,)'::cube AS cube;
+SELECT '(1,2,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,)'::cube AS cube;
+LINE 1: SELECT '(1,2,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,,2)'::cube AS cube;
+SELECT '(1,,2)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,,2)'::cube AS cube;
+LINE 1: SELECT '(1,,2)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2)],'::cube AS cube;
+LINE 1: SELECT '[(1),(2)],'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2,3),(2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2),(1,2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1),(2),'::cube AS cube; -- 2
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
 ERROR:  bad cube representation
-LINE 1: SELECT '(1),(2),'::cube AS cube;
+LINE 1: SELECT '(1),(2),'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3),(2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2,3),(2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2),(1,2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2),(1,2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)ab'::cube AS cube;
+LINE 1: SELECT '(1,2,3)ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2,3)a'::cube AS cube; -- 5
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)a'::cube AS cube;
+LINE 1: SELECT '(1,2,3)a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2)('::cube AS cube; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2)('::cube AS cube;
+LINE 1: SELECT '(1,2)('::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "("
-SELECT '1,2ab'::cube AS cube; -- 6
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2ab'::cube AS cube;
+LINE 1: SELECT '1,2ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1 e7'::cube AS cube; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1 e7'::cube AS cube;
+LINE 1: SELECT '1 e7'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "e"
-SELECT '1,2a'::cube AS cube; -- 7
+SELECT '1,2a'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2a'::cube AS cube;
+LINE 1: SELECT '1,2a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1..2'::cube AS cube;
+LINE 1: SELECT '1..2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ".2"
 --
 -- Testing building cubes from float8 values
 --
-SELECT cube(0::float8);
+SELECT "cube"(0::float8);
  cube 
 ------
  (0)
 (1 row)
 
-SELECT cube(1::float8);
+SELECT "cube"(1::float8);
  cube 
 ------
  (1)
 (1 row)
 
-SELECT cube(1,2);
+SELECT "cube"(1,2);
   cube   
 ---------
  (1),(2)
 (1 row)
 
-SELECT cube(cube(1,2),3);
+SELECT "cube"("cube"(1,2),3);
      cube      
 ---------------
  (1, 3),(2, 3)
 (1 row)
 
-SELECT cube(cube(1,2),3,4);
+SELECT "cube"("cube"(1,2),3,4);
      cube      
 ---------------
  (1, 3),(2, 4)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 5)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 6)
 (1 row)
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
  cube 
 ------
  (0)
 (1 row)
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
         cube         
 ---------------------
  (0, 1, 2),(3, 4, 5)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
 ERROR:  UR and LL arrays must be of same length
-SELECT cube(NULL::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
  cube 
 ------
  
 (1 row)
 
-SELECT cube('{0,1,2}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
         cube_subset        
 ---------------------------
  (5, 3, 1, 1),(8, 7, 6, 6)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
  cube_subset  
 --------------
  (5, 3, 1, 1)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
      cube     
 --------------
  (5, 6, 7, 8)
 (1 row)
 
-SELECT cube(1.37); -- cube_f8
+SELECT "cube"(1.37); -- cube_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(cube(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(1, 24)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 24)
@@ -555,12 +555,12 @@ SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
 DETAIL:  A cube cannot have more than 100 dimensions.
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
@@ -570,37 +570,37 @@ DETAIL:  A cube cannot have more than 100 dimensions.
 --
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -609,97 +609,97 @@ SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1'::cube   < '2'::cube AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1,1'::cube > '1,2'::cube AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,1'::cube < '1,2'::cube AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -707,235 +707,235 @@ SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contains" (the left operand is the cube that entirely encloses the
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
@@ -943,77 +943,77 @@ SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
  cube_distance 
 ---------------
              4
 (1 row)
 
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
  cube_distance 
 ---------------
            0.5
 (1 row)
 
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
  cube_distance 
 ---------------
              0
 (1 row)
 
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
  cube_distance 
 ---------------
            190
 (1 row)
 
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
   cube_distance   
 ------------------
  140.762210837994
 (1 row)
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
+SELECT "cube"('(1,1.2)'::text);
    cube   
 ----------
  (1, 1.2)
 (1 row)
 
-SELECT cube(NULL);
+SELECT "cube"(NULL);
  cube 
 ------
  
 (1 row)
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
+SELECT cube_dim('(0)'::"cube");
  cube_dim 
 ----------
         1
 (1 row)
 
-SELECT cube_dim('(0,0)'::cube);
+SELECT cube_dim('(0,0)'::"cube");
  cube_dim 
 ----------
         2
 (1 row)
 
-SELECT cube_dim('(0,0,0)'::cube);
+SELECT cube_dim('(0,0,0)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
  cube_dim 
 ----------
         5
@@ -1021,55 +1021,55 @@ SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
 
 -- Test of cube_ll_coord function (retrieves LL coodinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ll_coord 
 ---------------
             -1
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ll_coord 
 ---------------
             -2
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
  cube_ll_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
  cube_ll_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
  cube_ll_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
  cube_ll_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
@@ -1077,55 +1077,55 @@ SELECT cube_ll_coord('(42,137)'::cube, 3);
 
 -- Test of cube_ur_coord function (retrieves UR coodinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
  cube_ur_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
  cube_ur_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
@@ -1133,37 +1133,37 @@ SELECT cube_ur_coord('(42,137)'::cube, 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
+SELECT cube_is_point('(0)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
  cube_is_point 
 ---------------
  f
@@ -1171,121 +1171,121 @@ SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 2);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
  cube_enlarge 
 --------------
  (-2),(2)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 2);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-1, -1),(1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
          cube_enlarge          
 -------------------------------
  (-3, -1, -1, -1),(3, 1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 2);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-4, -3),(3, 8)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
    cube_enlarge   
 ------------------
  (-6, -5),(5, 10)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
   cube_enlarge   
 -----------------
  (-2, -1),(1, 6)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
     cube_enlarge     
 ---------------------
  (-0.5, 1),(-0.5, 4)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
@@ -1293,31 +1293,31 @@ SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
       cube_union      
 ----------------------
  (1, 2, 0),(8, 9, 10)
 (1 row)
 
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
         cube_union         
 ---------------------------
  (1, 2, 0, 0),(4, 2, 0, 0)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
   cube_union   
 ---------------
  (1, 2),(4, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
  cube_union 
 ------------
  (1, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
  cube_union 
 ------------
  (1, 2, 0)
@@ -1325,43 +1325,43 @@ SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
    cube_inter    
 -----------------
  (3, 4),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
   cube_inter   
 ---------------
  (3, 4),(6, 5)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
     cube_inter     
 -------------------
  (13, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
     cube_inter    
 ------------------
  (3, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
  cube_inter 
 ------------
  (10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
  cube_inter 
 ------------
  (1, 2, 3)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
      cube_inter      
 ---------------------
  (5, 6, 3),(1, 2, 3)
@@ -1369,13 +1369,13 @@ SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
  cube_size 
 -----------
         88
 (1 row)
 
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(42,137)'::"cube");
  cube_size 
 -----------
          0
@@ -1383,7 +1383,7 @@ SELECT cube_size('(42,137)'::cube);
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 \copy test_cube from 'data/test_cube.data'
 CREATE INDEX test_cube_ix ON test_cube USING gist (c);
 SELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' ORDER BY c;
diff --git a/contrib/cube/expected/cube_2.out b/contrib/cube/expected/cube_2.out
index 3767d0e..747e9ba 100644
--- a/contrib/cube/expected/cube_2.out
+++ b/contrib/cube/expected/cube_2.out
@@ -1,552 +1,552 @@
 --
 --  Test cube datatype
 --
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 --
 -- testing the input and output functions
 --
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1'::cube AS cube;
+SELECT '-1'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1.'::cube AS cube;
+SELECT '1.'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.'::cube AS cube;
+SELECT '-1.'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '.1'::cube AS cube;
+SELECT '.1'::"cube" AS "cube";
  cube  
 -------
  (0.1)
 (1 row)
 
-SELECT '-.1'::cube AS cube;
+SELECT '-.1'::"cube" AS "cube";
   cube  
 --------
  (-0.1)
 (1 row)
 
-SELECT '1.0'::cube AS cube;
+SELECT '1.0'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.0'::cube AS cube;
+SELECT '-1.0'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1e27'::cube AS cube;
+SELECT '1e27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1e27'::cube AS cube;
+SELECT '-1e27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1.0e27'::cube AS cube;
+SELECT '1.0e27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1.0e27'::cube AS cube;
+SELECT '-1.0e27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1e+27'::cube AS cube;
+SELECT '1e+27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1e+27'::cube AS cube;
+SELECT '-1e+27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1.0e+27'::cube AS cube;
+SELECT '1.0e+27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1.0e+27'::cube AS cube;
+SELECT '-1.0e+27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1e-7'::cube AS cube;
+SELECT '1e-7'::"cube" AS "cube";
    cube   
 ----------
  (1e-007)
 (1 row)
 
-SELECT '-1e-7'::cube AS cube;
+SELECT '-1e-7'::"cube" AS "cube";
    cube    
 -----------
  (-1e-007)
 (1 row)
 
-SELECT '1.0e-7'::cube AS cube;
+SELECT '1.0e-7'::"cube" AS "cube";
    cube   
 ----------
  (1e-007)
 (1 row)
 
-SELECT '-1.0e-7'::cube AS cube;
+SELECT '-1.0e-7'::"cube" AS "cube";
    cube    
 -----------
  (-1e-007)
 (1 row)
 
-SELECT '1e-700'::cube AS cube;
+SELECT '1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '-1e-700'::cube AS cube;
+SELECT '-1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '1234567890123456'::cube AS cube;
+SELECT '1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (1.23456789012346e+015)
 (1 row)
 
-SELECT '+1234567890123456'::cube AS cube;
+SELECT '+1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (1.23456789012346e+015)
 (1 row)
 
-SELECT '-1234567890123456'::cube AS cube;
+SELECT '-1234567890123456'::"cube" AS "cube";
            cube           
 --------------------------
  (-1.23456789012346e+015)
 (1 row)
 
-SELECT '.1234567890123456'::cube AS cube;
+SELECT '.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '+.1234567890123456'::cube AS cube;
+SELECT '+.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '-.1234567890123456'::"cube" AS "cube";
          cube         
 ----------------------
  (-0.123456789012346)
 (1 row)
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '(1,2)'::cube AS cube;
+SELECT '(1,2)'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '1,2,3,4,5'::cube AS cube;
+SELECT '1,2,3,4,5'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '(0),(1)'::cube AS cube;
+SELECT '(0),(1)'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '[(0),(0)]'::cube AS cube;
+SELECT '[(0),(0)]'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '[(0),(1)]'::cube AS cube;
+SELECT '[(0),(1)]'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
+SELECT ''::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT ''::cube AS cube;
+LINE 1: SELECT ''::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT 'ABC'::cube AS cube;
+SELECT 'ABC'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT 'ABC'::cube AS cube;
+LINE 1: SELECT 'ABC'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "A"
-SELECT '()'::cube AS cube;
+SELECT '()'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '()'::cube AS cube;
+LINE 1: SELECT '()'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[]'::cube AS cube;
+SELECT '[]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[]'::cube AS cube;
+LINE 1: SELECT '[]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[()]'::cube AS cube;
+SELECT '[()]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[()]'::cube AS cube;
+LINE 1: SELECT '[()]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[(1)]'::cube AS cube;
+SELECT '[(1)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1)]'::cube AS cube;
+LINE 1: SELECT '[(1)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),]'::cube AS cube;
+SELECT '[(1),]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),]'::cube AS cube;
+LINE 1: SELECT '[(1),]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),2]'::cube AS cube;
+SELECT '[(1),2]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),2]'::cube AS cube;
+LINE 1: SELECT '[(1),2]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "2"
-SELECT '[(1),(2),(3)]'::cube AS cube;
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2),(3)]'::cube AS cube;
+LINE 1: SELECT '[(1),(2),(3)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '1,'::cube AS cube;
+SELECT '1,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,'::cube AS cube;
+LINE 1: SELECT '1,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,2,'::cube AS cube;
+SELECT '1,2,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2,'::cube AS cube;
+LINE 1: SELECT '1,2,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,,2'::cube AS cube;
+SELECT '1,,2'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,,2'::cube AS cube;
+LINE 1: SELECT '1,,2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,)'::cube AS cube;
+SELECT '(1,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,)'::cube AS cube;
+LINE 1: SELECT '(1,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,2,)'::cube AS cube;
+SELECT '(1,2,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,)'::cube AS cube;
+LINE 1: SELECT '(1,2,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,,2)'::cube AS cube;
+SELECT '(1,,2)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,,2)'::cube AS cube;
+LINE 1: SELECT '(1,,2)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2)],'::cube AS cube;
+LINE 1: SELECT '[(1),(2)],'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2,3),(2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2),(1,2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1),(2),'::cube AS cube; -- 2
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
 ERROR:  bad cube representation
-LINE 1: SELECT '(1),(2),'::cube AS cube;
+LINE 1: SELECT '(1),(2),'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3),(2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2,3),(2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2),(1,2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2),(1,2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)ab'::cube AS cube;
+LINE 1: SELECT '(1,2,3)ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2,3)a'::cube AS cube; -- 5
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)a'::cube AS cube;
+LINE 1: SELECT '(1,2,3)a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2)('::cube AS cube; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2)('::cube AS cube;
+LINE 1: SELECT '(1,2)('::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "("
-SELECT '1,2ab'::cube AS cube; -- 6
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2ab'::cube AS cube;
+LINE 1: SELECT '1,2ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1 e7'::cube AS cube; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1 e7'::cube AS cube;
+LINE 1: SELECT '1 e7'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "e"
-SELECT '1,2a'::cube AS cube; -- 7
+SELECT '1,2a'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2a'::cube AS cube;
+LINE 1: SELECT '1,2a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1..2'::cube AS cube;
+LINE 1: SELECT '1..2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ".2"
 --
 -- Testing building cubes from float8 values
 --
-SELECT cube(0::float8);
+SELECT "cube"(0::float8);
  cube 
 ------
  (0)
 (1 row)
 
-SELECT cube(1::float8);
+SELECT "cube"(1::float8);
  cube 
 ------
  (1)
 (1 row)
 
-SELECT cube(1,2);
+SELECT "cube"(1,2);
   cube   
 ---------
  (1),(2)
 (1 row)
 
-SELECT cube(cube(1,2),3);
+SELECT "cube"("cube"(1,2),3);
      cube      
 ---------------
  (1, 3),(2, 3)
 (1 row)
 
-SELECT cube(cube(1,2),3,4);
+SELECT "cube"("cube"(1,2),3,4);
      cube      
 ---------------
  (1, 3),(2, 4)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 5)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 6)
 (1 row)
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
  cube 
 ------
  (0)
 (1 row)
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
         cube         
 ---------------------
  (0, 1, 2),(3, 4, 5)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
 ERROR:  UR and LL arrays must be of same length
-SELECT cube(NULL::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
  cube 
 ------
  
 (1 row)
 
-SELECT cube('{0,1,2}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
         cube_subset        
 ---------------------------
  (5, 3, 1, 1),(8, 7, 6, 6)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
  cube_subset  
 --------------
  (5, 3, 1, 1)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
      cube     
 --------------
  (5, 6, 7, 8)
 (1 row)
 
-SELECT cube(1.37); -- cube_f8
+SELECT "cube"(1.37); -- cube_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(cube(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(1, 24)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 24)
@@ -555,12 +555,12 @@ SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
 DETAIL:  A cube cannot have more than 100 dimensions.
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
@@ -570,37 +570,37 @@ DETAIL:  A cube cannot have more than 100 dimensions.
 --
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -609,97 +609,97 @@ SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1'::cube   < '2'::cube AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1,1'::cube > '1,2'::cube AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,1'::cube < '1,2'::cube AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -707,235 +707,235 @@ SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contains" (the left operand is the cube that entirely encloses the
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
@@ -943,77 +943,77 @@ SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
  cube_distance 
 ---------------
              4
 (1 row)
 
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
  cube_distance 
 ---------------
            0.5
 (1 row)
 
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
  cube_distance 
 ---------------
              0
 (1 row)
 
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
  cube_distance 
 ---------------
            190
 (1 row)
 
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
   cube_distance   
 ------------------
  140.762210837994
 (1 row)
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
+SELECT "cube"('(1,1.2)'::text);
    cube   
 ----------
  (1, 1.2)
 (1 row)
 
-SELECT cube(NULL);
+SELECT "cube"(NULL);
  cube 
 ------
  
 (1 row)
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
+SELECT cube_dim('(0)'::"cube");
  cube_dim 
 ----------
         1
 (1 row)
 
-SELECT cube_dim('(0,0)'::cube);
+SELECT cube_dim('(0,0)'::"cube");
  cube_dim 
 ----------
         2
 (1 row)
 
-SELECT cube_dim('(0,0,0)'::cube);
+SELECT cube_dim('(0,0,0)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
  cube_dim 
 ----------
         5
@@ -1021,55 +1021,55 @@ SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
 
 -- Test of cube_ll_coord function (retrieves LL coodinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ll_coord 
 ---------------
             -1
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ll_coord 
 ---------------
             -2
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
  cube_ll_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
  cube_ll_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
  cube_ll_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
  cube_ll_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
@@ -1077,55 +1077,55 @@ SELECT cube_ll_coord('(42,137)'::cube, 3);
 
 -- Test of cube_ur_coord function (retrieves UR coodinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
  cube_ur_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
  cube_ur_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
@@ -1133,37 +1133,37 @@ SELECT cube_ur_coord('(42,137)'::cube, 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
+SELECT cube_is_point('(0)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
  cube_is_point 
 ---------------
  f
@@ -1171,121 +1171,121 @@ SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 2);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
  cube_enlarge 
 --------------
  (-2),(2)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 2);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-1, -1),(1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
          cube_enlarge          
 -------------------------------
  (-3, -1, -1, -1),(3, 1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 2);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-4, -3),(3, 8)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
    cube_enlarge   
 ------------------
  (-6, -5),(5, 10)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
   cube_enlarge   
 -----------------
  (-2, -1),(1, 6)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
     cube_enlarge     
 ---------------------
  (-0.5, 1),(-0.5, 4)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
@@ -1293,31 +1293,31 @@ SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
       cube_union      
 ----------------------
  (1, 2, 0),(8, 9, 10)
 (1 row)
 
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
         cube_union         
 ---------------------------
  (1, 2, 0, 0),(4, 2, 0, 0)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
   cube_union   
 ---------------
  (1, 2),(4, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
  cube_union 
 ------------
  (1, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
  cube_union 
 ------------
  (1, 2, 0)
@@ -1325,43 +1325,43 @@ SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
    cube_inter    
 -----------------
  (3, 4),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
   cube_inter   
 ---------------
  (3, 4),(6, 5)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
     cube_inter     
 -------------------
  (13, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
     cube_inter    
 ------------------
  (3, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
  cube_inter 
 ------------
  (10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
  cube_inter 
 ------------
  (1, 2, 3)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
      cube_inter      
 ---------------------
  (5, 6, 3),(1, 2, 3)
@@ -1369,13 +1369,13 @@ SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
  cube_size 
 -----------
         88
 (1 row)
 
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(42,137)'::"cube");
  cube_size 
 -----------
          0
@@ -1383,7 +1383,7 @@ SELECT cube_size('(42,137)'::cube);
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 \copy test_cube from 'data/test_cube.data'
 CREATE INDEX test_cube_ix ON test_cube USING gist (c);
 SELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' ORDER BY c;
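The quoting above is mechanical but reflects the keyword change precisely: as a col_name_keyword, CUBE can no longer be used unquoted as a type or function name, though it remains usable as a plain column name. A rough sketch of the compatibility impact on user queries (the table names `boxes` and `t` here are illustrative, not from the patch):

```sql
-- Before this patch: cube is an ordinary identifier.
CREATE TABLE boxes (b cube);                 -- type name: fails after the patch
SELECT cube('(0,0),(1,1)');                  -- function name: fails after the patch

-- After this patch: quoting is required for type and function uses
-- (until a replacement name for the contrib type is chosen).
CREATE TABLE boxes (b "cube");
SELECT "cube"('(0,0),(1,1)');

-- A column named cube is still legal unquoted, since col_name_keyword
-- keywords remain valid column names:
CREATE TABLE corners (cube int);

-- Meanwhile, unquoted CUBE takes on its spec-defined meaning in GROUP BY:
SELECT a, b, count(*) FROM t GROUP BY CUBE (a, b);
```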
diff --git a/contrib/cube/expected/cube_3.out b/contrib/cube/expected/cube_3.out
index 2aa42be..33baec1 100644
--- a/contrib/cube/expected/cube_3.out
+++ b/contrib/cube/expected/cube_3.out
@@ -1,552 +1,552 @@
 --
 --  Test cube datatype
 --
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 --
 -- testing the input and output functions
 --
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1'::cube AS cube;
+SELECT '-1'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1.'::cube AS cube;
+SELECT '1.'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.'::cube AS cube;
+SELECT '-1.'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '.1'::cube AS cube;
+SELECT '.1'::"cube" AS "cube";
  cube  
 -------
  (0.1)
 (1 row)
 
-SELECT '-.1'::cube AS cube;
+SELECT '-.1'::"cube" AS "cube";
   cube  
 --------
  (-0.1)
 (1 row)
 
-SELECT '1.0'::cube AS cube;
+SELECT '1.0'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.0'::cube AS cube;
+SELECT '-1.0'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1e27'::cube AS cube;
+SELECT '1e27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1e27'::cube AS cube;
+SELECT '-1e27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1.0e27'::cube AS cube;
+SELECT '1.0e27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1.0e27'::cube AS cube;
+SELECT '-1.0e27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1e+27'::cube AS cube;
+SELECT '1e+27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1e+27'::cube AS cube;
+SELECT '-1e+27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1.0e+27'::cube AS cube;
+SELECT '1.0e+27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1.0e+27'::cube AS cube;
+SELECT '-1.0e+27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1e-7'::cube AS cube;
+SELECT '1e-7'::"cube" AS "cube";
    cube   
 ----------
  (1e-007)
 (1 row)
 
-SELECT '-1e-7'::cube AS cube;
+SELECT '-1e-7'::"cube" AS "cube";
    cube    
 -----------
  (-1e-007)
 (1 row)
 
-SELECT '1.0e-7'::cube AS cube;
+SELECT '1.0e-7'::"cube" AS "cube";
    cube   
 ----------
  (1e-007)
 (1 row)
 
-SELECT '-1.0e-7'::cube AS cube;
+SELECT '-1.0e-7'::"cube" AS "cube";
    cube    
 -----------
  (-1e-007)
 (1 row)
 
-SELECT '1e-700'::cube AS cube;
+SELECT '1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '-1e-700'::cube AS cube;
+SELECT '-1e-700'::"cube" AS "cube";
  cube 
 ------
  (-0)
 (1 row)
 
-SELECT '1234567890123456'::cube AS cube;
+SELECT '1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (1.23456789012346e+015)
 (1 row)
 
-SELECT '+1234567890123456'::cube AS cube;
+SELECT '+1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (1.23456789012346e+015)
 (1 row)
 
-SELECT '-1234567890123456'::cube AS cube;
+SELECT '-1234567890123456'::"cube" AS "cube";
            cube           
 --------------------------
  (-1.23456789012346e+015)
 (1 row)
 
-SELECT '.1234567890123456'::cube AS cube;
+SELECT '.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '+.1234567890123456'::cube AS cube;
+SELECT '+.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '-.1234567890123456'::"cube" AS "cube";
          cube         
 ----------------------
  (-0.123456789012346)
 (1 row)
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '(1,2)'::cube AS cube;
+SELECT '(1,2)'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '1,2,3,4,5'::cube AS cube;
+SELECT '1,2,3,4,5'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '(0),(1)'::cube AS cube;
+SELECT '(0),(1)'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '[(0),(0)]'::cube AS cube;
+SELECT '[(0),(0)]'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '[(0),(1)]'::cube AS cube;
+SELECT '[(0),(1)]'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
+SELECT ''::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT ''::cube AS cube;
+LINE 1: SELECT ''::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT 'ABC'::cube AS cube;
+SELECT 'ABC'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT 'ABC'::cube AS cube;
+LINE 1: SELECT 'ABC'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "A"
-SELECT '()'::cube AS cube;
+SELECT '()'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '()'::cube AS cube;
+LINE 1: SELECT '()'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[]'::cube AS cube;
+SELECT '[]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[]'::cube AS cube;
+LINE 1: SELECT '[]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[()]'::cube AS cube;
+SELECT '[()]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[()]'::cube AS cube;
+LINE 1: SELECT '[()]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[(1)]'::cube AS cube;
+SELECT '[(1)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1)]'::cube AS cube;
+LINE 1: SELECT '[(1)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),]'::cube AS cube;
+SELECT '[(1),]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),]'::cube AS cube;
+LINE 1: SELECT '[(1),]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),2]'::cube AS cube;
+SELECT '[(1),2]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),2]'::cube AS cube;
+LINE 1: SELECT '[(1),2]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "2"
-SELECT '[(1),(2),(3)]'::cube AS cube;
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2),(3)]'::cube AS cube;
+LINE 1: SELECT '[(1),(2),(3)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '1,'::cube AS cube;
+SELECT '1,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,'::cube AS cube;
+LINE 1: SELECT '1,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,2,'::cube AS cube;
+SELECT '1,2,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2,'::cube AS cube;
+LINE 1: SELECT '1,2,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,,2'::cube AS cube;
+SELECT '1,,2'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,,2'::cube AS cube;
+LINE 1: SELECT '1,,2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,)'::cube AS cube;
+SELECT '(1,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,)'::cube AS cube;
+LINE 1: SELECT '(1,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,2,)'::cube AS cube;
+SELECT '(1,2,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,)'::cube AS cube;
+LINE 1: SELECT '(1,2,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,,2)'::cube AS cube;
+SELECT '(1,,2)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,,2)'::cube AS cube;
+LINE 1: SELECT '(1,,2)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2)],'::cube AS cube;
+LINE 1: SELECT '[(1),(2)],'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2,3),(2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2),(1,2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1),(2),'::cube AS cube; -- 2
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
 ERROR:  bad cube representation
-LINE 1: SELECT '(1),(2),'::cube AS cube;
+LINE 1: SELECT '(1),(2),'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3),(2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2,3),(2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2),(1,2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2),(1,2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)ab'::cube AS cube;
+LINE 1: SELECT '(1,2,3)ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2,3)a'::cube AS cube; -- 5
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)a'::cube AS cube;
+LINE 1: SELECT '(1,2,3)a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2)('::cube AS cube; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2)('::cube AS cube;
+LINE 1: SELECT '(1,2)('::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "("
-SELECT '1,2ab'::cube AS cube; -- 6
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2ab'::cube AS cube;
+LINE 1: SELECT '1,2ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1 e7'::cube AS cube; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1 e7'::cube AS cube;
+LINE 1: SELECT '1 e7'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "e"
-SELECT '1,2a'::cube AS cube; -- 7
+SELECT '1,2a'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2a'::cube AS cube;
+LINE 1: SELECT '1,2a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1..2'::cube AS cube;
+LINE 1: SELECT '1..2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ".2"
 --
 -- Testing building cubes from float8 values
 --
-SELECT cube(0::float8);
+SELECT "cube"(0::float8);
  cube 
 ------
  (0)
 (1 row)
 
-SELECT cube(1::float8);
+SELECT "cube"(1::float8);
  cube 
 ------
  (1)
 (1 row)
 
-SELECT cube(1,2);
+SELECT "cube"(1,2);
   cube   
 ---------
  (1),(2)
 (1 row)
 
-SELECT cube(cube(1,2),3);
+SELECT "cube"("cube"(1,2),3);
      cube      
 ---------------
  (1, 3),(2, 3)
 (1 row)
 
-SELECT cube(cube(1,2),3,4);
+SELECT "cube"("cube"(1,2),3,4);
      cube      
 ---------------
  (1, 3),(2, 4)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 5)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 6)
 (1 row)
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
  cube 
 ------
  (0)
 (1 row)
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
         cube         
 ---------------------
  (0, 1, 2),(3, 4, 5)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
 ERROR:  UR and LL arrays must be of same length
-SELECT cube(NULL::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
  cube 
 ------
  
 (1 row)
 
-SELECT cube('{0,1,2}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
         cube_subset        
 ---------------------------
  (5, 3, 1, 1),(8, 7, 6, 6)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
  cube_subset  
 --------------
  (5, 3, 1, 1)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
      cube     
 --------------
  (5, 6, 7, 8)
 (1 row)
 
-SELECT cube(1.37); -- cube_f8
+SELECT "cube"(1.37); -- cube_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(cube(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(1, 24)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 24)
@@ -555,12 +555,12 @@ SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
 DETAIL:  A cube cannot have more than 100 dimensions.
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
@@ -570,37 +570,37 @@ DETAIL:  A cube cannot have more than 100 dimensions.
 --
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -609,97 +609,97 @@ SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1'::cube   < '2'::cube AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1,1'::cube > '1,2'::cube AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,1'::cube < '1,2'::cube AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -707,235 +707,235 @@ SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contains" (the left operand is the cube that entirely encloses the
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
@@ -943,77 +943,77 @@ SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
  cube_distance 
 ---------------
              4
 (1 row)
 
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
  cube_distance 
 ---------------
            0.5
 (1 row)
 
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
  cube_distance 
 ---------------
              0
 (1 row)
 
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
  cube_distance 
 ---------------
            190
 (1 row)
 
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
   cube_distance   
 ------------------
  140.762210837994
 (1 row)
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
+SELECT "cube"('(1,1.2)'::text);
    cube   
 ----------
  (1, 1.2)
 (1 row)
 
-SELECT cube(NULL);
+SELECT "cube"(NULL);
  cube 
 ------
  
 (1 row)
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
+SELECT cube_dim('(0)'::"cube");
  cube_dim 
 ----------
         1
 (1 row)
 
-SELECT cube_dim('(0,0)'::cube);
+SELECT cube_dim('(0,0)'::"cube");
  cube_dim 
 ----------
         2
 (1 row)
 
-SELECT cube_dim('(0,0,0)'::cube);
+SELECT cube_dim('(0,0,0)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
  cube_dim 
 ----------
         5
@@ -1021,55 +1021,55 @@ SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
 
 -- Test of cube_ll_coord function (retrieves LL coordinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ll_coord 
 ---------------
             -1
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ll_coord 
 ---------------
             -2
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
  cube_ll_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
  cube_ll_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
  cube_ll_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
  cube_ll_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
@@ -1077,55 +1077,55 @@ SELECT cube_ll_coord('(42,137)'::cube, 3);
 
 -- Test of cube_ur_coord function (retrieves UR coordinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
  cube_ur_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
  cube_ur_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
@@ -1133,37 +1133,37 @@ SELECT cube_ur_coord('(42,137)'::cube, 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
+SELECT cube_is_point('(0)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
  cube_is_point 
 ---------------
  f
@@ -1171,121 +1171,121 @@ SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 2);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
  cube_enlarge 
 --------------
  (-2),(2)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 2);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-1, -1),(1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
          cube_enlarge          
 -------------------------------
  (-3, -1, -1, -1),(3, 1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 2);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-4, -3),(3, 8)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
    cube_enlarge   
 ------------------
  (-6, -5),(5, 10)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
   cube_enlarge   
 -----------------
  (-2, -1),(1, 6)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
     cube_enlarge     
 ---------------------
  (-0.5, 1),(-0.5, 4)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
@@ -1293,31 +1293,31 @@ SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
       cube_union      
 ----------------------
  (1, 2, 0),(8, 9, 10)
 (1 row)
 
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
         cube_union         
 ---------------------------
  (1, 2, 0, 0),(4, 2, 0, 0)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
   cube_union   
 ---------------
  (1, 2),(4, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
  cube_union 
 ------------
  (1, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
  cube_union 
 ------------
  (1, 2, 0)
@@ -1325,43 +1325,43 @@ SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
    cube_inter    
 -----------------
  (3, 4),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
   cube_inter   
 ---------------
  (3, 4),(6, 5)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
     cube_inter     
 -------------------
  (13, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
     cube_inter    
 ------------------
  (3, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
  cube_inter 
 ------------
  (10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
  cube_inter 
 ------------
  (1, 2, 3)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
      cube_inter      
 ---------------------
  (5, 6, 3),(1, 2, 3)
@@ -1369,13 +1369,13 @@ SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
  cube_size 
 -----------
         88
 (1 row)
 
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(42,137)'::"cube");
  cube_size 
 -----------
          0
@@ -1383,7 +1383,7 @@ SELECT cube_size('(42,137)'::cube);
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 \copy test_cube from 'data/test_cube.data'
 CREATE INDEX test_cube_ix ON test_cube USING gist (c);
 SELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' ORDER BY c;
diff --git a/contrib/cube/sql/cube.sql b/contrib/cube/sql/cube.sql
index d58974c..da80472 100644
--- a/contrib/cube/sql/cube.sql
+++ b/contrib/cube/sql/cube.sql
@@ -2,141 +2,141 @@
 --  Test cube datatype
 --
 
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 
 --
 -- testing the input and output functions
 --
 
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
-SELECT '-1'::cube AS cube;
-SELECT '1.'::cube AS cube;
-SELECT '-1.'::cube AS cube;
-SELECT '.1'::cube AS cube;
-SELECT '-.1'::cube AS cube;
-SELECT '1.0'::cube AS cube;
-SELECT '-1.0'::cube AS cube;
-SELECT '1e27'::cube AS cube;
-SELECT '-1e27'::cube AS cube;
-SELECT '1.0e27'::cube AS cube;
-SELECT '-1.0e27'::cube AS cube;
-SELECT '1e+27'::cube AS cube;
-SELECT '-1e+27'::cube AS cube;
-SELECT '1.0e+27'::cube AS cube;
-SELECT '-1.0e+27'::cube AS cube;
-SELECT '1e-7'::cube AS cube;
-SELECT '-1e-7'::cube AS cube;
-SELECT '1.0e-7'::cube AS cube;
-SELECT '-1.0e-7'::cube AS cube;
-SELECT '1e-700'::cube AS cube;
-SELECT '-1e-700'::cube AS cube;
-SELECT '1234567890123456'::cube AS cube;
-SELECT '+1234567890123456'::cube AS cube;
-SELECT '-1234567890123456'::cube AS cube;
-SELECT '.1234567890123456'::cube AS cube;
-SELECT '+.1234567890123456'::cube AS cube;
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
+SELECT '-1'::"cube" AS "cube";
+SELECT '1.'::"cube" AS "cube";
+SELECT '-1.'::"cube" AS "cube";
+SELECT '.1'::"cube" AS "cube";
+SELECT '-.1'::"cube" AS "cube";
+SELECT '1.0'::"cube" AS "cube";
+SELECT '-1.0'::"cube" AS "cube";
+SELECT '1e27'::"cube" AS "cube";
+SELECT '-1e27'::"cube" AS "cube";
+SELECT '1.0e27'::"cube" AS "cube";
+SELECT '-1.0e27'::"cube" AS "cube";
+SELECT '1e+27'::"cube" AS "cube";
+SELECT '-1e+27'::"cube" AS "cube";
+SELECT '1.0e+27'::"cube" AS "cube";
+SELECT '-1.0e+27'::"cube" AS "cube";
+SELECT '1e-7'::"cube" AS "cube";
+SELECT '-1e-7'::"cube" AS "cube";
+SELECT '1.0e-7'::"cube" AS "cube";
+SELECT '-1.0e-7'::"cube" AS "cube";
+SELECT '1e-700'::"cube" AS "cube";
+SELECT '-1e-700'::"cube" AS "cube";
+SELECT '1234567890123456'::"cube" AS "cube";
+SELECT '+1234567890123456'::"cube" AS "cube";
+SELECT '-1234567890123456'::"cube" AS "cube";
+SELECT '.1234567890123456'::"cube" AS "cube";
+SELECT '+.1234567890123456'::"cube" AS "cube";
+SELECT '-.1234567890123456'::"cube" AS "cube";
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
-SELECT '(1,2)'::cube AS cube;
-SELECT '1,2,3,4,5'::cube AS cube;
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
+SELECT '(1,2)'::"cube" AS "cube";
+SELECT '1,2,3,4,5'::"cube" AS "cube";
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
-SELECT '(0),(1)'::cube AS cube;
-SELECT '[(0),(0)]'::cube AS cube;
-SELECT '[(0),(1)]'::cube AS cube;
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
+SELECT '(0),(1)'::"cube" AS "cube";
+SELECT '[(0),(0)]'::"cube" AS "cube";
+SELECT '[(0),(1)]'::"cube" AS "cube";
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
-SELECT 'ABC'::cube AS cube;
-SELECT '()'::cube AS cube;
-SELECT '[]'::cube AS cube;
-SELECT '[()]'::cube AS cube;
-SELECT '[(1)]'::cube AS cube;
-SELECT '[(1),]'::cube AS cube;
-SELECT '[(1),2]'::cube AS cube;
-SELECT '[(1),(2),(3)]'::cube AS cube;
-SELECT '1,'::cube AS cube;
-SELECT '1,2,'::cube AS cube;
-SELECT '1,,2'::cube AS cube;
-SELECT '(1,)'::cube AS cube;
-SELECT '(1,2,)'::cube AS cube;
-SELECT '(1,,2)'::cube AS cube;
+SELECT ''::"cube" AS "cube";
+SELECT 'ABC'::"cube" AS "cube";
+SELECT '()'::"cube" AS "cube";
+SELECT '[]'::"cube" AS "cube";
+SELECT '[()]'::"cube" AS "cube";
+SELECT '[(1)]'::"cube" AS "cube";
+SELECT '[(1),]'::"cube" AS "cube";
+SELECT '[(1),2]'::"cube" AS "cube";
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
+SELECT '1,'::"cube" AS "cube";
+SELECT '1,2,'::"cube" AS "cube";
+SELECT '1,,2'::"cube" AS "cube";
+SELECT '(1,)'::"cube" AS "cube";
+SELECT '(1,2,)'::"cube" AS "cube";
+SELECT '(1,,2)'::"cube" AS "cube";
 
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
-SELECT '(1),(2),'::cube AS cube; -- 2
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
-SELECT '(1,2,3)a'::cube AS cube; -- 5
-SELECT '(1,2)('::cube AS cube; -- 5
-SELECT '1,2ab'::cube AS cube; -- 6
-SELECT '1 e7'::cube AS cube; -- 6
-SELECT '1,2a'::cube AS cube; -- 7
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
+SELECT '1,2a'::"cube" AS "cube"; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 
 --
 -- Testing building cubes from float8 values
 --
 
-SELECT cube(0::float8);
-SELECT cube(1::float8);
-SELECT cube(1,2);
-SELECT cube(cube(1,2),3);
-SELECT cube(cube(1,2),3,4);
-SELECT cube(cube(cube(1,2),3,4),5);
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"(0::float8);
+SELECT "cube"(1::float8);
+SELECT "cube"(1,2);
+SELECT "cube"("cube"(1,2),3);
+SELECT "cube"("cube"(1,2),3,4);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
 
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
-SELECT cube(NULL::float[], '{3}'::float[]);
-SELECT cube('{0,1,2}'::float[]);
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
-SELECT cube(1.37); -- cube_f8
-SELECT cube(1.37, 1.37); -- cube_f8_f8
-SELECT cube(cube(1,1), 42); -- cube_c_f8
-SELECT cube(cube(1,2), 42); -- cube_c_f8
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"(1.37); -- cube_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
 
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
 
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 
 --
 -- testing the  operators
@@ -144,190 +144,190 @@ select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
 
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
 
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
-SELECT '1'::cube   < '2'::cube AS bool;
-SELECT '1,1'::cube > '1,2'::cube AS bool;
-SELECT '1,1'::cube < '1,2'::cube AS bool;
-
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
+
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
 
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
 
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
-
-
--- "contains" (the left operand is the cube that entirely encloses the
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
+
+
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
-SELECT cube(NULL);
+SELECT "cube"('(1,1.2)'::text);
+SELECT "cube"(NULL);
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
-SELECT cube_dim('(0,0)'::cube);
-SELECT cube_dim('(0,0,0)'::cube);
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(0)'::"cube");
+SELECT cube_dim('(0,0)'::"cube");
+SELECT cube_dim('(0,0,0)'::"cube");
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
 
 -- Test of cube_ll_coord function (retrieves LL coodinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
-SELECT cube_ll_coord('(42,137)'::cube, 1);
-SELECT cube_ll_coord('(42,137)'::cube, 2);
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
 
 -- Test of cube_ur_coord function (retrieves UR coodinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
-SELECT cube_ur_coord('(42,137)'::cube, 1);
-SELECT cube_ur_coord('(42,137)'::cube, 2);
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
-SELECT cube_is_point('(0,1,2)'::cube);
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0)'::"cube");
+SELECT cube_is_point('(0,1,2)'::"cube");
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
-SELECT cube_enlarge('(0)'::cube, 0, 1);
-SELECT cube_enlarge('(0)'::cube, 0, 2);
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
-SELECT cube_enlarge('(0)'::cube, 1, 0);
-SELECT cube_enlarge('(0)'::cube, 1, 1);
-SELECT cube_enlarge('(0)'::cube, 1, 2);
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
-SELECT cube_enlarge('(0)'::cube, -1, 0);
-SELECT cube_enlarge('(0)'::cube, -1, 1);
-SELECT cube_enlarge('(0)'::cube, -1, 2);
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
+SELECT cube_size('(42,137)'::"cube");
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 
 \copy test_cube from 'data/test_cube.data'
 
diff --git a/contrib/earthdistance/earthdistance--1.0.sql b/contrib/earthdistance/earthdistance--1.0.sql
index 4af9062..ad22f65 100644
--- a/contrib/earthdistance/earthdistance--1.0.sql
+++ b/contrib/earthdistance/earthdistance--1.0.sql
@@ -27,10 +27,10 @@ AS 'SELECT ''6378168''::float8';
 -- and that the point must be very near the surface of the sphere
 -- centered about the origin with the radius of the earth.
 
-CREATE DOMAIN earth AS cube
+CREATE DOMAIN earth AS "cube"
   CONSTRAINT not_point check(cube_is_point(value))
   CONSTRAINT not_3d check(cube_dim(value) <= 3)
-  CONSTRAINT on_surface check(abs(cube_distance(value, '(0)'::cube) /
+  CONSTRAINT on_surface check(abs(cube_distance(value, '(0)'::"cube") /
   earth() - 1) < '10e-7'::float8);
 
 CREATE FUNCTION sec_to_gc(float8)
@@ -49,7 +49,7 @@ CREATE FUNCTION ll_to_earth(float8, float8)
 RETURNS earth
 LANGUAGE SQL
 IMMUTABLE STRICT
-AS 'SELECT cube(cube(cube(earth()*cos(radians($1))*cos(radians($2))),earth()*cos(radians($1))*sin(radians($2))),earth()*sin(radians($1)))::earth';
+AS 'SELECT "cube"("cube"("cube"(earth()*cos(radians($1))*cos(radians($2))),earth()*cos(radians($1))*sin(radians($2))),earth()*sin(radians($1)))::earth';
 
 CREATE FUNCTION latitude(earth)
 RETURNS float8
@@ -70,7 +70,7 @@ IMMUTABLE STRICT
 AS 'SELECT sec_to_gc(cube_distance($1, $2))';
 
 CREATE FUNCTION earth_box(earth, float8)
-RETURNS cube
+RETURNS "cube"
 LANGUAGE SQL
 IMMUTABLE STRICT
 AS 'SELECT cube_enlarge($1, gc_to_sec($2), 3)';
diff --git a/contrib/earthdistance/expected/earthdistance.out b/contrib/earthdistance/expected/earthdistance.out
index 9bd556f..f99276f 100644
--- a/contrib/earthdistance/expected/earthdistance.out
+++ b/contrib/earthdistance/expected/earthdistance.out
@@ -9,7 +9,7 @@
 --
 CREATE EXTENSION earthdistance;  -- fail, must install cube first
 ERROR:  required extension "cube" is not installed
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 CREATE EXTENSION earthdistance;
 --
 -- The radius of the Earth we are using.
@@ -892,7 +892,7 @@ SELECT cube_dim(ll_to_earth(0,0)) <= 3;
  t
 (1 row)
 
-SELECT abs(cube_distance(ll_to_earth(0,0), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(0,0), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
  ?column? 
 ----------
@@ -910,7 +910,7 @@ SELECT cube_dim(ll_to_earth(30,60)) <= 3;
  t
 (1 row)
 
-SELECT abs(cube_distance(ll_to_earth(30,60), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(30,60), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
  ?column? 
 ----------
@@ -928,7 +928,7 @@ SELECT cube_dim(ll_to_earth(60,90)) <= 3;
  t
 (1 row)
 
-SELECT abs(cube_distance(ll_to_earth(60,90), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(60,90), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
  ?column? 
 ----------
@@ -946,7 +946,7 @@ SELECT cube_dim(ll_to_earth(-30,-90)) <= 3;
  t
 (1 row)
 
-SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
  ?column? 
 ----------
@@ -959,35 +959,35 @@ SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::cube) / earth() - 1) <
 -- list what's installed
 \dT
                                               List of data types
- Schema | Name  |                                         Description                                         
---------+-------+---------------------------------------------------------------------------------------------
- public | cube  | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
- public | earth | 
+ Schema |  Name  |                                         Description                                         
+--------+--------+---------------------------------------------------------------------------------------------
+ public | "cube" | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
+ public | earth  | 
 (2 rows)
 
-drop extension cube;  -- fail, earthdistance requires it
+drop extension "cube";  -- fail, earthdistance requires it
 ERROR:  cannot drop extension cube because other objects depend on it
 DETAIL:  extension earthdistance depends on extension cube
 HINT:  Use DROP ... CASCADE to drop the dependent objects too.
 drop extension earthdistance;
-drop type cube;  -- fail, extension cube requires it
-ERROR:  cannot drop type cube because extension cube requires it
+drop type "cube";  -- fail, extension cube requires it
+ERROR:  cannot drop type "cube" because extension cube requires it
 HINT:  You can drop extension cube instead.
 -- list what's installed
 \dT
-                                             List of data types
- Schema | Name |                                         Description                                         
---------+------+---------------------------------------------------------------------------------------------
- public | cube | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
+                                              List of data types
+ Schema |  Name  |                                         Description                                         
+--------+--------+---------------------------------------------------------------------------------------------
+ public | "cube" | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
 (1 row)
 
-create table foo (f1 cube, f2 int);
-drop extension cube;  -- fail, foo.f1 requires it
+create table foo (f1 "cube", f2 int);
+drop extension "cube";  -- fail, foo.f1 requires it
 ERROR:  cannot drop extension cube because other objects depend on it
-DETAIL:  table foo column f1 depends on type cube
+DETAIL:  table foo column f1 depends on type "cube"
 HINT:  Use DROP ... CASCADE to drop the dependent objects too.
 drop table foo;
-drop extension cube;
+drop extension "cube";
 -- list what's installed
 \dT
      List of data types
@@ -1008,7 +1008,7 @@ drop extension cube;
 (0 rows)
 
 create schema c;
-create extension cube with schema c;
+create extension "cube" with schema c;
 -- list what's installed
 \dT public.*
      List of data types
@@ -1029,23 +1029,23 @@ create extension cube with schema c;
 (0 rows)
 
 \dT c.*
-                                              List of data types
- Schema |  Name  |                                         Description                                         
---------+--------+---------------------------------------------------------------------------------------------
- c      | c.cube | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
+                                               List of data types
+ Schema |   Name   |                                         Description                                         
+--------+----------+---------------------------------------------------------------------------------------------
+ c      | c."cube" | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
 (1 row)
 
-create table foo (f1 c.cube, f2 int);
-drop extension cube;  -- fail, foo.f1 requires it
+create table foo (f1 c."cube", f2 int);
+drop extension "cube";  -- fail, foo.f1 requires it
 ERROR:  cannot drop extension cube because other objects depend on it
-DETAIL:  table foo column f1 depends on type c.cube
+DETAIL:  table foo column f1 depends on type c."cube"
 HINT:  Use DROP ... CASCADE to drop the dependent objects too.
 drop schema c;  -- fail, cube requires it
 ERROR:  cannot drop schema c because other objects depend on it
 DETAIL:  extension cube depends on schema c
-table foo column f1 depends on type c.cube
+table foo column f1 depends on type c."cube"
 HINT:  Use DROP ... CASCADE to drop the dependent objects too.
-drop extension cube cascade;
+drop extension "cube" cascade;
 NOTICE:  drop cascades to table foo column f1
 \d foo
       Table "public.foo"
diff --git a/contrib/earthdistance/sql/earthdistance.sql b/contrib/earthdistance/sql/earthdistance.sql
index 8604502..35dd9b8 100644
--- a/contrib/earthdistance/sql/earthdistance.sql
+++ b/contrib/earthdistance/sql/earthdistance.sql
@@ -9,7 +9,7 @@
 --
 
 CREATE EXTENSION earthdistance;  -- fail, must install cube first
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 CREATE EXTENSION earthdistance;
 
 --
@@ -284,19 +284,19 @@ SELECT earth_box(ll_to_earth(90,180),
 
 SELECT is_point(ll_to_earth(0,0));
 SELECT cube_dim(ll_to_earth(0,0)) <= 3;
-SELECT abs(cube_distance(ll_to_earth(0,0), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(0,0), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
 SELECT is_point(ll_to_earth(30,60));
 SELECT cube_dim(ll_to_earth(30,60)) <= 3;
-SELECT abs(cube_distance(ll_to_earth(30,60), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(30,60), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
 SELECT is_point(ll_to_earth(60,90));
 SELECT cube_dim(ll_to_earth(60,90)) <= 3;
-SELECT abs(cube_distance(ll_to_earth(60,90), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(60,90), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
 SELECT is_point(ll_to_earth(-30,-90));
 SELECT cube_dim(ll_to_earth(-30,-90)) <= 3;
-SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
 
 --
@@ -306,22 +306,22 @@ SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::cube) / earth() - 1) <
 -- list what's installed
 \dT
 
-drop extension cube;  -- fail, earthdistance requires it
+drop extension "cube";  -- fail, earthdistance requires it
 
 drop extension earthdistance;
 
-drop type cube;  -- fail, extension cube requires it
+drop type "cube";  -- fail, extension cube requires it
 
 -- list what's installed
 \dT
 
-create table foo (f1 cube, f2 int);
+create table foo (f1 "cube", f2 int);
 
-drop extension cube;  -- fail, foo.f1 requires it
+drop extension "cube";  -- fail, foo.f1 requires it
 
 drop table foo;
 
-drop extension cube;
+drop extension "cube";
 
 -- list what's installed
 \dT
@@ -330,7 +330,7 @@ drop extension cube;
 
 create schema c;
 
-create extension cube with schema c;
+create extension "cube" with schema c;
 
 -- list what's installed
 \dT public.*
@@ -338,13 +338,13 @@ create extension cube with schema c;
 \do public.*
 \dT c.*
 
-create table foo (f1 c.cube, f2 int);
+create table foo (f1 c."cube", f2 int);
 
-drop extension cube;  -- fail, foo.f1 requires it
+drop extension "cube";  -- fail, foo.f1 requires it
 
 drop schema c;  -- fail, cube requires it
 
-drop extension cube cascade;
+drop extension "cube" cascade;
 
 \d foo
 
#3 Heikki Linnakangas
hlinnakangas@vmware.com
In reply to: Atri Sharma (#2)
Re: WIP Patch for GROUPING SETS phase 1

On 08/13/2014 09:43 PM, Atri Sharma wrote:

> Sorry, forgot to attach the patch for fixing cube in contrib, which breaks
> since we now reserve "cube" keyword. Please find attached the same.

Ugh, that will make everyone using the cube extension unhappy. After
this patch, they will have to quote contrib's cube type and functions
every time.

I think we should bite the bullet and rename the extension, and its
"cube" type and functions. For an application, having to suddenly quote
it has the same effect as renaming it; you'll have to find all the
callers and change them. And in the long-run, it's clearly better to
have an unambiguous name.

- Heikki

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#4 Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Heikki Linnakangas (#3)
Re: WIP Patch for GROUPING SETS phase 1

>>>>> "Heikki" == Heikki Linnakangas <hlinnakangas@vmware.com> writes:

> On 08/13/2014 09:43 PM, Atri Sharma wrote:
>
> Sorry, forgot to attach the patch for fixing cube in contrib,
> which breaks since we now reserve "cube" keyword. Please find
> attached the same.

Heikki> Ugh, that will make everyone using the cube extension
Heikki> unhappy. After this patch, they will have to quote contrib's
Heikki> cube type and functions every time.

Heikki> I think we should bite the bullet and rename the extension,

I agree, the contrib/cube patch as posted is purely so we could test
everything without having to argue over the new name first. (And it
is posted separately from the main patch because of its length and
utter boringness.)

However, even if/when a new name is chosen, there's the question of
how to make the upgrade path easiest. Once CUBE is reserved,
up-to-date pg_dump will quote all uses of the "cube" type and function
when dumping an older database (except inside function bodies of
course), so there may be merit in keeping a "cube" domain over the new
type, and maybe also merit in keeping the extension name.
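
That compatibility-domain idea might look something like this (a sketch only; "hypercube" here stands in for whatever new name is eventually chosen):

```sql
-- Hypothetical shim, assuming the renamed type is called hypercube.
-- The old name survives as a domain over the new type; since CUBE is
-- now a reserved word, the domain name must be quoted, which matches
-- what an up-to-date pg_dump emits when dumping an older database.
CREATE DOMAIN "cube" AS hypercube;

-- Old-style DDL from a dump would then restore against the domain:
CREATE TABLE test_cube (c "cube");
```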

So what's the new type name going to be? cuboid? hypercube?
geometric_cube? n_dimensional_box?

--
Andrew (irc:RhodiumToad)


#5 Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Atri Sharma (#1)
Re: WIP Patch for GROUPING SETS phase 1

A progress update:

Atri> We envisage that handling of arbitrary grouping sets will be
Atri> best done by having the planner generating an Append of
Atri> multiple aggregation paths, presumably with some way of moving
Atri> the original input path to a CTE. We have not really explored
Atri> yet how hard this will be; suggestions are welcome.

This idea was abandoned.

Instead, we have implemented full support for arbitrary grouping sets
by means of a chaining system:

explain (verbose, costs off) select four, ten, hundred, count(*) from onek group by cube(four,ten,hundred);

QUERY PLAN
-----------------------------------------------------------------------------------------------------
GroupAggregate
Output: four, ten, hundred, count(*)
Grouping Sets: (onek.hundred, onek.four, onek.ten), (onek.hundred, onek.four), (onek.hundred), ()
-> Sort
Output: four, ten, hundred
Sort Key: onek.hundred, onek.four, onek.ten
-> ChainAggregate
Output: four, ten, hundred
Grouping Sets: (onek.ten, onek.hundred), (onek.ten)
-> Sort
Output: four, ten, hundred
Sort Key: onek.ten, onek.hundred
-> ChainAggregate
Output: four, ten, hundred
Grouping Sets: (onek.four, onek.ten), (onek.four)
-> Sort
Output: four, ten, hundred
Sort Key: onek.four, onek.ten
-> Seq Scan on public.onek
Output: four, ten, hundred
(20 rows)

The ChainAggregate nodes use a tuplestore to communicate with the
GroupAggregate node at the top of the chain; they pass through input
tuples unchanged, and write aggregated result rows to the tuplestore,
which the top node then returns once it has finished its own result.
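
The behavior described above can be sketched as a simplified Python model (not the actual C executor code; a single count(*) aggregate and a sorted input are assumed for illustration):

```python
# Conceptual sketch of a ChainAggregate node: it yields every input
# tuple unchanged to the node above it, while aggregating its own
# grouping on the side and appending finished result rows to a shared
# "tuplestore" list that the top GroupAggregate drains at the end.

def chain_aggregate(input_tuples, group_cols, tuplestore):
    """Pass tuples through; spill aggregated rows to the tuplestore.

    Assumes input_tuples arrive sorted on group_cols, and models only
    a count(*) aggregate for simplicity.
    """
    current_key, count = None, 0
    for tup in input_tuples:
        key = tuple(tup[c] for c in group_cols)
        if current_key is not None and key != current_key:
            tuplestore.append(current_key + (count,))  # group finished
            count = 0
        current_key, count = key, count + 1
        yield tup  # unchanged pass-through to the next node up
    if current_key is not None:
        tuplestore.append(current_key + (count,))  # last group
```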

The organization of the planner code seems to be actively hostile to
any attempt to break out new CTEs on the fly, or to plan parts of the
query more than once; the method above seems to be the easiest way to
avoid those issues.

Atri> At this point we are more interested in design review rather
Atri> than necessarily committing this patch in its current state.

This no longer applies; we expect to post within a day or two an
updated patch with full functionality.

--
Andrew (irc:RhodiumToad)


#6Heikki Linnakangas
hlinnakangas@vmware.com
In reply to: Andrew Gierth (#5)
Re: WIP Patch for GROUPING SETS phase 1

On 08/21/2014 01:28 PM, Andrew Gierth wrote:

A progress update:

Atri> We envisage that handling of arbitrary grouping sets will be
Atri> best done by having the planner generating an Append of
Atri> multiple aggregation paths, presumably with some way of moving
Atri> the original input path to a CTE. We have not really explored
Atri> yet how hard this will be; suggestions are welcome.

This idea was abandoned.

Instead, we have implemented full support for arbitrary grouping sets
by means of a chaining system:

explain (verbose, costs off) select four, ten, hundred, count(*) from onek group by cube(four,ten,hundred);

QUERY PLAN
-----------------------------------------------------------------------------------------------------
GroupAggregate
Output: four, ten, hundred, count(*)
Grouping Sets: (onek.hundred, onek.four, onek.ten), (onek.hundred, onek.four), (onek.hundred), ()
-> Sort
Output: four, ten, hundred
Sort Key: onek.hundred, onek.four, onek.ten
-> ChainAggregate
Output: four, ten, hundred
Grouping Sets: (onek.ten, onek.hundred), (onek.ten)
-> Sort
Output: four, ten, hundred
Sort Key: onek.ten, onek.hundred
-> ChainAggregate
Output: four, ten, hundred
Grouping Sets: (onek.four, onek.ten), (onek.four)
-> Sort
Output: four, ten, hundred
Sort Key: onek.four, onek.ten
-> Seq Scan on public.onek
Output: four, ten, hundred
(20 rows)

Uh, that's ugly. The EXPLAIN output, I mean; as an implementation detail
chaining the nodes might be reasonable. But the above gets unreadable if
you have more than a few grouping sets.

The ChainAggregate nodes use a tuplestore to communicate with the
GroupAggregate node at the top of the chain; they pass through input
tuples unchanged, and write aggregated result rows to the tuplestore,
which the top node then returns once it has finished its own result.

Hmm, so there's a "magic link" between the GroupAggregate at the top and
all the ChainAggregates, via the tuplestore. That may be fine, we have
special rules in passing information between bitmap scan nodes too.

But rather than chain multiple ChainAggregate nodes, how about just
doing all the work in the top GroupAggregate node?

Atri> At this point we are more interested in design review rather
Atri> than necessarily committing this patch in its current state.

This no longer applies; we expect to post within a day or two an
updated patch with full functionality.

Ok, cool

- Heikki


#7Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andrew Gierth (#4)
Re: WIP Patch for GROUPING SETS phase 1

Andrew Gierth <andrew@tao11.riddles.org.uk> writes:

"Heikki" == Heikki Linnakangas <hlinnakangas@vmware.com> writes:
Heikki> I think we should bite the bullet and rename the extension,

I agree, the contrib/cube patch as posted is purely so we could test
everything without having to argue over the new name first.

I wonder if you've tried hard enough to avoid reserving the keyword.

I think that the cube extension is not going to be the only casualty
if "cube" becomes a reserved word --- that seems like a name that
could be in use in lots of applications. ("What do you mean, 9.5
breaks our database for tracking office space?") It would be worth
quite a bit of effort to avoid that.

regards, tom lane


#8Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Heikki Linnakangas (#6)
Re: WIP Patch for GROUPING SETS phase 1

"Heikki" == Heikki Linnakangas <hlinnakangas@vmware.com> writes:

Heikki> Uh, that's ugly. The EXPLAIN out I mean; as an implementation
Heikki> detail chaining the nodes might be reasonable. But the above
Heikki> gets unreadable if you have more than a few grouping sets.

It's good for highlighting performance issues in EXPLAIN, too.

4096 grouping sets takes about a third of a second to plan and execute,
but something like a minute to generate the EXPLAIN output. However,
for more realistic sizes, plan time is not significant and explain
takes only about 40ms for 256 grouping sets.

(To avoid resource exhaustion issues, we have set a limit of,
currently, 4096 grouping sets per query level. Without such a limit,
it is easy to write queries that would take TBs of memory to parse or
plan. MSSQL and DB2 have similar limits, I'm told.)
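
The arithmetic behind such a limit can be sketched as follows (per the spec's expansion rules; illustrative only):

```python
# Per the SQL spec, CUBE over n elements expands to 2**n grouping
# sets and ROLLUP over n elements to n+1 (the prefixes, including the
# empty set); multiple such clauses in one GROUP BY multiply together,
# so set counts blow up quickly.

def cube_sets(n):
    """Number of grouping sets produced by CUBE(c1, ..., cn)."""
    return 2 ** n

def rollup_sets(n):
    """Number of grouping sets produced by ROLLUP(c1, ..., cn)."""
    return n + 1

# GROUP BY CUBE(a,b,c), ROLLUP(d,e) expands to 2**3 * 3 = 24 sets;
# a CUBE of just 12 columns already reaches 4096.
```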

The ChainAggregate nodes use a tuplestore to communicate with the
GroupAggregate node at the top of the chain; they pass through input
tuples unchanged, and write aggregated result rows to the tuplestore,
which the top node then returns once it has finished its own result.

Heikki> Hmm, so there's a "magic link" between the GroupAggregate at
Heikki> the top and all the ChainAggregates, via the tuplestore. That
Heikki> may be fine, we have special rules in passing information
Heikki> between bitmap scan nodes too.

Eh. It's far from a perfect solution, but the planner doesn't lend itself
to perfect solutions.

Heikki> But rather than chain multiple ChainAggregate nodes, how
Heikki> about just doing all the work in the top GroupAggregate node?

It was easier this way. (How would you expect to do it all in the top
node when each subset of the grouping sets list needs to see the data
in a different order?)

--
Andrew (irc:RhodiumToad)


#9Pavel Stehule
pavel.stehule@gmail.com
In reply to: Tom Lane (#7)
Re: WIP Patch for GROUPING SETS phase 1

2014-08-21 17:00 GMT+02:00 Tom Lane <tgl@sss.pgh.pa.us>:

Andrew Gierth <andrew@tao11.riddles.org.uk> writes:

"Heikki" == Heikki Linnakangas <hlinnakangas@vmware.com> writes:
Heikki> I think we should bite the bullet and rename the extension,

I agree, the contrib/cube patch as posted is purely so we could test
everything without having to argue over the new name first.

I wonder if you've tried hard enough to avoid reserving the keyword.

I think that the cube extension is not going to be the only casualty
if "cube" becomes a reserved word --- that seems like a name that
could be in use in lots of applications. ("What do you mean, 9.5
breaks our database for tracking office space?") It would be worth
quite a bit of effort to avoid that.

My prototypes worked without reserved keywords, if I remember correctly,
but the analyzer is relatively complex.

Pavel


regards, tom lane


#10Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Tom Lane (#7)
Re: WIP Patch for GROUPING SETS phase 1

"Tom" == Tom Lane <tgl@sss.pgh.pa.us> writes:

I agree, the contrib/cube patch as posted is purely so we could test
everything without having to argue over the new name first.

Tom> I wonder if you've tried hard enough to avoid reserving the keyword.

GROUP BY cube(a,b) is currently legal syntax and means something completely
incompatible to what the spec requires.

GROUP BY GROUPING SETS (cube(a,b), c) -- is that cube(a,b) an expression
to group on, or a list of grouping sets to expand?

GROUP BY (cube(a,b)) -- should that be an error, or silently treat it
as a function call rather than a grouping set? What about GROUP BY
GROUPING SETS ((cube(a,b))) ? (both are errors in our patch)

Accepting those as valid implies a degree of possible confusion that I
personally regard as quite questionable. Previous discussion seemed to
have accepted that contrib/cube was going to have to be renamed.
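
For reference, the spec-mandated expansion of CUBE that creates this ambiguity can be sketched like so (illustrative Python, not parser code):

```python
from itertools import combinations

# GROUP BY CUBE(a,b) must expand to the power set of its arguments,
# each subset being one grouping set -- quite different from calling a
# user-defined function cube(a,b) and grouping on its result value.

def expand_cube(cols):
    """Expand CUBE(cols) into its list of grouping sets."""
    return [list(c) for r in range(len(cols), -1, -1)
                    for c in combinations(cols, r)]

# expand_cube(['a', 'b']) -> [['a', 'b'], ['a'], ['b'], []]
```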

Tom> I think that the cube extension is not going to be the only
Tom> casualty if "cube" becomes a reserved word --- that seems like a
Tom> name that could be in use in lots of applications. ("What do
Tom> you mean, 9.5 breaks our database for tracking office space?")
Tom> It would be worth quite a bit of effort to avoid that.

It has been a reserved word in the spec since, what, 1999? and it is a
reserved word in mssql, oracle, db2, etc.?

It only needs to be a col_name_keyword, so it still works as a table
or column name (as usual we are less strict than the spec in that
respect). I'm looking into whether it can be made unreserved, but I
have serious doubts about this being a good idea.

--
Andrew (irc:RhodiumToad)


#11Pavel Stehule
pavel.stehule@gmail.com
In reply to: Andrew Gierth (#10)
Re: WIP Patch for GROUPING SETS phase 1

2014-08-21 17:58 GMT+02:00 Andrew Gierth <andrew@tao11.riddles.org.uk>:

"Tom" == Tom Lane <tgl@sss.pgh.pa.us> writes:

I agree, the contrib/cube patch as posted is purely so we could test
everything without having to argue over the new name first.

Tom> I wonder if you've tried hard enough to avoid reserving the keyword.

GROUP BY cube(a,b) is currently legal syntax and means something
completely
incompatible to what the spec requires.

GROUP BY GROUPING SETS (cube(a,b), c) -- is that cube(a,b) an expression
to group on, or a list of grouping sets to expand?

GROUP BY (cube(a,b)) -- should that be an error, or silently treat it
as a function call rather than a grouping set? What about GROUP BY
GROUPING SETS ((cube(a,b))) ? (both are errors in our patch)

Accepting those as valid implies a degree of possible confusion that I
personally regard as quite questionable. Previous discussion seemed to
have accepted that contrib/cube was going to have to be renamed.

Tom> I think that the cube extension is not going to be the only
Tom> casualty if "cube" becomes a reserved word --- that seems like a
Tom> name that could be in use in lots of applications. ("What do
Tom> you mean, 9.5 breaks our database for tracking office space?")
Tom> It would be worth quite a bit of effort to avoid that.

It has been a reserved word in the spec since, what, 1999? and it is a
reserved word in mssql, oracle, db2, etc.?

It only needs to be a col_name_keyword, so it still works as a table
or column name (as usual we are less strict than the spec in that
respect). I'm looking into whether it can be made unreserved, but I
have serious doubts about this being a good idea.

+1

The contrib module should be renamed; moreover, its current name is
confusingly at odds with the functionality usually associated with the
words CUBE and ROLLUP.

Pavel


--
Andrew (irc:RhodiumToad)


#12Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andrew Gierth (#10)
Re: WIP Patch for GROUPING SETS phase 1

Andrew Gierth <andrew@tao11.riddles.org.uk> writes:

"Tom" == Tom Lane <tgl@sss.pgh.pa.us> writes:
Tom> I wonder if you've tried hard enough to avoid reserving the keyword.

GROUP BY cube(a,b) is currently legal syntax and means something completely
incompatible to what the spec requires.

Well, if there are any extant applications that use that exact phrasing,
they're going to be broken in any case. That does not mean that we have
to break every other appearance of "cube". I think that special-casing
appearances of cube(...) in GROUP BY lists might be a feasible approach.

Basically, I'm afraid that unilaterally renaming cube is going to break
enough applications that there will be more people who flat out don't
want this patch than there will be who get benefit from it, and we end
up voting to revert the feature altogether. If you'd like to take that
risk then feel free to charge full steam ahead, but don't say you were
not warned. And don't bother arguing that CUBE is reserved according to
the standard, because that will not make one damn bit of difference
to the people who will be unhappy.

regards, tom lane


#13Merlin Moncure
mmoncure@gmail.com
In reply to: Tom Lane (#12)
Re: WIP Patch for GROUPING SETS phase 1

On Thu, Aug 21, 2014 at 1:13 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Andrew Gierth <andrew@tao11.riddles.org.uk> writes:

"Tom" == Tom Lane <tgl@sss.pgh.pa.us> writes:
Tom> I wonder if you've tried hard enough to avoid reserving the keyword.

GROUP BY cube(a,b) is currently legal syntax and means something completely
incompatible to what the spec requires.

Well, if there are any extant applications that use that exact phrasing,
they're going to be broken in any case. That does not mean that we have
to break every other appearance of "cube". I think that special-casing
appearances of cube(...) in GROUP BY lists might be a feasible approach.

Basically, I'm afraid that unilaterally renaming cube is going to break
enough applications that there will be more people who flat out don't
want this patch than there will be who get benefit from it, and we end
up voting to revert the feature altogether. If you'd like to take that
risk then feel free to charge full steam ahead, but don't say you were
not warned. And don't bother arguing that CUBE is reserved according to
the standard, because that will not make one damn bit of difference
to the people who will be unhappy.

I have to respectfully disagree. Certainly, if there is some
reasonable way to not have to change 'cube' then great. But the
tonnage rule applies here: even considering compatibility issues, when
considering the importance of standard SQL (and, I might add,
exceptionally useful) syntax and a niche extension, 'cube' is going to
have to get out of the way. There are few valid reasons to break
compatibility, but blocking standard syntax is definitely one of them.

merlin


#14Andrew Dunstan
andrew@dunslane.net
In reply to: Merlin Moncure (#13)
Re: WIP Patch for GROUPING SETS phase 1

On 08/21/2014 02:48 PM, Merlin Moncure wrote:

Basically, I'm afraid that unilaterally renaming cube is going to break
enough applications that there will be more people who flat out don't
want this patch than there will be who get benefit from it, and we end
up voting to revert the feature altogether. If you'd like to take that
risk then feel free to charge full steam ahead, but don't say you were
not warned. And don't bother arguing that CUBE is reserved according to
the standard, because that will not make one damn bit of difference
to the people who will be unhappy.

I have to respectfully disagree. Certainly, if there is some
reasonable way to not have to change 'cube' then great. But the
tonnage rule applies here: even considering compatibility issues, when
considering the importance of standard SQL (and, I might add,
exceptionally useful) syntax and a niche extension, 'cube' is going to
have to get out of the way. There are few valid reasons to break
compatibility, but blocking standard syntax is definitely one of them.

I'm inclined to think that the audience for this is far larger than the
audience for the cube extension, which I have not once encountered in
the field.

But I guess we all have different experiences.

cheers

andrew


#15Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andrew Dunstan (#14)
Re: WIP Patch for GROUPING SETS phase 1

Andrew Dunstan <andrew@dunslane.net> writes:

I'm inclined to think that the audience for this is far larger than the
audience for the cube extension, which I have not once encountered in
the field.

Perhaps so. I would really prefer not to have to get into estimating
how many people will be inconvenienced how badly. It's clear to me
that not a lot of sweat has been put into seeing if we can avoid
reserving the keyword, and I think we need to put in that effort.
We've jumped through some pretty high hoops to avoid reserving keywords in
the past, so I don't think this patch should get a free pass on the issue.

Especially considering that renaming the cube extension isn't exactly
going to be zero work: there is no infrastructure for such a thing.
A patch consisting merely of s/cube/foobar/g isn't going to cut it.

regards, tom lane


#16Stephen Frost
sfrost@snowman.net
In reply to: Tom Lane (#15)
Re: WIP Patch for GROUPING SETS phase 1

* Tom Lane (tgl@sss.pgh.pa.us) wrote:

Andrew Dunstan <andrew@dunslane.net> writes:

I'm inclined to think that the audience for this is far larger than the
audience for the cube extension, which I have not once encountered in
the field.

+1

Perhaps so. I would really prefer not to have to get into estimating
how many people will be inconvenienced how badly. It's clear to me
that not a lot of sweat has been put into seeing if we can avoid
reserving the keyword, and I think we need to put in that effort.

I'm with Merlin on this one, it's going to end up happening and I don't
know that 9.5 is any worse than post-9.5 to make this change.

We've jumped through some pretty high hoops to avoid reserving keywords in
the past, so I don't think this patch should get a free pass on the issue.

This doesn't feel like an attempt to get a free pass on anything- it's
not being proposed as fully reserved and there is spec-defined syntax
which needs to be supported. If we can get away with keeping it
unreserved while not making it utterly confusing for users and
convoluting the code, great, but that doesn't seem likely to pan out.

Especially considering that renaming the cube extension isn't exactly
going to be zero work: there is no infrastructure for such a thing.
A patch consisting merely of s/cube/foobar/g isn't going to cut it.

This is a much more interesting challenge to deal with, but perhaps we
could include a perl script or pg_upgrade snippet for users to run to
see if they have the extension installed and to do some magic before the
actual upgrade to handle the rename..?

Thanks,

Stephen

#17David Fetter
david@fetter.org
In reply to: Stephen Frost (#16)
Re: WIP Patch for GROUPING SETS phase 1

On Thu, Aug 21, 2014 at 06:15:33PM -0400, Stephen Frost wrote:

* Tom Lane (tgl@sss.pgh.pa.us) wrote:

Andrew Dunstan <andrew@dunslane.net> writes:

I'm inclined to think that the audience for this is far larger than the
audience for the cube extension, which I have not once encountered in
the field.

+1

I haven't seen it in the field either.

I'd also like to mention that the mere presence of a module in our
contrib/ directory can reflect bad decisions that need reversing just
as easily as it can the presence of vitally important utilities that
need to be preserved. I'm pretty sure the cube extension arrived
after the CUBE keyword in SQL, which makes that an error on our part
if true.

Especially considering that renaming the cube extension isn't
exactly going to be zero work: there is no infrastructure for such
a thing. A patch consisting merely of s/cube/foobar/g isn't going
to cut it.

This is a much more interesting challenge to deal with, but perhaps
we could include a perl script or pg_upgrade snippet for users to
run to see if they have the extension installed and to do some magic
before the actual upgrade to handle the rename..?

+1 for doing this. Do we want to make some kind of generator for such
things? It doesn't seem hard in principle, but I haven't tried coding
it up yet.

Cheers,
David.
--
David Fetter <david@fetter.org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david.fetter@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


#18Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Stephen Frost (#16)
Re: WIP Patch for GROUPING SETS phase 1

"Stephen" == Stephen Frost <sfrost@snowman.net> writes:

I'm inclined to think that the audience for this is far larger
than the audience for the cube extension, which I have not once
encountered in the field.

Stephen> +1

Most of my encounters with cube have been me suggesting it to people
on IRC as a possible approach for solving certain kinds of performance
problems by converting them to N-dimensional spatial containment
queries. Sometimes this works well, but it doesn't seem to be an
approach that many people discover on their own.

We've jumped through some pretty high hoops to avoid reserving
keywords in the past, so I don't think this patch should get a
free pass on the issue.

Stephen> This doesn't feel like an attempt to get a free pass on
Stephen> anything- it's not being proposed as fully reserved and
Stephen> there is spec-defined syntax which needs to be supported.
Stephen> If we can get away with keeping it unreserved while not
Stephen> making it utterly confusing for users and convoluting the
Stephen> code, great, but that doesn't seem likely to pan out.

Having now spent some more time looking, I believe there is a solution
which makes it unreserved which does not require any significant pain
in the code. I'm not entirely convinced that this is the right
approach in the long term, but it might allow for a more planned
transition.

The absolute minimum seems to be:

GROUPING as a col_name_keyword (since GROUPING(x,y,...) in the select
list as a <grouping operation> looks like a function call for any
argument types)

CUBE, ROLLUP, SETS as unreserved_keyword

--
Andrew (irc:RhodiumToad)


#19Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Tom Lane (#15)
Re: WIP Patch for GROUPING SETS phase 1

"Tom" == Tom Lane <tgl@sss.pgh.pa.us> writes:

Tom> Perhaps so. I would really prefer not to have to get into
Tom> estimating how many people will be inconvenienced how badly.
Tom> It's clear to me that not a lot of sweat has been put into
Tom> seeing if we can avoid reserving the keyword, and I think we
Tom> need to put in that effort.

So, first nontrivial issue that crops up is this: if CUBE is
unreserved, then ruleutils will output the string "cube(a,b)" for a
function call to a function named "cube", on the assumption that it
will parse back as a single unit (which inside a GROUP BY is not
true).

Options:

1) when outputting GROUP BY clauses (and nothing else), put parens
around anything that isn't provably atomic; or put parens around
anything that might look like a function call; or put parens around
anything that looks like a function call with a keyword name

2) when outputting any function call, add parens if the name is an
unreserved keyword

3) when outputting any function call, quote the name if it is an
unreserved keyword

4) something else?

(This of course means that if someone has a cube() function call in
a group by clause of a view, then upgrading will change the meaning
of the view and possibly fail to create it; there seems to be no fix
for this, not even using the latest pg_dump, since pg_dump relies on
the old server's ruleutils)

--
Andrew (irc:RhodiumToad)


#20Stephen Frost
sfrost@snowman.net
In reply to: Andrew Gierth (#18)
Re: WIP Patch for GROUPING SETS phase 1

* Andrew Gierth (andrew@tao11.riddles.org.uk) wrote:

Having now spent some more time looking, I believe there is a solution
which makes it unreserved which does not require any significant pain
in the code. I'm not entirely convinced that this is the right
approach in the long term, but it might allow for a more planned
transition.

The absolute minimum seems to be:

GROUPING as a col_name_keyword (since GROUPING(x,y,...) in the select
list as a <grouping operation> looks like a function call for any
argument types)

CUBE, ROLLUP, SETS as unreserved_keyword

This means

GROUP BY cube(x,y)
GROUP BY (cube(x,y))
GROUP BY cube(x)

all end up with different meanings though, right?

I'm not sure that's really a better situation.

Thanks,

Stephen

#21Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Andrew Gierth (#19)
Re: WIP Patch for GROUPING SETS phase 1

Andrew Gierth wrote:

(This of course means that if someone has a cube() function call in
a group by clause of a view, then upgrading will change the meaning
of the view and possibly fail to create it; there seems to be no fix
for this, not even using the latest pg_dump, since pg_dump relies on
the old server's ruleutils)

This sucks. Can we tweak pg_dump to check for presence of the cube
extension, and if found refuse to dump unless a minor version older than
some hardcoded version (known to have fixed ruleutils) is used?

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


#22Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Alvaro Herrera (#21)
Re: WIP Patch for GROUPING SETS phase 1

"Alvaro" == Alvaro Herrera <alvherre@2ndquadrant.com> writes:

(This of course means that if someone has a cube() function call
in a group by clause of a view, then upgrading will change the
meaning of the view and possibly fail to create it; there seems to
be no fix for this, not even using the latest pg_dump, since
pg_dump relies on the old server's ruleutils)

Alvaro> This sucks. Can we tweak pg_dump to check for presence of
Alvaro> the cube extension, and if found refuse to dump unless a
Alvaro> minor version older than some hardcoded version (known to
Alvaro> have fixed ruleutils) is used?

I honestly don't think it's worth it. cube() is not a function that
really makes any sense in a GROUP BY, though of course someone could
have written their own function called cube() that does something
else; while this case is a problem, it is also likely to be
vanishingly rare.

--
Andrew (irc:RhodiumToad)


#23Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#12)
Re: WIP Patch for GROUPING SETS phase 1

On Thu, Aug 21, 2014 at 2:13 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Andrew Gierth <andrew@tao11.riddles.org.uk> writes:

"Tom" == Tom Lane <tgl@sss.pgh.pa.us> writes:
Tom> I wonder if you've tried hard enough to avoid reserving the keyword.

GROUP BY cube(a,b) is currently legal syntax and means something completely
incompatible to what the spec requires.

Well, if there are any extant applications that use that exact phrasing,
they're going to be broken in any case. That does not mean that we have
to break every other appearance of "cube". I think that special-casing
appearances of cube(...) in GROUP BY lists might be a feasible approach.

Not really. As pointed out downthread, you can't distinguish "cube"
from CUBE. We could fix that with a big enough hammer, of course, but
it would be a mighty big hammer.

More generally, I think it makes a lot of sense to "work harder" to
reserve keywords less when there's no fundamental semantic conflict,
but when there is, trying to do things like special case stuff
depending on context results in a situation where some keywords are a
little more reserved than others. As you pointed out in discussions
of CREATE INDEX CONCURRENTLY, that's confusing:

/messages/by-id/10769.1261775601@sss.pgh.pa.us
(refer second paragraph)

I think we should:

(1) Rename the cube extension. With a bat. I have yet to encounter a
single user who is using it, but there probably are some. They'll
have to get over it; GROUPING SETS is roughly a hundred times more
important than the cube extension. The most I'd do to cater to
existing users of the extension is provide an SQL script someplace
that renames the extension and all of its containing objects so that
you can do that before running pg_dump/pg_upgrade.

(2) Reserve CUBE to the extent necessary to implement this feature.
Some people won't like this, but that's always true when we reserve a
keyword, and I don't think refusing to implement an SQL-standard
feature for which there is considerable demand is the right way to fix
that. Here are the last five keywords we partially or fully reserved:
LATERAL (fully reserved), COLLATION (type_func_name), XMLEXISTS
(col_name), BETWEEN (was type_func_name, became col_name),
CONCURRENTLY (type_func_name). That takes us back to December 2009,
so the rate at which we do this is just under one a year, and there
haven't been many screams about it. I grant you that "cube" is a
slightly more plausible identifier than any of those, but I don't
think we should let the fact that we happen to have an extension with
that name prejudice us too much about how common it really is.

Mind you, I'm not trying to say that we don't need to be judicious in
reserving keywords, or even adding them at all: I've argued against
those things on numerous occasions, and have done work to let us get
rid of keywords we've previously had. I just think that this is a big
enough, important enough feature that we'll please more people than we
disappoint. And I think trying to walk some middle way where we
distinguish on context is going to be a mess.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#24Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#23)
Re: WIP Patch for GROUPING SETS phase 1

Robert Haas <robertmhaas@gmail.com> writes:

On Thu, Aug 21, 2014 at 2:13 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Well, if there are any extant applications that use that exact phrasing,
they're going to be broken in any case. That does not mean that we have
to break every other appearance of "cube". I think that special-casing
appearances of cube(...) in GROUP BY lists might be a feasible approach.

Not really. As pointed out downthread, you can't distinguish "cube"
from CUBE. We could fix that with a big enough hammer, of course, but
it would be a mighty big hammer.

I'm not convinced of that; I think some creative hackery in the grammar
might be able to deal with this. It would be a bit ugly, for sure, but
if it works it would be a localized fix. Meanwhile, I don't believe
that it's going to be possible to rename the cube extension in any way
that's even remotely acceptable for its users ("remotely acceptable"
here means "pg_upgrade works", never mind what's going to be needed
to fix their applications). So the proposal you are pushing is going
to result in seriously teeing off some fraction of our userbase;
and the argument why that would be acceptable seems to boil down to
"I think there are few enough of them that we don't have to care"
(an opinion based on little evidence IMO). I think it's worth investing
some work, and perhaps accepting some ugly code, to try to avoid that.

regards, tom lane


#25 Greg Stark
stark@mit.edu
In reply to: Tom Lane (#24)
Re: WIP Patch for GROUPING SETS phase 1

On Fri, Aug 22, 2014 at 7:02 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

So the proposal you are pushing is going
to result in seriously teeing off some fraction of our userbase;
and the argument why that would be acceptable seems to boil down to
"I think there are few enough of them that we don't have to care"
(an opinion based on little evidence IMO

FWIW here's some evidence... Craig Kerstiens did a talk on the
statistics across the Heroku fleet. Here are the slides from 2013,
though I think there's an updated slide deck with more recent numbers
out there:
https://speakerdeck.com/craigkerstiens/postgres-what-they-really-use

Cube shows up as the number 9 most popular extension, with about 1% of
databases having it installed (tied with pgcrypto and earthdistance).
That's a lot more than I would have expected, actually.

Personally I would love to change the name, because I always found the
name the most confusing thing about it. It took me forever to figure
out what on earth a "cube" was. It's really a vector data type, which
is a pretty useful idea.

--
greg


#26 Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#24)
Re: WIP Patch for GROUPING SETS phase 1

On Fri, Aug 22, 2014 at 2:02 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Robert Haas <robertmhaas@gmail.com> writes:

On Thu, Aug 21, 2014 at 2:13 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Well, if there are any extant applications that use that exact phrasing,
they're going to be broken in any case. That does not mean that we have
to break every other appearance of "cube". I think that special-casing
appearances of cube(...) in GROUP BY lists might be a feasible approach.

Not really. As pointed out downthread, you can't distinguish "cube"
from CUBE. We could fix that with a big enough hammer, of course, but
it would be a mighty big hammer.

I'm not convinced of that; I think some creative hackery in the grammar
might be able to deal with this. It would be a bit ugly, for sure, but
if it works it would be a localized fix.

Well, I have no idea how to do that. I think the only way you'd be
able to is if you make productions like ColId and ColLabel return
something different for a keyword than they do for an IDENT. And
that's not going to be a localized change. If you've got another
proposal, I'm all ears...

Meanwhile, I don't believe
that it's going to be possible to rename the cube extension in any way
that's even remotely acceptable for its users ("remotely acceptable"
here means "pg_upgrade works", never mind what's going to be needed
to fix their applications).

So the proposal you are pushing is going
to result in seriously teeing off some fraction of our userbase;
and the argument why that would be acceptable seems to boil down to
"I think there are few enough of them that we don't have to care"
(an opinion based on little evidence IMO).

The only hard statistics I am aware of are from Heroku. Peter
Geoghegan was kind enough to find me the link:

https://www.youtube.com/watch?v=MT2gzzbyWpw

At around 8 minutes, he shows utilization statistics for cube at
around 1% across their install base. That's higher than I would have
guessed, so, eh, shows what I know. In any case, I'm not so much
advocating not caring at all as confining the level of caring to what
can be done without moving the earth.

I think it's worth investing
some work, and perhaps accepting some ugly code, to try to avoid that.

I can accept ugly code, but I feel strongly that we shouldn't accept
ugly semantics. Forcing cube to get out of the way may not be pretty,
but I think it will be much worse if we violate the rule that quoting
a keyword strips it of its special meaning; or the rule that there are
four kinds of keywords and, if a keyword of a particular class is
accepted as an identifier in a given context, all other keywords in
that class will also be accepted as identifiers in that context.
Violating those rules could have not-fun-at-all consequences like
needing to pass additional context information to ruleutils.c's
quote_identifier() function, or not being able to dump and restore a
query from an older version even with --quote-all-identifiers.
Renaming the cube type will suck for people who are using it; but it
will only have to be done once; weird stuff like the above will be
with us forever.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#27 Andrew Dunstan
andrew@dunslane.net
In reply to: Greg Stark (#25)
Re: WIP Patch for GROUPING SETS phase 1

On 08/22/2014 02:42 PM, Greg Stark wrote:

On Fri, Aug 22, 2014 at 7:02 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

So the proposal you are pushing is going
to result in seriously teeing off some fraction of our userbase;
and the argument why that would be acceptable seems to boil down to
"I think there are few enough of them that we don't have to care"
(an opinion based on little evidence IMO

FWIW here's some evidence... Craig Kersteins did a talk on the
statistics across the Heroku fleet: Here are the slides from 2013
though I think there's an updated slide deck with more recent numbers
out there:
https://speakerdeck.com/craigkerstiens/postgres-what-they-really-use

Cube shows up as the number 9 most popular extension with about 1% of
databases having it installed (tied with pg_crypto and earthdistance).
That's a lot more than I would have expected actually.

That's an interesting statistic. What I'd be more interested in is
finding out how many of those are actually using it as opposed to having
loaded it into a database.

cheers

andrew


#28 Merlin Moncure
mmoncure@gmail.com
In reply to: Robert Haas (#26)
Re: WIP Patch for GROUPING SETS phase 1

On Fri, Aug 22, 2014 at 1:52 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Fri, Aug 22, 2014 at 2:02 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
https://www.youtube.com/watch?v=MT2gzzbyWpw

At around 8 minutes, he shows utilization statistics for cube at
around 1% across their install base. That's higher than I would have
guessed, so, eh, shows what I know. In any case, I'm not so much
advocating not caring at all as confining the level of caring to what
can be done without moving the earth.

cube is a dependency for earthdistance, and it's gotten some light
advocacy over the years as the 'way to do it'. I tried it myself way
back in the day and concluded that the 'box' type + GiST was better
than earthdistance for bounding box operations -- it just seemed
easier to understand and use. If you search the archives you'll
probably find a couple of examples of me suggesting as much.

merlin


#29 Stephen Frost
sfrost@snowman.net
In reply to: Andrew Dunstan (#27)
Re: WIP Patch for GROUPING SETS phase 1

* Andrew Dunstan (andrew@dunslane.net) wrote:

On 08/22/2014 02:42 PM, Greg Stark wrote:

On Fri, Aug 22, 2014 at 7:02 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

So the proposal you are pushing is going
to result in seriously teeing off some fraction of our userbase;
and the argument why that would be acceptable seems to boil down to
"I think there are few enough of them that we don't have to care"
(an opinion based on little evidence IMO

FWIW here's some evidence... Craig Kersteins did a talk on the
statistics across the Heroku fleet: Here are the slides from 2013
though I think there's an updated slide deck with more recent numbers
out there:
https://speakerdeck.com/craigkerstiens/postgres-what-they-really-use

Cube shows up as the number 9 most popular extension with about 1% of
databases having it installed (tied with pg_crypto and earthdistance).
That's a lot more than I would have expected actually.

That's an interesting statistic. What I'd be more interested in is
finding out how many of those are actually using it as opposed to
having loaded it into a database.

Agreed- and how many of those have *every extension available* loaded...

Thanks,

Stephen

#30 Greg Stark
stark@mit.edu
In reply to: Stephen Frost (#29)
Re: WIP Patch for GROUPING SETS phase 1

On Fri, Aug 22, 2014 at 10:37 PM, Stephen Frost <sfrost@snowman.net> wrote:

Agreed- and how many of those have *every extension available* loaded...

Actually that was also in the talk, a few slides later: 0.7%.

--
greg


#31 Stephen Frost
sfrost@snowman.net
In reply to: Greg Stark (#30)
Re: WIP Patch for GROUPING SETS phase 1

* Greg Stark (stark@mit.edu) wrote:

On Fri, Aug 22, 2014 at 10:37 PM, Stephen Frost <sfrost@snowman.net> wrote:

Agreed- and how many of those have *every extension available* loaded...

Actually that was also in the talk.a few slides later. 0.7%

So, 0.3% install cube without installing *every* extension...? That
seems like the more relevant number then, to me anyway.

Admittedly, it's non-zero, but it's also a rather small percentage.

Thanks!

Stephen

#32 Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Tom Lane (#24)
Re: WIP Patch for GROUPING SETS phase 1

"Tom" == Tom Lane <tgl@sss.pgh.pa.us> writes:

Tom> I'm not convinced of that; I think some creative hackery in the
Tom> grammar might be able to deal with this.

Making GROUP BY CUBE(a,b) parse as grouping sets rather than as a
function turned out to be the easy part: give CUBE a lower precedence
than '(' (equal to the one for IDENT and various other unreserved
keywords), and a rule that has an explicit CUBE '(' gets preferred
over one that reduces the CUBE to an unreserved_keyword.

The (relatively minor) ugliness required is mostly in the ruleutils
logic to decide how to output a cube(...) function call in such a way
that it doesn't get misparsed as a grouping set. See my other mail on
that.

--
Andrew (irc:RhodiumToad)


#33 Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Atri Sharma (#1)
5 attachment(s)
Re: Final Patch for GROUPING SETS

Here is the new version of our grouping sets patch. This version
supersedes the previous post.

We believe the functionality of this version to be substantially
complete, providing all the standard grouping set features except T434
(GROUP BY DISTINCT). (Additional tweaks, such as extra variants on
GROUPING(), could be added for compatibility with other databases.)

Since the debate regarding reserved keywords has not produced any
useful answer, the main patch here makes CUBE and ROLLUP into
col_name_keyword (partially reserved) keywords, but a separate small
patch is attached to make them unreserved_keywords instead.

So there are now 5 files:

gsp1.patch - phase 1 code patch (full syntax, limited functionality)
gsp2.patch - phase 2 code patch (adds full functionality using the
new chained aggregate mechanism)
gsp-doc.patch - docs
gsp-contrib.patch - quote "cube" in contrib/cube and contrib/earthdistance,
intended primarily for testing pending a decision on
renaming contrib/cube or unreserving keywords
gsp-u.patch - proposed method to unreserve CUBE and ROLLUP

--
Andrew (irc:RhodiumToad)

Attachments:

gsp1.patch (text/x-patch)
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 781a736..479ae7e 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -78,6 +78,9 @@ static void show_merge_append_keys(MergeAppendState *mstate, List *ancestors,
 					   ExplainState *es);
 static void show_agg_keys(AggState *astate, List *ancestors,
 			  ExplainState *es);
+static void show_grouping_set_keys(PlanState *planstate, const char *qlabel,
+				int nkeys, AttrNumber *keycols, List *gsets,
+				List *ancestors, ExplainState *es);
 static void show_group_keys(GroupState *gstate, List *ancestors,
 				ExplainState *es);
 static void show_sort_group_keys(PlanState *planstate, const char *qlabel,
@@ -1778,17 +1781,80 @@ show_agg_keys(AggState *astate, List *ancestors,
 {
 	Agg		   *plan = (Agg *) astate->ss.ps.plan;
 
-	if (plan->numCols > 0)
+	if (plan->numCols > 0 || plan->groupingSets)
 	{
 		/* The key columns refer to the tlist of the child plan */
 		ancestors = lcons(astate, ancestors);
-		show_sort_group_keys(outerPlanState(astate), "Group Key",
-							 plan->numCols, plan->grpColIdx,
-							 ancestors, es);
+		if (plan->groupingSets)
+			show_grouping_set_keys(outerPlanState(astate), "Grouping Sets",
+								   plan->numCols, plan->grpColIdx,
+								   plan->groupingSets,
+								   ancestors, es);
+		else
+			show_sort_group_keys(outerPlanState(astate), "Group Key",
+								 plan->numCols, plan->grpColIdx,
+								 ancestors, es);
 		ancestors = list_delete_first(ancestors);
 	}
 }
 
+static void
+show_grouping_set_keys(PlanState *planstate, const char *qlabel,
+					   int nkeys, AttrNumber *keycols, List *gsets,
+					   List *ancestors, ExplainState *es)
+{
+	Plan	   *plan = planstate->plan;
+	List	   *context;
+	List	   *result = NIL;
+	bool		useprefix;
+	char	   *exprstr;
+	StringInfoData buf;
+	ListCell   *lc;
+	ListCell   *lc2;
+
+	if (gsets == NIL)
+		return;
+
+	/* Set up deparsing context */
+	context = deparse_context_for_planstate((Node *) planstate,
+											ancestors,
+											es->rtable,
+											es->rtable_names);
+	useprefix = (list_length(es->rtable) > 1 || es->verbose);
+
+	foreach(lc, gsets)
+	{
+		char *sep = "";
+
+		initStringInfo(&buf);
+		appendStringInfoString(&buf, "(");
+
+		foreach(lc2, (List *) lfirst(lc))
+		{
+			Index		i = lfirst_int(lc2);
+			AttrNumber	keyresno = keycols[i];
+			TargetEntry *target = get_tle_by_resno(plan->targetlist,
+												   keyresno);
+
+			if (!target)
+				elog(ERROR, "no tlist entry for key %d", keyresno);
+			/* Deparse the expression, showing any top-level cast */
+			exprstr = deparse_expression((Node *) target->expr, context,
+										 useprefix, true);
+
+			appendStringInfoString(&buf, sep);
+			appendStringInfoString(&buf, exprstr);
+			sep = ", ";
+		}
+
+		appendStringInfoString(&buf, ")");
+
+		result = lappend(result, buf.data);
+	}
+
+	ExplainPropertyList(qlabel, result, es);
+}
+
 /*
  * Show the grouping keys for a Group node.
  */
diff --git a/src/backend/executor/execQual.c b/src/backend/executor/execQual.c
index 7cfa63f..5fb61b0 100644
--- a/src/backend/executor/execQual.c
+++ b/src/backend/executor/execQual.c
@@ -74,6 +74,8 @@ static Datum ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
 				  bool *isNull, ExprDoneCond *isDone);
 static Datum ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
 					  bool *isNull, ExprDoneCond *isDone);
+static Datum ExecEvalScalarGroupedVarFast(ExprState *exprstate, ExprContext *econtext,
+					  bool *isNull, ExprDoneCond *isDone);
 static Datum ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate,
 					ExprContext *econtext,
 					bool *isNull, ExprDoneCond *isDone);
@@ -181,6 +183,8 @@ static Datum ExecEvalArrayCoerceExpr(ArrayCoerceExprState *astate,
 						bool *isNull, ExprDoneCond *isDone);
 static Datum ExecEvalCurrentOfExpr(ExprState *exprstate, ExprContext *econtext,
 					  bool *isNull, ExprDoneCond *isDone);
+static Datum ExecEvalGroupingExpr(GroupingState *gstate, ExprContext *econtext,
+								  bool *isNull, ExprDoneCond *isDone);
 
 
 /* ----------------------------------------------------------------
@@ -568,6 +572,8 @@ ExecEvalWindowFunc(WindowFuncExprState *wfunc, ExprContext *econtext,
  * Note: ExecEvalScalarVar is executed only the first time through in a given
  * plan; it changes the ExprState's function pointer to pass control directly
  * to ExecEvalScalarVarFast after making one-time checks.
+ *
+ * We share this code with GroupedVar for simplicity.
  * ----------------------------------------------------------------
  */
 static Datum
@@ -645,8 +651,24 @@ ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
 		}
 	}
 
-	/* Skip the checking on future executions of node */
-	exprstate->evalfunc = ExecEvalScalarVarFast;
+	if (IsA(variable, GroupedVar))
+	{
+		Assert(variable->varno == OUTER_VAR);
+
+		/* Skip the checking on future executions of node */
+		exprstate->evalfunc = ExecEvalScalarGroupedVarFast;
+
+		if (!bms_is_member(attnum, econtext->grouped_cols))
+		{
+			*isNull = true;
+			return (Datum) 0;
+		}
+	}
+	else
+	{
+		/* Skip the checking on future executions of node */
+		exprstate->evalfunc = ExecEvalScalarVarFast;
+	}
 
 	/* Fetch the value from the slot */
 	return slot_getattr(slot, attnum, isNull);
@@ -694,6 +716,31 @@ ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
 	return slot_getattr(slot, attnum, isNull);
 }
 
+static Datum
+ExecEvalScalarGroupedVarFast(ExprState *exprstate, ExprContext *econtext,
+							 bool *isNull, ExprDoneCond *isDone)
+{
+	GroupedVar *variable = (GroupedVar *) exprstate->expr;
+	TupleTableSlot *slot;
+	AttrNumber	attnum;
+
+	if (isDone)
+		*isDone = ExprSingleResult;
+
+	slot = econtext->ecxt_outertuple;
+
+	attnum = variable->varattno;
+
+	if (!bms_is_member(attnum, econtext->grouped_cols))
+	{
+		*isNull = true;
+		return (Datum) 0;
+	}
+
+	/* Fetch the value from the slot */
+	return slot_getattr(slot, attnum, isNull);
+}
+
 /* ----------------------------------------------------------------
  *		ExecEvalWholeRowVar
  *
@@ -2987,6 +3034,40 @@ ExecEvalCaseTestExpr(ExprState *exprstate,
 	return econtext->caseValue_datum;
 }
 
+/*
+ * ExecEvalGroupingExpr
+ * Return a bitmask with a bit for each column.
+ * A bit is set if the column is not a part of grouping.
+ */
+
+static Datum
+ExecEvalGroupingExpr(GroupingState *gstate,
+					 ExprContext *econtext,
+					 bool *isNull,
+					 ExprDoneCond *isDone)
+{
+	int result = 0;
+	int current_val= 0;
+	ListCell *lc;
+
+	if (isDone)
+		*isDone = ExprSingleResult;
+
+	*isNull = false;
+
+	foreach(lc, (gstate->clauses))
+	{
+		current_val = lfirst_int(lc);
+
+		result = result << 1;
+
+		if (!bms_is_member(current_val, econtext->grouped_cols))
+			result = result | 1;
+	}
+
+	return (Datum) result;
+}
+
 /* ----------------------------------------------------------------
  *		ExecEvalArray - ARRAY[] expressions
  * ----------------------------------------------------------------
@@ -4385,6 +4466,32 @@ ExecInitExpr(Expr *node, PlanState *parent)
 				state->evalfunc = ExecEvalScalarVar;
 			}
 			break;
+		case T_GroupedVar:
+			Assert(((Var *) node)->varattno != InvalidAttrNumber);
+			state = (ExprState *) makeNode(ExprState);
+			state->evalfunc = ExecEvalScalarVar;
+			break;
+		case T_Grouping:
+			{
+				Grouping	   *grp_node = (Grouping *) node;
+				GroupingState  *grp_state = makeNode(GroupingState);
+				Agg			   *agg = NULL;
+
+				if (!parent
+					|| !IsA(parent->plan, Agg))
+					elog(ERROR, "Parent of GROUPING is not Agg node");
+
+				agg = (Agg *) (parent->plan);
+
+				if (agg->groupingSets)
+					grp_state->clauses = grp_node->cols;
+				else
+					grp_state->clauses = NIL;
+
+				state = (ExprState *) grp_state;
+				state->evalfunc = (ExprStateEvalFunc) ExecEvalGroupingExpr;
+			}
+			break;
 		case T_Const:
 			state = (ExprState *) makeNode(ExprState);
 			state->evalfunc = ExecEvalConst;
diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c
index d5e1273..ad8a3d0 100644
--- a/src/backend/executor/execUtils.c
+++ b/src/backend/executor/execUtils.c
@@ -653,7 +653,7 @@ get_last_attnums(Node *node, ProjectionInfo *projInfo)
 	 * because those do not represent expressions to be evaluated within the
 	 * overall targetlist's econtext.
 	 */
-	if (IsA(node, Aggref))
+	if (IsA(node, Aggref) || IsA(node, Grouping))
 		return false;
 	if (IsA(node, WindowFunc))
 		return false;
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index 510d1c5..beecd36 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -243,7 +243,7 @@ typedef struct AggStatePerAggData
 	 * rest.
 	 */
 
-	Tuplesortstate *sortstate;	/* sort object, if DISTINCT or ORDER BY */
+	Tuplesortstate **sortstate;	/* sort object, if DISTINCT or ORDER BY */
 
 	/*
 	 * This field is a pre-initialized FunctionCallInfo struct used for
@@ -304,7 +304,8 @@ typedef struct AggHashEntryData
 
 static void initialize_aggregates(AggState *aggstate,
 					  AggStatePerAgg peragg,
-					  AggStatePerGroup pergroup);
+					  AggStatePerGroup pergroup,
+					  int numReinitialize);
 static void advance_transition_function(AggState *aggstate,
 							AggStatePerAgg peraggstate,
 							AggStatePerGroup pergroupstate);
@@ -338,81 +339,101 @@ static Datum GetAggInitVal(Datum textInitVal, Oid transtype);
 static void
 initialize_aggregates(AggState *aggstate,
 					  AggStatePerAgg peragg,
-					  AggStatePerGroup pergroup)
+					  AggStatePerGroup pergroup,
+					  int numReinitialize)
 {
 	int			aggno;
+	int         numGroupingSets = Max(aggstate->numsets, 1);
+	int         i = 0;
+
+	if (numReinitialize < 1)
+		numReinitialize = numGroupingSets;
 
 	for (aggno = 0; aggno < aggstate->numaggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &peragg[aggno];
-		AggStatePerGroup pergroupstate = &pergroup[aggno];
 
 		/*
 		 * Start a fresh sort operation for each DISTINCT/ORDER BY aggregate.
 		 */
 		if (peraggstate->numSortCols > 0)
 		{
-			/*
-			 * In case of rescan, maybe there could be an uncompleted sort
-			 * operation?  Clean it up if so.
-			 */
-			if (peraggstate->sortstate)
-				tuplesort_end(peraggstate->sortstate);
+			for (i = 0; i < numReinitialize; i++)
+			{
+				/*
+				 * In case of rescan, maybe there could be an uncompleted sort
+				 * operation?  Clean it up if so.
+				 */
+				if (peraggstate->sortstate[i])
+					tuplesort_end(peraggstate->sortstate[i]);
 
-			/*
-			 * We use a plain Datum sorter when there's a single input column;
-			 * otherwise sort the full tuple.  (See comments for
-			 * process_ordered_aggregate_single.)
-			 */
-			peraggstate->sortstate =
-				(peraggstate->numInputs == 1) ?
-				tuplesort_begin_datum(peraggstate->evaldesc->attrs[0]->atttypid,
-									  peraggstate->sortOperators[0],
-									  peraggstate->sortCollations[0],
-									  peraggstate->sortNullsFirst[0],
-									  work_mem, false) :
-				tuplesort_begin_heap(peraggstate->evaldesc,
-									 peraggstate->numSortCols,
-									 peraggstate->sortColIdx,
-									 peraggstate->sortOperators,
-									 peraggstate->sortCollations,
-									 peraggstate->sortNullsFirst,
-									 work_mem, false);
+				/*
+				 * We use a plain Datum sorter when there's a single input column;
+				 * otherwise sort the full tuple.  (See comments for
+				 * process_ordered_aggregate_single.)
+				 */
+				peraggstate->sortstate[i] =
+					(peraggstate->numInputs == 1) ?
+					tuplesort_begin_datum(peraggstate->evaldesc->attrs[0]->atttypid,
+										  peraggstate->sortOperators[0],
+										  peraggstate->sortCollations[0],
+										  peraggstate->sortNullsFirst[0],
+										  work_mem, false) :
+					tuplesort_begin_heap(peraggstate->evaldesc,
+										 peraggstate->numSortCols,
+										 peraggstate->sortColIdx,
+										 peraggstate->sortOperators,
+										 peraggstate->sortCollations,
+										 peraggstate->sortNullsFirst,
+										 work_mem, false);
+			}
 		}
 
-		/*
-		 * (Re)set transValue to the initial value.
-		 *
-		 * Note that when the initial value is pass-by-ref, we must copy it
-		 * (into the aggcontext) since we will pfree the transValue later.
+		/* If ROLLUP is present, we need to iterate over all the groups
+		 * that are present with the current aggstate. If ROLLUP is not
+		 * present, we only have one groupstate associated with the
+		 * current aggstate.
 		 */
-		if (peraggstate->initValueIsNull)
-			pergroupstate->transValue = peraggstate->initValue;
-		else
+
+		for (i = 0; i < numReinitialize; i++)
 		{
-			MemoryContext oldContext;
+			AggStatePerGroup pergroupstate = &pergroup[aggno + (i * (aggstate->numaggs))];
 
-			oldContext = MemoryContextSwitchTo(aggstate->aggcontext);
-			pergroupstate->transValue = datumCopy(peraggstate->initValue,
-												  peraggstate->transtypeByVal,
-												  peraggstate->transtypeLen);
-			MemoryContextSwitchTo(oldContext);
-		}
-		pergroupstate->transValueIsNull = peraggstate->initValueIsNull;
+			/*
+			 * (Re)set transValue to the initial value.
+			 *
+			 * Note that when the initial value is pass-by-ref, we must copy it
+			 * (into the aggcontext) since we will pfree the transValue later.
+			 */
+			if (peraggstate->initValueIsNull)
+				pergroupstate->transValue = peraggstate->initValue;
+			else
+			{
+				MemoryContext oldContext;
 
-		/*
-		 * If the initial value for the transition state doesn't exist in the
-		 * pg_aggregate table then we will let the first non-NULL value
-		 * returned from the outer procNode become the initial value. (This is
-		 * useful for aggregates like max() and min().) The noTransValue flag
-		 * signals that we still need to do this.
-		 */
-		pergroupstate->noTransValue = peraggstate->initValueIsNull;
+				oldContext = MemoryContextSwitchTo(aggstate->aggcontext[i]->ecxt_per_tuple_memory);
+				pergroupstate->transValue = datumCopy(peraggstate->initValue,
+													  peraggstate->transtypeByVal,
+													  peraggstate->transtypeLen);
+				MemoryContextSwitchTo(oldContext);
+			}
+			pergroupstate->transValueIsNull = peraggstate->initValueIsNull;
+
+			/*
+			 * If the initial value for the transition state doesn't exist in the
+			 * pg_aggregate table then we will let the first non-NULL value
+			 * returned from the outer procNode become the initial value. (This is
+			 * useful for aggregates like max() and min().) The noTransValue flag
+			 * signals that we still need to do this.
+			 */
+			pergroupstate->noTransValue = peraggstate->initValueIsNull;
+		}
 	}
 }
 
 /*
- * Given new input value(s), advance the transition function of an aggregate.
+ * Given new input value(s), advance the transition function of one aggregate
+ * within one grouping set only (already set in aggstate->current_set)
  *
  * The new values (and null flags) have been preloaded into argument positions
  * 1 and up in peraggstate->transfn_fcinfo, so that we needn't copy them again
@@ -455,7 +476,7 @@ advance_transition_function(AggState *aggstate,
 			 * We must copy the datum into aggcontext if it is pass-by-ref. We
 			 * do not need to pfree the old transValue, since it's NULL.
 			 */
-			oldContext = MemoryContextSwitchTo(aggstate->aggcontext);
+			oldContext = MemoryContextSwitchTo(aggstate->aggcontext[aggstate->current_set]->ecxt_per_tuple_memory);
 			pergroupstate->transValue = datumCopy(fcinfo->arg[1],
 												  peraggstate->transtypeByVal,
 												  peraggstate->transtypeLen);
@@ -503,7 +524,7 @@ advance_transition_function(AggState *aggstate,
 	{
 		if (!fcinfo->isnull)
 		{
-			MemoryContextSwitchTo(aggstate->aggcontext);
+			MemoryContextSwitchTo(aggstate->aggcontext[aggstate->current_set]->ecxt_per_tuple_memory);
 			newVal = datumCopy(newVal,
 							   peraggstate->transtypeByVal,
 							   peraggstate->transtypeLen);
@@ -530,11 +551,13 @@ static void
 advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 {
 	int			aggno;
+	int         groupno = 0;
+	int         numGroupingSets = Max(aggstate->numsets, 1);
+	int         numAggs = aggstate->numaggs;
 
-	for (aggno = 0; aggno < aggstate->numaggs; aggno++)
+	for (aggno = 0; aggno < numAggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &aggstate->peragg[aggno];
-		AggStatePerGroup pergroupstate = &pergroup[aggno];
 		ExprState  *filter = peraggstate->aggrefstate->aggfilter;
 		int			numTransInputs = peraggstate->numTransInputs;
 		int			i;
@@ -578,13 +601,16 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 					continue;
 			}
 
-			/* OK, put the tuple into the tuplesort object */
-			if (peraggstate->numInputs == 1)
-				tuplesort_putdatum(peraggstate->sortstate,
-								   slot->tts_values[0],
-								   slot->tts_isnull[0]);
-			else
-				tuplesort_puttupleslot(peraggstate->sortstate, slot);
+			for (groupno = 0; groupno < numGroupingSets; groupno++)
+			{
+				/* OK, put the tuple into the tuplesort object */
+				if (peraggstate->numInputs == 1)
+					tuplesort_putdatum(peraggstate->sortstate[groupno],
+									   slot->tts_values[0],
+									   slot->tts_isnull[0]);
+				else
+					tuplesort_puttupleslot(peraggstate->sortstate[groupno], slot);
+			}
 		}
 		else
 		{
@@ -600,7 +626,14 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 				fcinfo->argnull[i + 1] = slot->tts_isnull[i];
 			}
 
-			advance_transition_function(aggstate, peraggstate, pergroupstate);
+			for (groupno = 0; groupno < numGroupingSets; groupno++)
+			{
+				AggStatePerGroup pergroupstate = &pergroup[aggno + (groupno * numAggs)];
+
+				aggstate->current_set = groupno;
+
+				advance_transition_function(aggstate, peraggstate, pergroupstate);
+			}
 		}
 	}
 }
@@ -623,6 +656,9 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
  * is around 300% faster.  (The speedup for by-reference types is less
  * but still noticeable.)
  *
+ * This function handles only one grouping set (already set in
+ * aggstate->current_set).
+ *
  * When called, CurrentMemoryContext should be the per-query context.
  */
 static void
@@ -642,7 +678,7 @@ process_ordered_aggregate_single(AggState *aggstate,
 
 	Assert(peraggstate->numDistinctCols < 2);
 
-	tuplesort_performsort(peraggstate->sortstate);
+	tuplesort_performsort(peraggstate->sortstate[aggstate->current_set]);
 
 	/* Load the column into argument 1 (arg 0 will be transition value) */
 	newVal = fcinfo->arg + 1;
@@ -654,7 +690,7 @@ process_ordered_aggregate_single(AggState *aggstate,
 	 * pfree them when they are no longer needed.
 	 */
 
-	while (tuplesort_getdatum(peraggstate->sortstate, true,
+	while (tuplesort_getdatum(peraggstate->sortstate[aggstate->current_set], true,
 							  newVal, isNull))
 	{
 		/*
@@ -698,8 +734,8 @@ process_ordered_aggregate_single(AggState *aggstate,
 	if (!oldIsNull && !peraggstate->inputtypeByVal)
 		pfree(DatumGetPointer(oldVal));
 
-	tuplesort_end(peraggstate->sortstate);
-	peraggstate->sortstate = NULL;
+	tuplesort_end(peraggstate->sortstate[aggstate->current_set]);
+	peraggstate->sortstate[aggstate->current_set] = NULL;
 }
 
 /*
@@ -709,6 +745,9 @@ process_ordered_aggregate_single(AggState *aggstate,
  * sort, read out the values in sorted order, and run the transition
  * function on each value (applying DISTINCT if appropriate).
  *
+ * This function handles only one grouping set (already set in
+ * aggstate->current_set).
+ *
  * When called, CurrentMemoryContext should be the per-query context.
  */
 static void
@@ -725,13 +764,13 @@ process_ordered_aggregate_multi(AggState *aggstate,
 	bool		haveOldValue = false;
 	int			i;
 
-	tuplesort_performsort(peraggstate->sortstate);
+	tuplesort_performsort(peraggstate->sortstate[aggstate->current_set]);
 
 	ExecClearTuple(slot1);
 	if (slot2)
 		ExecClearTuple(slot2);
 
-	while (tuplesort_gettupleslot(peraggstate->sortstate, true, slot1))
+	while (tuplesort_gettupleslot(peraggstate->sortstate[aggstate->current_set], true, slot1))
 	{
 		/*
 		 * Extract the first numTransInputs columns as datums to pass to the
@@ -779,8 +818,8 @@ process_ordered_aggregate_multi(AggState *aggstate,
 	if (slot2)
 		ExecClearTuple(slot2);
 
-	tuplesort_end(peraggstate->sortstate);
-	peraggstate->sortstate = NULL;
+	tuplesort_end(peraggstate->sortstate[aggstate->current_set]);
+	peraggstate->sortstate[aggstate->current_set] = NULL;
 }
 
 /*
@@ -832,7 +871,7 @@ finalize_aggregate(AggState *aggstate,
 		/* set up aggstate->curperagg for AggGetAggref() */
 		aggstate->curperagg = peraggstate;
 
-		InitFunctionCallInfoData(fcinfo, &(peraggstate->finalfn),
+		InitFunctionCallInfoData(fcinfo, &peraggstate->finalfn,
 								 numFinalArgs,
 								 peraggstate->aggCollation,
 								 (void *) aggstate, NULL);
@@ -916,7 +955,8 @@ find_unaggregated_cols_walker(Node *node, Bitmapset **colnos)
 		*colnos = bms_add_member(*colnos, var->varattno);
 		return false;
 	}
-	if (IsA(node, Aggref))		/* do not descend into aggregate exprs */
+	/* do not descend into aggregate or grouping exprs */
+	if (IsA(node, Aggref) || IsA(node, Grouping))
 		return false;
 	return expression_tree_walker(node, find_unaggregated_cols_walker,
 								  (void *) colnos);
@@ -946,7 +986,7 @@ build_hash_table(AggState *aggstate)
 											  aggstate->hashfunctions,
 											  node->numGroups,
 											  entrysize,
-											  aggstate->aggcontext,
+											  aggstate->aggcontext[0]->ecxt_per_tuple_memory,
 											  tmpmem);
 }
 
@@ -1057,7 +1097,7 @@ lookup_hash_entry(AggState *aggstate, TupleTableSlot *inputslot)
 	if (isnew)
 	{
 		/* initialize aggregates for new tuple group */
-		initialize_aggregates(aggstate, aggstate->peragg, entry->pergroup);
+		initialize_aggregates(aggstate, aggstate->peragg, entry->pergroup, 0);
 	}
 
 	return entry;
@@ -1131,7 +1171,13 @@ agg_retrieve_direct(AggState *aggstate)
 	AggStatePerGroup pergroup;
 	TupleTableSlot *outerslot;
 	TupleTableSlot *firstSlot;
-	int			aggno;
+	int			aggno;
+	bool		hasRollup = aggstate->numsets > 0;
+	int			numGroupingSets = Max(aggstate->numsets, 1);
+	int			currentGroup = 0;
+	int			currentSize = 0;
+	int			numReset = 1;
+	int			i;
 
 	/*
 	 * get state info from node
@@ -1150,131 +1196,233 @@ agg_retrieve_direct(AggState *aggstate)
 	/*
 	 * We loop retrieving groups until we find one matching
 	 * aggstate->ss.ps.qual
+	 *
+	 * For grouping sets, we have the invariant that aggstate->projected_set is
+	 * either -1 (initial call) or the index (starting from 0) in gset_lengths
+	 * for the group we just completed (either by projecting a row or by
+	 * discarding it in the qual).
 	 */
 	while (!aggstate->agg_done)
 	{
 		/*
-		 * If we don't already have the first tuple of the new group, fetch it
-		 * from the outer plan.
-		 */
-		if (aggstate->grp_firstTuple == NULL)
-		{
-			outerslot = ExecProcNode(outerPlan);
-			if (!TupIsNull(outerslot))
-			{
-				/*
-				 * Make a copy of the first input tuple; we will use this for
-				 * comparisons (in group mode) and for projection.
-				 */
-				aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
-			}
-			else
-			{
-				/* outer plan produced no tuples at all */
-				aggstate->agg_done = true;
-				/* If we are grouping, we should produce no tuples too */
-				if (node->aggstrategy != AGG_PLAIN)
-					return NULL;
-			}
-		}
-
-		/*
 		 * Clear the per-output-tuple context for each group, as well as
 		 * aggcontext (which contains any pass-by-ref transvalues of the old
 		 * group).  We also clear any child contexts of the aggcontext; some
 		 * aggregate functions store working state in such contexts.
 		 *
 		 * We use ReScanExprContext not just ResetExprContext because we want
 		 * any registered shutdown callbacks to be called.  That allows
 		 * aggregate functions to ensure they've cleaned up any non-memory
 		 * resources.
 		 */
 		ReScanExprContext(econtext);
 
-		MemoryContextResetAndDeleteChildren(aggstate->aggcontext);
+		if (aggstate->projected_set >= 0 && aggstate->projected_set < numGroupingSets)
+			numReset = aggstate->projected_set + 1;
+		else
+			numReset = numGroupingSets;
+
+		for (i = 0; i < numReset; i++)
+		{
+			ReScanExprContext(aggstate->aggcontext[i]);
+			MemoryContextDeleteChildren(aggstate->aggcontext[i]->ecxt_per_tuple_memory);
+		}
 
-		/*
-		 * Initialize working state for a new input tuple group
+		/* Check if input is complete and there are no more groups to project. */
+		if (aggstate->input_done
+			&& aggstate->projected_set >= (numGroupingSets - 1))
+		{
+			aggstate->agg_done = true;
+			break;
+		}
+
+		if (aggstate->projected_set >= 0 && aggstate->projected_set < (numGroupingSets - 1))
+			currentSize = aggstate->gset_lengths[aggstate->projected_set + 1];
+		else
+			currentSize = 0;
+
+		/*-
+		 * If a subgroup for the current grouping set is present, project it.
+		 *
+		 * We have a new group if:
+		 *  - we're out of input but haven't projected all grouping sets
+		 *    (checked above)
+		 * OR
+		 *    - we already projected a row that wasn't from the last grouping
+		 *      set
+		 *    AND
+		 *    - the next grouping set has at least one grouping column (since
+		 *      empty grouping sets project only once input is exhausted)
+		 *    AND
+		 *    - the previous and pending rows differ on the grouping columns
+		 *      of the next grouping set
 		 */
-		initialize_aggregates(aggstate, peragg, pergroup);
+		if (aggstate->input_done
+			|| (node->aggstrategy == AGG_SORTED
+				&& aggstate->projected_set != -1
+				&& aggstate->projected_set < (numGroupingSets - 1)
+				&& currentSize > 0
+				&& !execTuplesMatch(econtext->ecxt_outertuple,
+									tmpcontext->ecxt_outertuple,
+									currentSize,
+									node->grpColIdx,
+									aggstate->eqfunctions,
+									tmpcontext->ecxt_per_tuple_memory)))
+		{
+			++aggstate->projected_set;
 
-		if (aggstate->grp_firstTuple != NULL)
+			Assert(aggstate->projected_set < numGroupingSets);
+			Assert(currentSize > 0 || aggstate->input_done);
+		}
+		else
 		{
 			/*
-			 * Store the copied first input tuple in the tuple table slot
-			 * reserved for it.  The tuple will be deleted when it is cleared
-			 * from the slot.
+			 * We no longer care which group we just projected; the next
+			 * projection will always be from the first (or only) grouping
+			 * set (unless the input proves to be empty).
 			 */
-			ExecStoreTuple(aggstate->grp_firstTuple,
-						   firstSlot,
-						   InvalidBuffer,
-						   true);
-			aggstate->grp_firstTuple = NULL;	/* don't keep two pointers */
-
-			/* set up for first advance_aggregates call */
-			tmpcontext->ecxt_outertuple = firstSlot;
+			aggstate->projected_set = 0;
 
 			/*
-			 * Process each outer-plan tuple, and then fetch the next one,
-			 * until we exhaust the outer plan or cross a group boundary.
+			 * If we don't already have the first tuple of the new group, fetch it
+			 * from the outer plan.
 			 */
-			for (;;)
+			if (aggstate->grp_firstTuple == NULL)
 			{
-				advance_aggregates(aggstate, pergroup);
-
-				/* Reset per-input-tuple context after each tuple */
-				ResetExprContext(tmpcontext);
-
 				outerslot = ExecProcNode(outerPlan);
-				if (TupIsNull(outerslot))
+				if (!TupIsNull(outerslot))
 				{
-					/* no more outer-plan tuples available */
-					aggstate->agg_done = true;
-					break;
+					/*
+					 * Make a copy of the first input tuple; we will use this for
+					 * comparisons (in group mode) and for projection.
+					 */
+					aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
 				}
-				/* set up for next advance_aggregates call */
-				tmpcontext->ecxt_outertuple = outerslot;
+				else
+				{
+					/* outer plan produced no tuples at all */
+					if (hasRollup)
+					{
+						/*
+						 * If there was no input at all, we need to project
+						 * rows only if there are grouping sets of size 0.
+						 * Note that this implies that there can't be any
+						 * references to ungrouped Vars, which would otherwise
+						 * cause issues with the empty output slot.
+						 */
+						aggstate->input_done = true;
+
+						while (aggstate->gset_lengths[aggstate->projected_set] > 0)
+						{
+							aggstate->projected_set += 1;
+							if (aggstate->projected_set >= numGroupingSets)
+							{
+								aggstate->agg_done = true;
+								return NULL;
+							}
+						}
+					}
+					else
+					{
+						aggstate->agg_done = true;
+						/* If we are grouping, we should produce no tuples too */
+						if (node->aggstrategy != AGG_PLAIN)
+							return NULL;
+					}
+				}
+			}
+
+			/*
+			 * Initialize working state for a new input tuple group
+			 */
+			initialize_aggregates(aggstate, peragg, pergroup, numReset);
+
+			if (aggstate->grp_firstTuple != NULL)
+			{
+				/*
+				 * Store the copied first input tuple in the tuple table slot
+				 * reserved for it.  The tuple will be deleted when it is cleared
+				 * from the slot.
+				 */
+				ExecStoreTuple(aggstate->grp_firstTuple,
+							   firstSlot,
+							   InvalidBuffer,
+							   true);
+				aggstate->grp_firstTuple = NULL;	/* don't keep two pointers */
+
+				/* set up for first advance_aggregates call */
+				tmpcontext->ecxt_outertuple = firstSlot;
 
 				/*
-				 * If we are grouping, check whether we've crossed a group
-				 * boundary.
+				 * Process each outer-plan tuple, and then fetch the next one,
+				 * until we exhaust the outer plan or cross a group boundary.
 				 */
-				if (node->aggstrategy == AGG_SORTED)
+				for (;;)
 				{
-					if (!execTuplesMatch(firstSlot,
-										 outerslot,
-										 node->numCols, node->grpColIdx,
-										 aggstate->eqfunctions,
-										 tmpcontext->ecxt_per_tuple_memory))
+					advance_aggregates(aggstate, pergroup);
+
+					/* Reset per-input-tuple context after each tuple */
+					ResetExprContext(tmpcontext);
+
+					outerslot = ExecProcNode(outerPlan);
+					if (TupIsNull(outerslot))
 					{
-						/*
-						 * Save the first input tuple of the next group.
-						 */
-						aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
-						break;
+						/* no more outer-plan tuples available */
+						if (hasRollup)
+						{
+							aggstate->input_done = true;
+							break;
+						}
+						else
+						{
+							aggstate->agg_done = true;
+							break;
+						}
+					}
+					/* set up for next advance_aggregates call */
+					tmpcontext->ecxt_outertuple = outerslot;
+
+					/*
+					 * If we are grouping, check whether we've crossed a group
+					 * boundary.
+					 */
+					if (node->aggstrategy == AGG_SORTED)
+					{
+						if (!execTuplesMatch(firstSlot,
+											 outerslot,
+											 node->numCols,
+											 node->grpColIdx,
+											 aggstate->eqfunctions,
+											 tmpcontext->ecxt_per_tuple_memory))
+						{
+							aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
+							break;
+						}
 					}
 				}
 			}
+
+			/*
+			 * Use the representative input tuple for any references to
+			 * non-aggregated input columns in aggregate direct args, the node
+			 * qual, and the tlist.  (If we are not grouping, and there are no
+			 * input rows at all, we will come here with an empty firstSlot ...
+			 * but if not grouping, there can't be any references to
+			 * non-aggregated input columns, so no problem.)
+			 */
+			econtext->ecxt_outertuple = firstSlot;
 		}
 
-		/*
-		 * Use the representative input tuple for any references to
-		 * non-aggregated input columns in aggregate direct args, the node
-		 * qual, and the tlist.  (If we are not grouping, and there are no
-		 * input rows at all, we will come here with an empty firstSlot ...
-		 * but if not grouping, there can't be any references to
-		 * non-aggregated input columns, so no problem.)
-		 */
-		econtext->ecxt_outertuple = firstSlot;
+		Assert(aggstate->projected_set >= 0);
+
+		aggstate->current_set = currentGroup = aggstate->projected_set;
 
-		/*
-		 * Done scanning input tuple group. Finalize each aggregate
-		 * calculation, and stash results in the per-output-tuple context.
-		 */
 		for (aggno = 0; aggno < aggstate->numaggs; aggno++)
 		{
 			AggStatePerAgg peraggstate = &peragg[aggno];
-			AggStatePerGroup pergroupstate = &pergroup[aggno];
+			AggStatePerGroup pergroupstate;
+
+			pergroupstate = &pergroup[aggno + (currentGroup * aggstate->numaggs)];
 
 			if (peraggstate->numSortCols > 0)
 			{
@@ -1292,6 +1440,9 @@ agg_retrieve_direct(AggState *aggstate)
 							   &aggvalues[aggno], &aggnulls[aggno]);
 		}
 
+		if (hasRollup)
+			econtext->grouped_cols = aggstate->grouped_cols[currentGroup];
+
 		/*
 		 * Check the qual (HAVING clause); if the group does not match, ignore
 		 * it and loop back to try to process another group.
@@ -1495,6 +1646,10 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	int			numaggs,
 				aggno;
 	ListCell   *l;
+	int			numGroupingSets = 1;
+	int			currentsortno = 0;
+	int			i = 0;
+	int			j = 0;
 
 	/* check for unsupported flags */
 	Assert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK)));
@@ -1508,38 +1663,69 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 
 	aggstate->aggs = NIL;
 	aggstate->numaggs = 0;
+	aggstate->numsets = 0;
 	aggstate->eqfunctions = NULL;
 	aggstate->hashfunctions = NULL;
+	aggstate->projected_set = -1;
+	aggstate->current_set = 0;
 	aggstate->peragg = NULL;
 	aggstate->curperagg = NULL;
 	aggstate->agg_done = false;
+	aggstate->input_done = false;
 	aggstate->pergroup = NULL;
 	aggstate->grp_firstTuple = NULL;
 	aggstate->hashtable = NULL;
 
+	if (node->groupingSets)
+	{
+		Assert(node->aggstrategy != AGG_HASHED);
+
+		numGroupingSets = list_length(node->groupingSets);
+		aggstate->numsets = numGroupingSets;
+		aggstate->gset_lengths = palloc(numGroupingSets * sizeof(int));
+		aggstate->grouped_cols = palloc(numGroupingSets * sizeof(Bitmapset *));
+
+		i = 0;
+		foreach(l, node->groupingSets)
+		{
+			int current_length = list_length(lfirst(l));
+			Bitmapset *cols = NULL;
+
+			/* planner forces this to be correct */
+			for (j = 0; j < current_length; ++j)
+				cols = bms_add_member(cols, node->grpColIdx[j]);
+
+			aggstate->grouped_cols[i] = cols;
+			aggstate->gset_lengths[i] = current_length;
+			++i;
+		}
+	}
+
+	aggstate->aggcontext = (ExprContext **) palloc0(sizeof(ExprContext *) * numGroupingSets);
+
 	/*
-	 * Create expression contexts.  We need two, one for per-input-tuple
-	 * processing and one for per-output-tuple processing.  We cheat a little
-	 * by using ExecAssignExprContext() to build both.
+	 * Create expression contexts.  We need three or more, one for
+	 * per-input-tuple processing, one for per-output-tuple processing, and one
+	 * for each grouping set.  The per-tuple memory context of the
+	 * per-grouping-set ExprContexts replaces the standalone memory context
+	 * formerly used to hold transition values.  We cheat a little by using
+	 * ExecAssignExprContext() to build all of them.
+	 *
+	 * NOTE: the details of what is stored in aggcontext and what is stored in
+	 * the regular per-query memory context are driven by a simple decision: we
+	 * want to reset the aggcontext at group boundaries (if not hashing) and in
+	 * ExecReScanAgg to recover no-longer-wanted space.
 	 */
 	ExecAssignExprContext(estate, &aggstate->ss.ps);
 	aggstate->tmpcontext = aggstate->ss.ps.ps_ExprContext;
-	ExecAssignExprContext(estate, &aggstate->ss.ps);
 
-	/*
-	 * We also need a long-lived memory context for holding hashtable data
-	 * structures and transition values.  NOTE: the details of what is stored
-	 * in aggcontext and what is stored in the regular per-query memory
-	 * context are driven by a simple decision: we want to reset the
-	 * aggcontext at group boundaries (if not hashing) and in ExecReScanAgg to
-	 * recover no-longer-wanted space.
-	 */
-	aggstate->aggcontext =
-		AllocSetContextCreate(CurrentMemoryContext,
-							  "AggContext",
-							  ALLOCSET_DEFAULT_MINSIZE,
-							  ALLOCSET_DEFAULT_INITSIZE,
-							  ALLOCSET_DEFAULT_MAXSIZE);
+	for (i = 0; i < numGroupingSets; ++i)
+	{
+		ExecAssignExprContext(estate, &aggstate->ss.ps);
+		aggstate->aggcontext[i] = aggstate->ss.ps.ps_ExprContext;
+	}
+
+	ExecAssignExprContext(estate, &aggstate->ss.ps);
 
 	/*
 	 * tuple table initialization
@@ -1645,7 +1831,8 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	{
 		AggStatePerGroup pergroup;
 
-		pergroup = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData) * numaggs);
+		pergroup = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData) * numaggs * numGroupingSets);
+
 		aggstate->pergroup = pergroup;
 	}
 
@@ -1708,7 +1895,10 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 		/* Begin filling in the peraggstate data */
 		peraggstate->aggrefstate = aggrefstate;
 		peraggstate->aggref = aggref;
-		peraggstate->sortstate = NULL;
+		peraggstate->sortstate = (Tuplesortstate **) palloc0(sizeof(Tuplesortstate *) * numGroupingSets);
+
+		for (currentsortno = 0; currentsortno < numGroupingSets; currentsortno++)
+			peraggstate->sortstate[currentsortno] = NULL;
 
 		/* Fetch the pg_aggregate row */
 		aggTuple = SearchSysCache1(AGGFNOID,
@@ -2016,31 +2206,35 @@ ExecEndAgg(AggState *node)
 {
 	PlanState  *outerPlan;
 	int			aggno;
+	int			numGroupingSets = Max(node->numsets, 1);
+	int			i = 0;
 
 	/* Make sure we have closed any open tuplesorts */
 	for (aggno = 0; aggno < node->numaggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &node->peragg[aggno];
 
-		if (peraggstate->sortstate)
-			tuplesort_end(peraggstate->sortstate);
+		for (i = 0; i < numGroupingSets; i++)
+		{
+			if (peraggstate->sortstate[i])
+				tuplesort_end(peraggstate->sortstate[i]);
+		}
 	}
 
 	/* And ensure any agg shutdown callbacks have been called */
-	ReScanExprContext(node->ss.ps.ps_ExprContext);
+	for (i = 0; i < numGroupingSets; ++i)
+		ReScanExprContext(node->aggcontext[i]);
 
 	/*
-	 * Free both the expr contexts.
+	 * We don't actually free any ExprContexts here (see the comment in
+	 * ExecFreeExprContext); just unlinking the output one from the plan
+	 * node suffices.
 	 */
 	ExecFreeExprContext(&node->ss.ps);
-	node->ss.ps.ps_ExprContext = node->tmpcontext;
-	ExecFreeExprContext(&node->ss.ps);
 
 	/* clean up tuple table */
 	ExecClearTuple(node->ss.ss_ScanTupleSlot);
 
-	MemoryContextDelete(node->aggcontext);
-
 	outerPlan = outerPlanState(node);
 	ExecEndNode(outerPlan);
 }
@@ -2049,13 +2243,17 @@ void
 ExecReScanAgg(AggState *node)
 {
 	ExprContext *econtext = node->ss.ps.ps_ExprContext;
+	Agg		   *aggnode = (Agg *) node->ss.ps.plan;
 	int			aggno;
+	int			numGroupingSets = Max(node->numsets, 1);
+	int			groupno;
+	int			i;
 
 	node->agg_done = false;
 
 	node->ss.ps.ps_TupFromTlist = false;
 
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+	if (aggnode->aggstrategy == AGG_HASHED)
 	{
 		/*
 		 * In the hashed case, if we haven't yet built the hash table then we
@@ -2081,14 +2279,35 @@ ExecReScanAgg(AggState *node)
 	/* Make sure we have closed any open tuplesorts */
 	for (aggno = 0; aggno < node->numaggs; aggno++)
 	{
-		AggStatePerAgg peraggstate = &node->peragg[aggno];
+		for (groupno = 0; groupno < numGroupingSets; groupno++)
+		{
+			AggStatePerAgg peraggstate = &node->peragg[aggno];
 
-		if (peraggstate->sortstate)
-			tuplesort_end(peraggstate->sortstate);
-		peraggstate->sortstate = NULL;
+			if (peraggstate->sortstate[groupno])
+			{
+				tuplesort_end(peraggstate->sortstate[groupno]);
+				peraggstate->sortstate[groupno] = NULL;
+			}
+		}
 	}
 
-	/* We don't need to ReScanExprContext here; ExecReScan already did it */
+	/*
+	 * We don't need to ReScanExprContext the output tuple context here;
+	 * ExecReScan already did it. But we do need to reset our per-grouping-set
+	 * contexts, which may have transvalues stored in them.
+	 *
+	 * Note that with AGG_HASHED, the hash table is allocated in a sub-context
+	 * of the aggcontext. We're going to rebuild the hash table from scratch,
+	 * so we need to use MemoryContextDeleteChildren() to avoid leaking the old
+	 * hash table's memory context header. (ReScanExprContext does the actual
+	 * reset, but it doesn't delete child contexts.)
+	 */
+
+	for (i = 0; i < numGroupingSets; ++i)
+	{
+		ReScanExprContext(node->aggcontext[i]);
+		MemoryContextDeleteChildren(node->aggcontext[i]->ecxt_per_tuple_memory);
+	}
 
 	/* Release first tuple of group, if we have made a copy */
 	if (node->grp_firstTuple != NULL)
@@ -2096,21 +2315,13 @@ ExecReScanAgg(AggState *node)
 		heap_freetuple(node->grp_firstTuple);
 		node->grp_firstTuple = NULL;
 	}
+	ExecClearTuple(node->ss.ss_ScanTupleSlot);
 
 	/* Forget current agg values */
 	MemSet(econtext->ecxt_aggvalues, 0, sizeof(Datum) * node->numaggs);
 	MemSet(econtext->ecxt_aggnulls, 0, sizeof(bool) * node->numaggs);
 
-	/*
-	 * Release all temp storage. Note that with AGG_HASHED, the hash table is
-	 * allocated in a sub-context of the aggcontext. We're going to rebuild
-	 * the hash table from scratch, so we need to use
-	 * MemoryContextResetAndDeleteChildren() to avoid leaking the old hash
-	 * table's memory context header.
-	 */
-	MemoryContextResetAndDeleteChildren(node->aggcontext);
-
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+	if (aggnode->aggstrategy == AGG_HASHED)
 	{
 		/* Rebuild an empty hash table */
 		build_hash_table(node);
@@ -2122,7 +2333,9 @@ ExecReScanAgg(AggState *node)
 		 * Reset the per-group state (in particular, mark transvalues null)
 		 */
 		MemSet(node->pergroup, 0,
-			   sizeof(AggStatePerGroupData) * node->numaggs);
+			   sizeof(AggStatePerGroupData) * node->numaggs * numGroupingSets);
+
+		node->input_done = false;
 	}
 
 	/*
@@ -2150,8 +2363,11 @@ ExecReScanAgg(AggState *node)
  * values could conceivably appear in future.)
  *
  * If aggcontext isn't NULL, the function also stores at *aggcontext the
- * identity of the memory context that aggregate transition values are
- * being stored in.
+ * identity of the memory context that aggregate transition values are being
+ * stored in.  Note that the same aggregate call site (flinfo) may be called
+ * interleaved on different transition values in different contexts, so it's
+ * not kosher to cache aggcontext under fn_extra.  It is, however, kosher to
+ * cache it in the transvalue itself (for internal-type transvalues).
  */
 int
 AggCheckCallContext(FunctionCallInfo fcinfo, MemoryContext *aggcontext)
@@ -2159,7 +2375,11 @@ AggCheckCallContext(FunctionCallInfo fcinfo, MemoryContext *aggcontext)
 	if (fcinfo->context && IsA(fcinfo->context, AggState))
 	{
 		if (aggcontext)
-			*aggcontext = ((AggState *) fcinfo->context)->aggcontext;
+		{
+			AggState   *aggstate = (AggState *) fcinfo->context;
+			ExprContext *cxt = aggstate->aggcontext[aggstate->current_set];
+			*aggcontext = cxt->ecxt_per_tuple_memory;
+		}
 		return AGG_CONTEXT_AGGREGATE;
 	}
 	if (fcinfo->context && IsA(fcinfo->context, WindowAggState))
@@ -2243,8 +2463,9 @@ AggRegisterCallback(FunctionCallInfo fcinfo,
 	if (fcinfo->context && IsA(fcinfo->context, AggState))
 	{
 		AggState   *aggstate = (AggState *) fcinfo->context;
+		ExprContext *cxt = aggstate->aggcontext[aggstate->current_set];
 
-		RegisterExprContextCallback(aggstate->ss.ps.ps_ExprContext, func, arg);
+		RegisterExprContextCallback(cxt, func, arg);
 
 		return;
 	}
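To summarize the group-boundary logic in agg_retrieve_direct() above: after projecting grouping set i, set i+1 is also projected when it has at least one grouping column and its grouping-column prefix differs between the previous and pending rows. A minimal standalone sketch, with plain int arrays standing in for tuples and for execTuplesMatch (all names here are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in for execTuplesMatch: do two rows agree on
 * their first n grouping columns?  ROLLUP(a,b,c) sorts rows on
 * (a,b,c) and uses prefix lengths {3,2,1,0}. */
bool prefix_match(const int *row1, const int *row2, int n)
{
    for (int i = 0; i < n; i++)
        if (row1[i] != row2[i])
            return false;
    return true;
}

/* Mirror of the "new subgroup" test: after projecting grouping set
 * projected_set, the next set also closes if it has at least one
 * grouping column and the pending row differs from the previous row
 * on that prefix.  (Empty sets project only once input is done.) */
bool next_set_closes(const int *prev, const int *next,
                     const int *gset_lengths, int projected_set,
                     int numsets)
{
    if (projected_set < 0 || projected_set >= numsets - 1)
        return false;
    int len = gset_lengths[projected_set + 1];
    return len > 0 && !prefix_match(prev, next, len);
}
```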
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index f5ddc1c..72dc86b 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -779,6 +779,7 @@ _copyAgg(const Agg *from)
 		COPY_POINTER_FIELD(grpOperators, from->numCols * sizeof(Oid));
 	}
 	COPY_SCALAR_FIELD(numGroups);
+	COPY_NODE_FIELD(groupingSets);
 
 	return newnode;
 }
@@ -1065,6 +1066,59 @@ _copyVar(const Var *from)
 }
 
 /*
+ * _copyGrouping
+ */
+static Grouping *
+_copyGrouping(const Grouping *from)
+{
+	Grouping		   *newnode = makeNode(Grouping);
+
+	COPY_NODE_FIELD(args);
+	COPY_NODE_FIELD(refs);
+	COPY_NODE_FIELD(cols);
+	COPY_LOCATION_FIELD(location);
+	COPY_SCALAR_FIELD(agglevelsup);
+
+	return newnode;
+}
+
+/*
+ * _copyGroupedVar
+ */
+static GroupedVar *
+_copyGroupedVar(const GroupedVar *from)
+{
+	GroupedVar		   *newnode = makeNode(GroupedVar);
+
+	COPY_SCALAR_FIELD(varno);
+	COPY_SCALAR_FIELD(varattno);
+	COPY_SCALAR_FIELD(vartype);
+	COPY_SCALAR_FIELD(vartypmod);
+	COPY_SCALAR_FIELD(varcollid);
+	COPY_SCALAR_FIELD(varlevelsup);
+	COPY_SCALAR_FIELD(varnoold);
+	COPY_SCALAR_FIELD(varoattno);
+	COPY_LOCATION_FIELD(location);
+
+	return newnode;
+}
+
+/*
+ * _copyGroupingSet
+ */
+static GroupingSet *
+_copyGroupingSet(const GroupingSet *from)
+{
+	GroupingSet		   *newnode = makeNode(GroupingSet);
+
+	COPY_SCALAR_FIELD(kind);
+	COPY_NODE_FIELD(content);
+	COPY_LOCATION_FIELD(location);
+
+	return newnode;
+}
+
+/*
  * _copyConst
  */
 static Const *
@@ -2495,6 +2549,7 @@ _copyQuery(const Query *from)
 	COPY_NODE_FIELD(withCheckOptions);
 	COPY_NODE_FIELD(returningList);
 	COPY_NODE_FIELD(groupClause);
+	COPY_NODE_FIELD(groupingSets);
 	COPY_NODE_FIELD(havingQual);
 	COPY_NODE_FIELD(windowClause);
 	COPY_NODE_FIELD(distinctClause);
@@ -4078,6 +4133,15 @@ copyObject(const void *from)
 		case T_Var:
 			retval = _copyVar(from);
 			break;
+		case T_GroupedVar:
+			retval = _copyGroupedVar(from);
+			break;
+		case T_Grouping:
+			retval = _copyGrouping(from);
+			break;
+		case T_GroupingSet:
+			retval = _copyGroupingSet(from);
+			break;
 		case T_Const:
 			retval = _copyConst(from);
 			break;
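For context on the Grouping node being added to copyfuncs.c here: GROUPING(e1,...,en) evaluates per the spec to an n-bit integer whose bits are 1 exactly for the arguments absent from the current grouping set, with e1 supplying the most significant bit. A minimal sketch of that computation (the per-argument flag array is illustrative; the executor derives the information from econtext->grouped_cols):

```c
#include <assert.h>
#include <stdbool.h>

/* Compute the GROUPING(e1,...,en) value from a per-argument
 * "is this argument part of the current grouping set" flag.
 * A bit is 1 when the argument is NOT grouped. */
int grouping_value(const bool *arg_is_grouped, int nargs)
{
    int result = 0;
    for (int i = 0; i < nargs; i++)
    {
        result <<= 1;            /* earlier args occupy higher bits */
        if (!arg_is_grouped[i])
            result |= 1;
    }
    return result;
}
```

So for ROLLUP(a,b), the (a,b) rows get GROUPING(a,b) = 0, the (a) subtotals get 1, and the grand total gets 3.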
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index ccd6064..43a01b2 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -153,6 +153,47 @@ _equalVar(const Var *a, const Var *b)
 }
 
 static bool
+_equalGrouping(const Grouping *a, const Grouping *b)
+{
+	COMPARE_NODE_FIELD(args);
+
+	/*
+	 * We must not compare the refs or cols fields.
+	 */
+
+	COMPARE_LOCATION_FIELD(location);
+	COMPARE_SCALAR_FIELD(agglevelsup);
+
+	return true;
+}
+
+static bool
+_equalGroupedVar(const GroupedVar *a, const GroupedVar *b)
+{
+	COMPARE_SCALAR_FIELD(varno);
+	COMPARE_SCALAR_FIELD(varattno);
+	COMPARE_SCALAR_FIELD(vartype);
+	COMPARE_SCALAR_FIELD(vartypmod);
+	COMPARE_SCALAR_FIELD(varcollid);
+	COMPARE_SCALAR_FIELD(varlevelsup);
+	COMPARE_SCALAR_FIELD(varnoold);
+	COMPARE_SCALAR_FIELD(varoattno);
+	COMPARE_LOCATION_FIELD(location);
+
+	return true;
+}
+
+static bool
+_equalGroupingSet(const GroupingSet *a, const GroupingSet *b)
+{
+	COMPARE_SCALAR_FIELD(kind);
+	COMPARE_NODE_FIELD(content);
+	COMPARE_LOCATION_FIELD(location);
+
+	return true;
+}
+
+static bool
 _equalConst(const Const *a, const Const *b)
 {
 	COMPARE_SCALAR_FIELD(consttype);
@@ -864,6 +905,7 @@ _equalQuery(const Query *a, const Query *b)
 	COMPARE_NODE_FIELD(withCheckOptions);
 	COMPARE_NODE_FIELD(returningList);
 	COMPARE_NODE_FIELD(groupClause);
+	COMPARE_NODE_FIELD(groupingSets);
 	COMPARE_NODE_FIELD(havingQual);
 	COMPARE_NODE_FIELD(windowClause);
 	COMPARE_NODE_FIELD(distinctClause);
@@ -2555,6 +2597,15 @@ equal(const void *a, const void *b)
 		case T_Var:
 			retval = _equalVar(a, b);
 			break;
+		case T_GroupedVar:
+			retval = _equalGroupedVar(a, b);
+			break;
+		case T_Grouping:
+			retval = _equalGrouping(a, b);
+			break;
+		case T_GroupingSet:
+			retval = _equalGroupingSet(a, b);
+			break;
 		case T_Const:
 			retval = _equalConst(a, b);
 			break;
diff --git a/src/backend/nodes/list.c b/src/backend/nodes/list.c
index 5c09d2f..f878d1f 100644
--- a/src/backend/nodes/list.c
+++ b/src/backend/nodes/list.c
@@ -823,6 +823,32 @@ list_intersection(const List *list1, const List *list2)
 }
 
 /*
+ * As list_intersection but operates on lists of integers.
+ */
+List *
+list_intersection_int(const List *list1, const List *list2)
+{
+	List	   *result;
+	const ListCell *cell;
+
+	if (list1 == NIL || list2 == NIL)
+		return NIL;
+
+	Assert(IsIntegerList(list1));
+	Assert(IsIntegerList(list2));
+
+	result = NIL;
+	foreach(cell, list1)
+	{
+		if (list_member_int(list2, lfirst_int(cell)))
+			result = lappend_int(result, lfirst_int(cell));
+	}
+
+	check_list_invariants(result);
+	return result;
+}
+
+/*
  * Return a list that contains all the cells in list1 that are not in
  * list2. The returned list is freshly allocated via palloc(), but the
  * cells themselves point to the same objects as the cells of the
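The new list_intersection_int() above follows list_intersection's semantics on integer lists. A standalone sketch of the contract it implements, using plain arrays in place of PostgreSQL Lists (all names here are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/* Keep every cell of list1 whose value appears anywhere in list2,
 * preserving list1's order and duplicates (matching the per-cell
 * list_member_int test in list_intersection_int).  Returns the
 * number of values written to out. */
int intersect_ints(const int *l1, int n1, const int *l2, int n2, int *out)
{
    int count = 0;
    for (int i = 0; i < n1; i++)
    {
        bool found = false;
        for (int j = 0; j < n2; j++)
        {
            if (l2[j] == l1[i])
            {
                found = true;
                break;
            }
        }
        if (found)
            out[count++] = l1[i];
    }
    return count;
}
```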
diff --git a/src/backend/nodes/makefuncs.c b/src/backend/nodes/makefuncs.c
index da59c58..e930cef 100644
--- a/src/backend/nodes/makefuncs.c
+++ b/src/backend/nodes/makefuncs.c
@@ -554,3 +554,18 @@ makeFuncCall(List *name, List *args, int location)
 	n->location = location;
 	return n;
 }
+
+/*
+ * makeGroupingSet
+ *	  Create a GroupingSet node with the given kind, content list, and location.
+ */
+GroupingSet *
+makeGroupingSet(GroupingSetKind kind, List *content, int location)
+{
+	GroupingSet	   *n = makeNode(GroupingSet);
+
+	n->kind = kind;
+	n->content = content;
+	n->location = location;
+	return n;
+}
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index 41e973b..6a63d1b 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -45,6 +45,12 @@ exprType(const Node *expr)
 		case T_Var:
 			type = ((const Var *) expr)->vartype;
 			break;
+		case T_Grouping:
+			type = INT4OID;
+			break;
+		case T_GroupedVar:
+			type = ((const GroupedVar *) expr)->vartype;
+			break;
 		case T_Const:
 			type = ((const Const *) expr)->consttype;
 			break;
@@ -261,6 +267,10 @@ exprTypmod(const Node *expr)
 	{
 		case T_Var:
 			return ((const Var *) expr)->vartypmod;
+		case T_Grouping:
+			return -1;
+		case T_GroupedVar:
+			return ((const GroupedVar *) expr)->vartypmod;
 		case T_Const:
 			return ((const Const *) expr)->consttypmod;
 		case T_Param:
@@ -734,6 +744,12 @@ exprCollation(const Node *expr)
 		case T_Var:
 			coll = ((const Var *) expr)->varcollid;
 			break;
+		case T_Grouping:
+			coll = InvalidOid;
+			break;
+		case T_GroupedVar:
+			coll = ((const GroupedVar *) expr)->varcollid;
+			break;
 		case T_Const:
 			coll = ((const Const *) expr)->constcollid;
 			break;
@@ -967,6 +983,9 @@ exprSetCollation(Node *expr, Oid collation)
 		case T_Var:
 			((Var *) expr)->varcollid = collation;
 			break;
+		case T_GroupedVar:
+			((GroupedVar *) expr)->varcollid = collation;
+			break;
 		case T_Const:
 			((Const *) expr)->constcollid = collation;
 			break;
@@ -1003,6 +1022,9 @@ exprSetCollation(Node *expr, Oid collation)
 		case T_BoolExpr:
 			Assert(!OidIsValid(collation));		/* result is always boolean */
 			break;
+		case T_Grouping:
+			Assert(!OidIsValid(collation));
+			break;
 		case T_SubLink:
 #ifdef USE_ASSERT_CHECKING
 			{
@@ -1182,6 +1204,15 @@ exprLocation(const Node *expr)
 		case T_Var:
 			loc = ((const Var *) expr)->location;
 			break;
+		case T_Grouping:
+			loc = ((const Grouping *) expr)->location;
+			break;
+		case T_GroupedVar:
+			loc = ((const GroupedVar *) expr)->location;
+			break;
+		case T_GroupingSet:
+			loc = ((const GroupingSet *) expr)->location;
+			break;
 		case T_Const:
 			loc = ((const Const *) expr)->location;
 			break;
@@ -1622,6 +1653,7 @@ expression_tree_walker(Node *node,
 	switch (nodeTag(node))
 	{
 		case T_Var:
+		case T_GroupedVar:
 		case T_Const:
 		case T_Param:
 		case T_CoerceToDomainValue:
@@ -1655,6 +1687,15 @@ expression_tree_walker(Node *node,
 					return true;
 			}
 			break;
+		case T_Grouping:
+			{
+				Grouping   *grouping = (Grouping *) node;
+
+				if (expression_tree_walker((Node *) grouping->args,
+										   walker, context))
+					return true;
+			}
+			break;
 		case T_WindowFunc:
 			{
 				WindowFunc *expr = (WindowFunc *) node;
@@ -2144,6 +2185,15 @@ expression_tree_mutator(Node *node,
 				return (Node *) newnode;
 			}
 			break;
+		case T_GroupedVar:
+			{
+				GroupedVar         *groupedvar = (GroupedVar *) node;
+				GroupedVar		   *newnode;
+
+				FLATCOPY(newnode, groupedvar, GroupedVar);
+				return (Node *) newnode;
+			}
+			break;
 		case T_Const:
 			{
 				Const	   *oldnode = (Const *) node;
@@ -2162,6 +2212,17 @@ expression_tree_mutator(Node *node,
 		case T_RangeTblRef:
 		case T_SortGroupClause:
 			return (Node *) copyObject(node);
+		case T_Grouping:
+			{
+				Grouping	   *grouping = (Grouping *) node;
+				Grouping	   *newnode;
+
+				FLATCOPY(newnode, grouping, Grouping);
+				MUTATE(newnode->args, grouping->args, List *);
+				/* assume no need to copy or mutate the refs or cols lists */
+				return (Node *) newnode;
+			}
+			break;
 		case T_WithCheckOption:
 			{
 				WithCheckOption *wco = (WithCheckOption *) node;
@@ -3209,6 +3270,8 @@ raw_expression_tree_walker(Node *node,
 			return walker(((WithClause *) node)->ctes, context);
 		case T_CommonTableExpr:
 			return walker(((CommonTableExpr *) node)->ctequery, context);
+		case T_GroupingSet:
+			return walker(((GroupingSet *) node)->content, context);
 		default:
 			elog(ERROR, "unrecognized node type: %d",
 				 (int) nodeTag(node));
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index e686a6c..6e4efb4 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -643,6 +643,8 @@ _outAgg(StringInfo str, const Agg *node)
 		appendStringInfo(str, " %u", node->grpOperators[i]);
 
 	WRITE_LONG_FIELD(numGroups);
+
+	WRITE_NODE_FIELD(groupingSets);
 }
 
 static void
@@ -912,6 +914,44 @@ _outVar(StringInfo str, const Var *node)
 }
 
 static void
+_outGrouping(StringInfo str, const Grouping *node)
+{
+	WRITE_NODE_TYPE("GROUPING");
+
+	WRITE_NODE_FIELD(args);
+	WRITE_NODE_FIELD(refs);
+	WRITE_NODE_FIELD(cols);
+	WRITE_LOCATION_FIELD(location);
+	WRITE_INT_FIELD(agglevelsup);
+}
+
+static void
+_outGroupedVar(StringInfo str, const GroupedVar *node)
+{
+	WRITE_NODE_TYPE("GROUPEDVAR");
+
+	WRITE_UINT_FIELD(varno);
+	WRITE_INT_FIELD(varattno);
+	WRITE_OID_FIELD(vartype);
+	WRITE_INT_FIELD(vartypmod);
+	WRITE_OID_FIELD(varcollid);
+	WRITE_UINT_FIELD(varlevelsup);
+	WRITE_UINT_FIELD(varnoold);
+	WRITE_INT_FIELD(varoattno);
+	WRITE_LOCATION_FIELD(location);
+}
+
+static void
+_outGroupingSet(StringInfo str, const GroupingSet *node)
+{
+	WRITE_NODE_TYPE("GROUPINGSET");
+
+	WRITE_ENUM_FIELD(kind, GroupingSetKind);
+	WRITE_NODE_FIELD(content);
+	WRITE_LOCATION_FIELD(location);
+}
+
+static void
 _outConst(StringInfo str, const Const *node)
 {
 	WRITE_NODE_TYPE("CONST");
@@ -2270,6 +2310,7 @@ _outQuery(StringInfo str, const Query *node)
 	WRITE_NODE_FIELD(withCheckOptions);
 	WRITE_NODE_FIELD(returningList);
 	WRITE_NODE_FIELD(groupClause);
+	WRITE_NODE_FIELD(groupingSets);
 	WRITE_NODE_FIELD(havingQual);
 	WRITE_NODE_FIELD(windowClause);
 	WRITE_NODE_FIELD(distinctClause);
@@ -2914,6 +2955,15 @@ _outNode(StringInfo str, const void *obj)
 			case T_Var:
 				_outVar(str, obj);
 				break;
+			case T_GroupedVar:
+				_outGroupedVar(str, obj);
+				break;
+			case T_Grouping:
+				_outGrouping(str, obj);
+				break;
+			case T_GroupingSet:
+				_outGroupingSet(str, obj);
+				break;
 			case T_Const:
 				_outConst(str, obj);
 				break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 69d9989..a58e099 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -215,6 +215,7 @@ _readQuery(void)
 	READ_NODE_FIELD(withCheckOptions);
 	READ_NODE_FIELD(returningList);
 	READ_NODE_FIELD(groupClause);
+	READ_NODE_FIELD(groupingSets);
 	READ_NODE_FIELD(havingQual);
 	READ_NODE_FIELD(windowClause);
 	READ_NODE_FIELD(distinctClause);
@@ -439,6 +440,53 @@ _readVar(void)
 	READ_DONE();
 }
 
+static Grouping *
+_readGrouping(void)
+{
+	READ_LOCALS(Grouping);
+
+	READ_NODE_FIELD(args);
+	READ_NODE_FIELD(refs);
+	READ_NODE_FIELD(cols);
+	READ_LOCATION_FIELD(location);
+	READ_INT_FIELD(agglevelsup);
+
+	READ_DONE();
+}
+
+/*
+ * _readGroupedVar
+ */
+static GroupedVar *
+_readGroupedVar(void)
+{
+	READ_LOCALS(GroupedVar);
+
+	READ_UINT_FIELD(varno);
+	READ_INT_FIELD(varattno);
+	READ_OID_FIELD(vartype);
+	READ_INT_FIELD(vartypmod);
+	READ_OID_FIELD(varcollid);
+	READ_UINT_FIELD(varlevelsup);
+	READ_UINT_FIELD(varnoold);
+	READ_INT_FIELD(varoattno);
+	READ_LOCATION_FIELD(location);
+
+	READ_DONE();
+}
+
+static GroupingSet *
+_readGroupingSet(void)
+{
+	READ_LOCALS(GroupingSet);
+
+	READ_ENUM_FIELD(kind, GroupingSetKind);
+	READ_NODE_FIELD(content);
+	READ_LOCATION_FIELD(location);
+
+	READ_DONE();
+}
+
 /*
  * _readConst
  */
@@ -1320,6 +1368,12 @@ parseNodeString(void)
 		return_value = _readIntoClause();
 	else if (MATCH("VAR", 3))
 		return_value = _readVar();
+	else if (MATCH("GROUPEDVAR", 10))
+		return_value = _readGroupedVar();
+	else if (MATCH("GROUPING", 8))
+		return_value = _readGrouping();
+	else if (MATCH("GROUPINGSET", 11))
+		return_value = _readGroupingSet();
 	else if (MATCH("CONST", 5))
 		return_value = _readConst();
 	else if (MATCH("PARAM", 5))
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index c81efe9..a16df6f 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -1231,6 +1231,7 @@ set_subquery_pathlist(PlannerInfo *root, RelOptInfo *rel,
 	 */
 	if (parse->hasAggs ||
 		parse->groupClause ||
+		parse->groupingSets ||
 		parse->havingQual ||
 		parse->distinctClause ||
 		parse->sortClause ||
@@ -2104,7 +2105,7 @@ subquery_push_qual(Query *subquery, RangeTblEntry *rte, Index rti, Node *qual)
 		 * subquery uses grouping or aggregation, put it in HAVING (since the
 		 * qual really refers to the group-result rows).
 		 */
-		if (subquery->hasAggs || subquery->groupClause || subquery->havingQual)
+		if (subquery->hasAggs || subquery->groupClause || subquery->groupingSets || subquery->havingQual)
 			subquery->havingQual = make_and_qual(subquery->havingQual, qual);
 		else
 			subquery->jointree->quals =
diff --git a/src/backend/optimizer/plan/analyzejoins.c b/src/backend/optimizer/plan/analyzejoins.c
index 773f8a4..e8b6671 100644
--- a/src/backend/optimizer/plan/analyzejoins.c
+++ b/src/backend/optimizer/plan/analyzejoins.c
@@ -580,6 +580,7 @@ query_supports_distinctness(Query *query)
 {
 	if (query->distinctClause != NIL ||
 		query->groupClause != NIL ||
+		query->groupingSets != NIL ||
 		query->hasAggs ||
 		query->havingQual ||
 		query->setOperations)
@@ -648,10 +649,10 @@ query_is_distinct_for(Query *query, List *colnos, List *opids)
 	}
 
 	/*
-	 * Similarly, GROUP BY guarantees uniqueness if all the grouped columns
-	 * appear in colnos and operator semantics match.
+	 * Similarly, GROUP BY without GROUPING SETS guarantees uniqueness if all
+	 * the grouped columns appear in colnos and operator semantics match.
 	 */
-	if (query->groupClause)
+	if (query->groupClause && !query->groupingSets)
 	{
 		foreach(l, query->groupClause)
 		{
@@ -667,6 +668,27 @@ query_is_distinct_for(Query *query, List *colnos, List *opids)
 		if (l == NULL)			/* had matches for all? */
 			return true;
 	}
+	else if (query->groupingSets)
+	{
+		/*
+		 * If the grouping sets involve any grouping expressions (i.e. the
+		 * groupClause is nonempty), the analysis would be hard; punt.
+		 */
+		if (query->groupClause)
+			return false;
+
+		/*
+		 * If we have no groupClause (therefore no grouping expressions),
+		 * we might have one or many empty grouping sets. If there's just
+		 * one, then we're returning only one row and are certainly unique.
+		 * Otherwise, we are certainly not unique.
+		 */
+		if (list_length(query->groupingSets) == 1
+			&& ((GroupingSet *)linitial(query->groupingSets))->kind == GROUPING_SET_EMPTY)
+			return true;
+		else
+			return false;
+	}
 	else
 	{
 		/*
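The uniqueness rule added to query_is_distinct_for above reduces to a small decision function. Here is a minimal standalone sketch of that rule using plain counts instead of node lists; the function name and parameters are illustrative, not part of the patch:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Under grouping sets, the query output is provably unique only in the
 * degenerate case: no grouping expressions at all and exactly one (empty)
 * grouping set, which yields a single row.  Anything else either repeats
 * the empty set or groups by differing column subsets, so rows may repeat.
 */
static bool
distinct_under_grouping_sets(int ngroupcols, int nsets, bool first_set_is_empty)
{
	if (ngroupcols > 0)
		return false;			/* grouping expressions involved: punt */
	return nsets == 1 && first_set_is_empty;
}
```

For example, `GROUP BY GROUPING SETS ((),())` emits two rows for the empty grouping, so it is not distinct even though no columns are grouped.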
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index 4b641a2..1a47f0f 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -1015,6 +1015,7 @@ create_unique_plan(PlannerInfo *root, UniquePath *best_path)
 								 numGroupCols,
 								 groupColIdx,
 								 groupOperators,
+								 NIL,
 								 numGroups,
 								 subplan);
 	}
@@ -4265,6 +4266,7 @@ Agg *
 make_agg(PlannerInfo *root, List *tlist, List *qual,
 		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
 		 int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators,
+		 List *groupingSets,
 		 long numGroups,
 		 Plan *lefttree)
 {
@@ -4294,10 +4296,12 @@ make_agg(PlannerInfo *root, List *tlist, List *qual,
 	 * group otherwise.
 	 */
 	if (aggstrategy == AGG_PLAIN)
-		plan->plan_rows = 1;
+		plan->plan_rows = groupingSets ? list_length(groupingSets) : 1;
 	else
 		plan->plan_rows = numGroups;
 
+	node->groupingSets = groupingSets;
+
 	/*
 	 * We also need to account for the cost of evaluation of the qual (ie, the
 	 * HAVING clause) and the tlist.  Note that cost_qual_eval doesn't charge
diff --git a/src/backend/optimizer/plan/planagg.c b/src/backend/optimizer/plan/planagg.c
index 94ca92d..296b789 100644
--- a/src/backend/optimizer/plan/planagg.c
+++ b/src/backend/optimizer/plan/planagg.c
@@ -96,7 +96,7 @@ preprocess_minmax_aggregates(PlannerInfo *root, List *tlist)
 	 * performs assorted processing related to these features between calling
 	 * preprocess_minmax_aggregates and optimize_minmax_aggregates.)
 	 */
-	if (parse->groupClause || parse->hasWindowFuncs)
+	if (parse->groupClause || list_length(parse->groupingSets) > 1 || parse->hasWindowFuncs)
 		return;
 
 	/*
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index e1480cd..f53cc0a 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -22,6 +22,7 @@
 #include "executor/nodeAgg.h"
 #include "miscadmin.h"
 #include "nodes/makefuncs.h"
+#include "nodes/nodeFuncs.h"
 #ifdef OPTIMIZER_DEBUG
 #include "nodes/print.h"
 #endif
@@ -37,6 +38,7 @@
 #include "optimizer/tlist.h"
 #include "parser/analyze.h"
 #include "parser/parsetree.h"
+#include "parser/parse_agg.h"
 #include "rewrite/rewriteManip.h"
 #include "utils/rel.h"
 #include "utils/selfuncs.h"
@@ -77,7 +79,8 @@ static double preprocess_limit(PlannerInfo *root,
 				 double tuple_fraction,
 				 int64 *offset_est, int64 *count_est);
 static bool limit_needed(Query *parse);
-static void preprocess_groupclause(PlannerInfo *root);
+static List *preprocess_groupclause(PlannerInfo *root, List *force);
+static List *extract_rollup_sets(List *groupingSets, List *sortclause, List **remainder);
 static void standard_qp_callback(PlannerInfo *root, void *extra);
 static bool choose_hashed_grouping(PlannerInfo *root,
 					   double tuple_fraction, double limit_tuples,
@@ -315,6 +318,8 @@ subquery_planner(PlannerGlobal *glob, Query *parse,
 	root->append_rel_list = NIL;
 	root->rowMarks = NIL;
 	root->hasInheritedTarget = false;
+	root->groupColIdx = NULL;
+	root->grouping_map = NULL;
 
 	root->hasRecursion = hasRecursion;
 	if (hasRecursion)
@@ -531,7 +536,8 @@ subquery_planner(PlannerGlobal *glob, Query *parse,
 
 		if (contain_agg_clause(havingclause) ||
 			contain_volatile_functions(havingclause) ||
-			contain_subplans(havingclause))
+			contain_subplans(havingclause) ||
+			parse->groupingSets)
 		{
 			/* keep it in HAVING */
 			newHaving = lappend(newHaving, havingclause);
@@ -1187,15 +1193,77 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		bool		use_hashed_grouping = false;
 		WindowFuncLists *wflists = NULL;
 		List	   *activeWindows = NIL;
+		int			maxref = 0;
+		int		   *refmap = NULL;
 
 		MemSet(&agg_costs, 0, sizeof(AggClauseCosts));
 
 		/* A recursive query should always have setOperations */
 		Assert(!root->hasRecursion);
 
-		/* Preprocess GROUP BY clause, if any */
-		if (parse->groupClause)
-			preprocess_groupclause(root);
+		/* Preprocess grouping sets, if any */
+		if (parse->groupingSets)
+			parse->groupingSets = expand_grouping_sets(parse->groupingSets, -1);
+
+		if (parse->groupingSets)
+		{
+			ListCell   *lc;
+			ListCell   *lc2;
+			int			ref = 0;
+			List	   *remaining_sets = NIL;
+			List	   *usable_sets = extract_rollup_sets(parse->groupingSets,
+														  parse->sortClause,
+														  &remaining_sets);
+
+			/*
+			 * TODO - if the grouping set list can't be handled as one rollup...
+			 */
+
+			if (remaining_sets != NIL)
+				elog(ERROR, "not implemented yet");
+
+			parse->groupingSets = usable_sets;
+
+			if (parse->groupClause)
+				preprocess_groupclause(root, linitial(parse->groupingSets));
+
+			/*
+			 * Now that we've pinned down an order for the groupClause for this
+			 * list of grouping sets, remap the entries in the grouping sets
+			 * from sortgrouprefs to plain indices into the groupClause.
+			 */
+
+			foreach(lc, parse->groupClause)
+			{
+				SortGroupClause *gc = lfirst(lc);
+				if (gc->tleSortGroupRef > maxref)
+					maxref = gc->tleSortGroupRef;
+			}
+
+			refmap = palloc0(sizeof(int) * (maxref + 1));
+
+			foreach(lc, parse->groupClause)
+			{
+				SortGroupClause *gc = lfirst(lc);
+				refmap[gc->tleSortGroupRef] = ++ref;
+			}
+
+			foreach(lc, usable_sets)
+			{
+				foreach(lc2, (List *) lfirst(lc))
+				{
+					Assert(refmap[lfirst_int(lc2)] > 0);
+					lfirst_int(lc2) = refmap[lfirst_int(lc2)] - 1;
+				}
+			}
+		}
+		else
+		{
+			/* Preprocess GROUP BY clause, if any */
+			if (parse->groupClause)
+				preprocess_groupclause(root, NIL);
+		}
+
 		numGroupCols = list_length(parse->groupClause);
 
 		/* Preprocess targetlist */
@@ -1257,6 +1325,9 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 			preprocess_minmax_aggregates(root, tlist);
 		}
 
+		if (refmap)
+			pfree(refmap);
+
 		/* Make tuple_fraction accessible to lower-level routines */
 		root->tuple_fraction = tuple_fraction;
 
@@ -1267,6 +1338,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		 * grouping/aggregation operations.
 		 */
 		if (parse->groupClause ||
+			parse->groupingSets ||
 			parse->distinctClause ||
 			parse->hasAggs ||
 			parse->hasWindowFuncs ||
@@ -1312,7 +1384,23 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 
 			groupExprs = get_sortgrouplist_exprs(parse->groupClause,
 												 parse->targetList);
-			dNumGroups = estimate_num_groups(root, groupExprs, path_rows);
+			if (parse->groupingSets)
+			{
+				ListCell   *lc;
+
+				dNumGroups = 0;
+
+				foreach(lc, parse->groupingSets)
+				{
+					dNumGroups += estimate_num_groups(root,
+													  groupExprs,
+													  path_rows,
+													  (List **) &(lfirst(lc)));
+				}
+			}
+			else
+				dNumGroups = estimate_num_groups(root, groupExprs, path_rows,
+												 NULL);
 
 			/*
 			 * In GROUP BY mode, an absolute LIMIT is relative to the number
@@ -1338,7 +1426,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 									   root->group_pathkeys))
 				tuple_fraction = 0.0;
 		}
-		else if (parse->hasAggs || root->hasHavingQual)
+		else if (parse->hasAggs || root->hasHavingQual || parse->groupingSets)
 		{
 			/*
 			 * Ungrouped aggregate will certainly want to read all the tuples,
@@ -1360,7 +1448,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 
 			distinctExprs = get_sortgrouplist_exprs(parse->distinctClause,
 													parse->targetList);
-			dNumGroups = estimate_num_groups(root, distinctExprs, path_rows);
+			dNumGroups = estimate_num_groups(root, distinctExprs, path_rows, NULL);
 
 			/*
 			 * Adjust tuple_fraction the same way as for GROUP BY, too.
@@ -1443,13 +1531,24 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		{
 			/*
 			 * If grouping, decide whether to use sorted or hashed grouping.
+			 * If grouping sets are present, we can currently do only sorted
+			 * grouping.
 			 */
-			use_hashed_grouping =
-				choose_hashed_grouping(root,
-									   tuple_fraction, limit_tuples,
-									   path_rows, path_width,
-									   cheapest_path, sorted_path,
-									   dNumGroups, &agg_costs);
+
+			if (parse->groupingSets)
+			{
+				use_hashed_grouping = false;
+			}
+			else
+			{
+				use_hashed_grouping =
+					choose_hashed_grouping(root,
+										   tuple_fraction, limit_tuples,
+										   path_rows, path_width,
+										   cheapest_path, sorted_path,
+										   dNumGroups, &agg_costs);
+			}
+
 			/* Also convert # groups to long int --- but 'ware overflow! */
 			numGroups = (long) Min(dNumGroups, (double) LONG_MAX);
 		}
@@ -1591,12 +1690,13 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 												numGroupCols,
 												groupColIdx,
 									extract_grouping_ops(parse->groupClause),
+												NIL,
 												numGroups,
 												result_plan);
 				/* Hashed aggregation produces randomly-ordered results */
 				current_pathkeys = NIL;
 			}
-			else if (parse->hasAggs)
+			else if (parse->hasAggs || (parse->groupingSets && parse->groupClause))
 			{
 				/* Plain aggregate plan --- sort if needed */
 				AggStrategy aggstrategy;
@@ -1622,7 +1722,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 				else
 				{
 					aggstrategy = AGG_PLAIN;
-					/* Result will be only one row anyway; no sort order */
+					/* Result will have no sort order */
 					current_pathkeys = NIL;
 				}
 
@@ -1634,6 +1734,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 												numGroupCols,
 												groupColIdx,
 									extract_grouping_ops(parse->groupClause),
+												parse->groupingSets,
 												numGroups,
 												result_plan);
 			}
@@ -1666,27 +1767,66 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 												  result_plan);
 				/* The Group node won't change sort ordering */
 			}
-			else if (root->hasHavingQual)
+			else if (root->hasHavingQual || parse->groupingSets)
 			{
+				int		nrows = list_length(parse->groupingSets);
+
 				/*
-				 * No aggregates, and no GROUP BY, but we have a HAVING qual.
+				 * No aggregates, and no GROUP BY, but we have a HAVING qual or
+				 * grouping sets (which by elimination of cases above must
+				 * consist solely of empty grouping sets, since otherwise
+				 * groupClause will be non-empty).
+				 *
 				 * This is a degenerate case in which we are supposed to emit
-				 * either 0 or 1 row depending on whether HAVING succeeds.
-				 * Furthermore, there cannot be any variables in either HAVING
-				 * or the targetlist, so we actually do not need the FROM
-				 * table at all!  We can just throw away the plan-so-far and
-				 * generate a Result node.  This is a sufficiently unusual
-				 * corner case that it's not worth contorting the structure of
-				 * this routine to avoid having to generate the plan in the
-				 * first place.
+				 * either 0 or 1 row for each grouping set depending on whether
+				 * HAVING succeeds.  Furthermore, there cannot be any variables
+				 * in either HAVING or the targetlist, so we actually do not
+				 * need the FROM table at all!  We can just throw away the
+				 * plan-so-far and generate a Result node.  This is a
+				 * sufficiently unusual corner case that it's not worth
+				 * contorting the structure of this routine to avoid having to
+				 * generate the plan in the first place.
 				 */
 				result_plan = (Plan *) make_result(root,
 												   tlist,
 												   parse->havingQual,
 												   NULL);
+
+				/*
+				 * Doesn't seem worthwhile writing code to cons up a
+				 * generate_series or a values scan to emit multiple rows.
+				 * Instead just clone the result in an Append.
+				 */
+				if (nrows > 1)
+				{
+					List   *plans = list_make1(result_plan);
+
+					while (--nrows > 0)
+						plans = lappend(plans, copyObject(result_plan));
+
+					result_plan = (Plan *) make_append(plans, tlist);
+				}
 			}
 		}						/* end of non-minmax-aggregate case */
 
+		/* Record grouping_map based on final groupColIdx, for setrefs */
+
+		if (parse->groupingSets)
+		{
+			AttrNumber *grouping_map = palloc0(sizeof(AttrNumber) * (maxref + 1));
+			ListCell   *lc;
+			int			i = 0;
+
+			foreach(lc, parse->groupClause)
+			{
+				SortGroupClause *gc = lfirst(lc);
+				grouping_map[gc->tleSortGroupRef] = groupColIdx[i++];
+			}
+
+			root->groupColIdx = groupColIdx;
+			root->grouping_map = grouping_map;
+		}
+
 		/*
 		 * Since each window function could require a different sort order, we
 		 * stack up a WindowAgg node for each window, with sort steps between
@@ -1849,7 +1989,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		 * result was already mostly unique).  If not, use the number of
 		 * distinct-groups calculated previously.
 		 */
-		if (parse->groupClause || root->hasHavingQual || parse->hasAggs)
+		if (parse->groupClause || parse->groupingSets || root->hasHavingQual || parse->hasAggs)
 			dNumDistinctRows = result_plan->plan_rows;
 		else
 			dNumDistinctRows = dNumGroups;
@@ -1890,6 +2030,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 								 extract_grouping_cols(parse->distinctClause,
 													result_plan->targetlist),
 								 extract_grouping_ops(parse->distinctClause),
+											NIL,
 											numDistinctRows,
 											result_plan);
 			/* Hashed aggregation produces randomly-ordered results */
@@ -2508,6 +2649,7 @@ limit_needed(Query *parse)
 }
 
 
+
 /*
  * preprocess_groupclause - do preparatory work on GROUP BY clause
  *
@@ -2524,18 +2666,32 @@ limit_needed(Query *parse)
  * Note: we need no comparable processing of the distinctClause because
  * the parser already enforced that that matches ORDER BY.
  */
-static void
-preprocess_groupclause(PlannerInfo *root)
+static List *
+preprocess_groupclause(PlannerInfo *root, List *force)
 {
 	Query	   *parse = root->parse;
-	List	   *new_groupclause;
+	List	   *new_groupclause = NIL;
 	bool		partial_match;
 	ListCell   *sl;
 	ListCell   *gl;
 
+	/* For grouping sets, we may need to force the ordering */
+	if (force)
+	{
+		foreach(sl, force)
+		{
+			Index ref = lfirst_int(sl);
+			SortGroupClause *cl = get_sortgroupref_clause(ref, parse->groupClause);
+
+			new_groupclause = lappend(new_groupclause, cl);
+		}
+
+		return new_groupclause;
+	}
+
 	/* If no ORDER BY, nothing useful to do here */
 	if (parse->sortClause == NIL)
-		return;
+		return parse->groupClause;
 
 	/*
 	 * Scan the ORDER BY clause and construct a list of matching GROUP BY
@@ -2543,7 +2699,6 @@ preprocess_groupclause(PlannerInfo *root)
 	 *
 	 * This code assumes that the sortClause contains no duplicate items.
 	 */
-	new_groupclause = NIL;
 	foreach(sl, parse->sortClause)
 	{
 		SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
@@ -2567,7 +2722,7 @@ preprocess_groupclause(PlannerInfo *root)
 
 	/* If no match at all, no point in reordering GROUP BY */
 	if (new_groupclause == NIL)
-		return;
+		return parse->groupClause;
 
 	/*
 	 * Add any remaining GROUP BY items to the new list, but only if we were
@@ -2584,15 +2739,113 @@ preprocess_groupclause(PlannerInfo *root)
 		if (list_member_ptr(new_groupclause, gc))
 			continue;			/* it matched an ORDER BY item */
 		if (partial_match)
-			return;				/* give up, no common sort possible */
+			return parse->groupClause;	/* give up, no common sort possible */
 		if (!OidIsValid(gc->sortop))
-			return;				/* give up, GROUP BY can't be sorted */
+			return parse->groupClause;	/* give up, GROUP BY can't be sorted */
 		new_groupclause = lappend(new_groupclause, gc);
 	}
 
 	/* Success --- install the rearranged GROUP BY list */
 	Assert(list_length(parse->groupClause) == list_length(new_groupclause));
-	parse->groupClause = new_groupclause;
+	return new_groupclause;
+}
+
+
+/*
+ * Extract a list of grouping sets that can be implemented using a single
+ * rollup-type aggregate pass. The order of elements in each returned set is
+ * modified to ensure proper prefix relationships; the sets are returned in
+ * decreasing order of size. (The input must also be in descending order of
+ * size.)
+ *
+ * If we're passed in a sortclause, we follow its order of columns to the
+ * extent possible, to minimize the chance that we add unnecessary sorts.
+ *
+ * Sets that can't be accommodated within a rollup that includes the first
+ * (and therefore largest) grouping set in the input are added to the
+ * remainder list.
+ */
+
+static List *
+extract_rollup_sets(List *groupingSets, List *sortclause, List **remainder)
+{
+	ListCell   *lc;
+	ListCell   *lc2;
+	List	   *previous = linitial(groupingSets);
+	List	   *tmp_result = list_make1(previous);
+	List	   *result = NIL;
+
+	for_each_cell(lc, lnext(list_head(groupingSets)))
+	{
+		List   *candidate = lfirst(lc);
+		bool	ok = true;
+
+		foreach(lc2, candidate)
+		{
+			int ref = lfirst_int(lc2);
+			if (!list_member_int(previous, ref))
+			{
+				ok = false;
+				break;
+			}
+		}
+
+		if (ok)
+		{
+			tmp_result = lcons(candidate, tmp_result);
+			previous = candidate;
+		}
+		else
+			*remainder = lappend(*remainder, candidate);
+	}
+
+	/*
+	 * Reorder the list elements so that shorter sets are strict
+	 * prefixes of longer ones, and if we ever have a choice, try
+	 * to follow the sortclause if there is one. (We're trying
+	 * here to ensure that GROUPING SETS ((a,b),(b)) ORDER BY b,a
+	 * gets implemented in one pass.)
+	 */
+
+	previous = NIL;
+
+	foreach(lc, tmp_result)
+	{
+		List   *candidate = lfirst(lc);
+		List   *new_elems = list_difference_int(candidate, previous);
+
+		if (list_length(new_elems) > 0)
+		{
+			while (list_length(sortclause) > list_length(previous))
+			{
+				SortGroupClause *sc = list_nth(sortclause, list_length(previous));
+				int ref = sc->tleSortGroupRef;
+				if (list_member_int(new_elems, ref))
+				{
+					previous = lappend_int(previous, ref);
+					new_elems = list_delete_int(new_elems, ref);
+				}
+				else
+				{
+					sortclause = NIL;
+					break;
+				}
+			}
+
+			foreach(lc2, new_elems)
+			{
+				previous = lappend_int(previous, lfirst_int(lc2));
+			}
+		}
+
+		result = lcons(list_copy(previous), result);
+		list_free(new_elems);
+	}
+
+	list_free(previous);
+	list_free(tmp_result);
+
+	return result;
 }
 
 /*
@@ -3040,7 +3293,7 @@ make_subplanTargetList(PlannerInfo *root,
 	 * If we're not grouping or aggregating, there's nothing to do here;
 	 * query_planner should receive the unmodified target list.
 	 */
-	if (!parse->hasAggs && !parse->groupClause && !root->hasHavingQual &&
+	if (!parse->hasAggs && !parse->groupClause && !parse->groupingSets && !root->hasHavingQual &&
 		!parse->hasWindowFuncs)
 	{
 		*need_tlist_eval = true;
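The first loop of extract_rollup_sets above is a greedy subset chain: a set joins the rollup if all its refs appear in the previously accepted (larger) set. This can be sketched without the PostgreSQL List machinery; `GSet` and `extract_rollup` are illustrative stand-ins using fixed-size int arrays, and the sortclause-driven reordering refinement is omitted:

```c
#include <assert.h>
#include <stdbool.h>

/* A grouping set: n column refs (order is irrelevant for the subset test). */
typedef struct { int n; int refs[8]; } GSet;

static bool
contains(const GSet *s, int ref)
{
	for (int i = 0; i < s->n; i++)
		if (s->refs[i] == ref)
			return true;
	return false;
}

/* true iff every ref of "sub" appears in "super" */
static bool
is_subset(const GSet *sub, const GSet *super)
{
	for (int i = 0; i < sub->n; i++)
		if (!contains(super, sub->refs[i]))
			return false;
	return true;
}

/*
 * Greedy pass over sets (given in decreasing size order): a set joins the
 * rollup chain if it is a subset of the previously accepted set; everything
 * else lands in "remainder" for a later pass.  Returns the accepted count.
 */
static int
extract_rollup(const GSet *sets, int nsets,
			   int *accepted, int *remainder, int *nrem)
{
	int			nacc = 0;
	const GSet *prev = &sets[0];

	*nrem = 0;
	accepted[nacc++] = 0;		/* largest set always anchors the rollup */
	for (int i = 1; i < nsets; i++)
	{
		if (is_subset(&sets[i], prev))
		{
			accepted[nacc++] = i;
			prev = &sets[i];
		}
		else
			remainder[(*nrem)++] = i;
	}
	return nacc;
}
```

So ROLLUP(a,b), expanded to {a,b},{a},{}, chains into one sorted pass, while GROUPING SETS ((a,b),(a,c)) leaves (a,c) in the remainder, which this phase of the patch still rejects with "not implemented yet".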
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index 4d717df..346c84d 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -68,6 +68,12 @@ typedef struct
 	int			rtoffset;
 } fix_upper_expr_context;
 
+typedef struct
+{
+	PlannerInfo *root;
+	Bitmapset   *groupedcols;
+} set_group_vars_context;
+
 /*
  * Check if a Const node is a regclass value.  We accept plain OID too,
  * since a regclass Const will get folded to that type if it's an argument
@@ -134,6 +140,8 @@ static List *set_returning_clause_references(PlannerInfo *root,
 static bool fix_opfuncids_walker(Node *node, void *context);
 static bool extract_query_dependencies_walker(Node *node,
 								  PlannerInfo *context);
+static void set_group_vars(PlannerInfo *root, Agg *agg);
+static Node *set_group_vars_mutator(Node *node, set_group_vars_context *context);
 
 
 /*****************************************************************************
@@ -647,6 +655,9 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
 			}
 			break;
 		case T_Agg:
+			set_upper_references(root, plan, rtoffset);
+			set_group_vars(root, (Agg *) plan);
+			break;
 		case T_Group:
 			set_upper_references(root, plan, rtoffset);
 			break;
@@ -1119,6 +1130,31 @@ fix_expr_common(PlannerInfo *root, Node *node)
 				lappend_oid(root->glob->relationOids,
 							DatumGetObjectId(con->constvalue));
 	}
+	else if (IsA(node, Grouping))
+	{
+		Grouping   *g = (Grouping *) node;
+		AttrNumber *refmap = root->grouping_map;
+
+		/* If there are no grouping sets, we don't need this. */
+
+		Assert(refmap || g->cols == NIL);
+
+		if (refmap)
+		{
+			ListCell   *lc;
+			List	   *cols = NIL;
+
+			foreach(lc, g->refs)
+			{
+				cols = lappend_int(cols, refmap[lfirst_int(lc)]);
+			}
+
+			Assert(!g->cols || equal(cols, g->cols));
+
+			if (!g->cols)
+				g->cols = cols;
+		}
+	}
 }
 
 /*
@@ -1246,6 +1282,67 @@ fix_scan_expr_walker(Node *node, fix_scan_expr_context *context)
 								  (void *) context);
 }
 
+
+/*
+ * set_group_vars
+ *    Modify any Var references in the target list of a non-trivial
+ *    (i.e. contains grouping sets) Agg node to use GroupedVar instead,
+ *    which will conditionally replace them with nulls at runtime.
+ */
+static void
+set_group_vars(PlannerInfo *root, Agg *agg)
+{
+	set_group_vars_context context;
+	int i;
+	Bitmapset *cols = NULL;
+
+	if (!agg->groupingSets)
+		return;
+
+	context.root = root;
+
+	for (i = 0; i < agg->numCols; ++i)
+		cols = bms_add_member(cols, agg->grpColIdx[i]);
+
+	context.groupedcols = cols;
+
+	agg->plan.targetlist = (List *) set_group_vars_mutator((Node *) agg->plan.targetlist,
+														   &context);
+	agg->plan.qual = (List *) set_group_vars_mutator((Node *) agg->plan.qual,
+													 &context);
+}
+
+static Node *
+set_group_vars_mutator(Node *node, set_group_vars_context *context)
+{
+	if (node == NULL)
+		return NULL;
+	if (IsA(node, Var))
+	{
+		Var *var = (Var *) node;
+
+		if (var->varno == OUTER_VAR
+			&& bms_is_member(var->varattno, context->groupedcols))
+		{
+			var = copyVar(var);
+			var->xpr.type = T_GroupedVar;
+		}
+
+		return (Node *) var;
+	}
+	else if (IsA(node, Aggref) || IsA(node, Grouping))
+	{
+		/*
+		 * Don't recurse into Aggref or Grouping nodes, since they see
+		 * the values prior to grouping.
+		 */
+		return node;
+	}
+	return expression_tree_mutator(node, set_group_vars_mutator,
+								   (void *) context);
+}
+
+
 /*
  * set_join_references
  *	  Modify the target list and quals of a join node to reference its
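The retagging that set_group_vars performs can be modeled with a plain bitmask standing in for the Bitmapset of grouped columns; the `Tle` struct and tag constants below are illustrative stand-ins for the real node types, and this sketch has no aggregates to skip:

```c
#include <assert.h>
#include <stdint.h>

enum { T_VAR = 1, T_GROUPEDVAR = 2 };

typedef struct { int tag; int attno; } Tle;

/*
 * Mimic set_group_vars: any targetlist Var whose column number is one of
 * the grouping columns becomes a GroupedVar, so that the executor can
 * substitute NULL for it in grouping sets where that column is not
 * currently being grouped.
 */
static void
retag_grouped(Tle *tlist, int ntle, uint64_t groupedcols)
{
	for (int i = 0; i < ntle; i++)
		if (tlist[i].tag == T_VAR &&
			(groupedcols & (UINT64_C(1) << tlist[i].attno)))
			tlist[i].tag = T_GROUPEDVAR;
}
```

Columns outside the grouping-column set keep their plain Var tag, which matches the real code's behavior of leaving non-grouped (aggregate-input) references alone.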
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index 3e7dc85..e0a2ca7 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -336,6 +336,48 @@ replace_outer_agg(PlannerInfo *root, Aggref *agg)
 }
 
 /*
+ * Generate a Param node to replace the given Grouping expression
+ * which is expected to have agglevelsup > 0 (ie, it is not local).
+ */
+static Param *
+replace_outer_grouping(PlannerInfo *root, Grouping *grp)
+{
+	Param	   *retval;
+	PlannerParamItem *pitem;
+	Index		levelsup;
+
+	Assert(grp->agglevelsup > 0 && grp->agglevelsup < root->query_level);
+
+	/* Find the query level the Grouping belongs to */
+	for (levelsup = grp->agglevelsup; levelsup > 0; levelsup--)
+		root = root->parent_root;
+
+	/*
+	 * It does not seem worthwhile to try to match duplicate outer GROUPING
+	 * expressions. Just make a new slot every time.
+	 */
+	grp = (Grouping *) copyObject(grp);
+	IncrementVarSublevelsUp((Node *) grp, -((int) grp->agglevelsup), 0);
+	Assert(grp->agglevelsup == 0);
+
+	pitem = makeNode(PlannerParamItem);
+	pitem->item = (Node *) grp;
+	pitem->paramId = root->glob->nParamExec++;
+
+	root->plan_params = lappend(root->plan_params, pitem);
+
+	retval = makeNode(Param);
+	retval->paramkind = PARAM_EXEC;
+	retval->paramid = pitem->paramId;
+	retval->paramtype = exprType((Node *) grp);
+	retval->paramtypmod = -1;
+	retval->paramcollid = InvalidOid;
+	retval->location = grp->location;
+
+	return retval;
+}
+
+/*
  * Generate a new Param node that will not conflict with any other.
  *
  * This is used to create Params representing subplan outputs.
@@ -1490,13 +1532,14 @@ simplify_EXISTS_query(Query *query)
 {
 	/*
 	 * We don't try to simplify at all if the query uses set operations,
-	 * aggregates, modifying CTEs, HAVING, LIMIT/OFFSET, or FOR UPDATE/SHARE;
-	 * none of these seem likely in normal usage and their possible effects
-	 * are complex.
+	 * aggregates, grouping sets, modifying CTEs, HAVING, LIMIT/OFFSET, or FOR
+	 * UPDATE/SHARE; none of these seem likely in normal usage and their
+	 * possible effects are complex.
 	 */
 	if (query->commandType != CMD_SELECT ||
 		query->setOperations ||
 		query->hasAggs ||
+		query->groupingSets ||
 		query->hasWindowFuncs ||
 		query->hasModifyingCTE ||
 		query->havingQual ||
@@ -1813,6 +1856,11 @@ replace_correlation_vars_mutator(Node *node, PlannerInfo *root)
 		if (((Aggref *) node)->agglevelsup > 0)
 			return (Node *) replace_outer_agg(root, (Aggref *) node);
 	}
+	if (IsA(node, Grouping))
+	{
+		if (((Grouping *) node)->agglevelsup > 0)
+			return (Node *) replace_outer_grouping(root, (Grouping *) node);
+	}
 	return expression_tree_mutator(node,
 								   replace_correlation_vars_mutator,
 								   (void *) root);
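
[Reviewer aside: the net runtime effect of the set_group_vars/GroupedVar
machinery earlier in this patch is that a grouped column reads as NULL for any
grouping set that omits it. A rough sketch of that behavior in Python — the
names here are illustrative only, not functions from the patch:]

```python
def project_row(row, grouped_cols, current_set):
    """Mimic GroupedVar evaluation: a column that is grouped somewhere
    in the query but absent from the grouping set currently being
    emitted reads as NULL (None here)."""
    return {col: (row[col] if col in current_set else None)
            for col in grouped_cols}
```

[So for GROUP BY GROUPING SETS ((a), (b)), the rows emitted for the (a) set
carry b = NULL, and vice versa.]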
diff --git a/src/backend/optimizer/prep/prepjointree.c b/src/backend/optimizer/prep/prepjointree.c
index 9cb1378..cb8aeb6 100644
--- a/src/backend/optimizer/prep/prepjointree.c
+++ b/src/backend/optimizer/prep/prepjointree.c
@@ -1297,6 +1297,7 @@ is_simple_subquery(Query *subquery, RangeTblEntry *rte,
 	if (subquery->hasAggs ||
 		subquery->hasWindowFuncs ||
 		subquery->groupClause ||
+		subquery->groupingSets ||
 		subquery->havingQual ||
 		subquery->sortClause ||
 		subquery->distinctClause ||
diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c
index 0410fdd..3c71d7f 100644
--- a/src/backend/optimizer/prep/prepunion.c
+++ b/src/backend/optimizer/prep/prepunion.c
@@ -268,13 +268,15 @@ recurse_set_operations(Node *setOp, PlannerInfo *root,
 		 */
 		if (pNumGroups)
 		{
-			if (subquery->groupClause || subquery->distinctClause ||
+			if (subquery->groupClause || subquery->groupingSets ||
+				subquery->distinctClause ||
 				subroot->hasHavingQual || subquery->hasAggs)
 				*pNumGroups = subplan->plan_rows;
 			else
 				*pNumGroups = estimate_num_groups(subroot,
 								get_tlist_exprs(subquery->targetList, false),
-												  subplan->plan_rows);
+												  subplan->plan_rows,
+												  NULL);
 		}
 
 		/*
@@ -771,6 +773,7 @@ make_union_unique(SetOperationStmt *op, Plan *plan,
 								 extract_grouping_cols(groupList,
 													   plan->targetlist),
 								 extract_grouping_ops(groupList),
+								 NIL,
 								 numGroups,
 								 plan);
 		/* Hashed aggregation produces randomly-ordered results */
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 19b5cf7..1152195 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -4294,6 +4294,7 @@ inline_function(Oid funcid, Oid result_type, Oid result_collid,
 		querytree->jointree->fromlist ||
 		querytree->jointree->quals ||
 		querytree->groupClause ||
+		querytree->groupingSets ||
 		querytree->havingQual ||
 		querytree->windowClause ||
 		querytree->distinctClause ||
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index 319e8b2..a7bbacf 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1338,7 +1338,7 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
 	}
 
 	/* Estimate number of output rows */
-	pathnode->path.rows = estimate_num_groups(root, uniq_exprs, rel->rows);
+	pathnode->path.rows = estimate_num_groups(root, uniq_exprs, rel->rows, NULL);
 	numCols = list_length(uniq_exprs);
 
 	if (all_btree)
diff --git a/src/backend/optimizer/util/tlist.c b/src/backend/optimizer/util/tlist.c
index b5c6a44..efed20a 100644
--- a/src/backend/optimizer/util/tlist.c
+++ b/src/backend/optimizer/util/tlist.c
@@ -395,6 +395,28 @@ get_sortgrouplist_exprs(List *sgClauses, List *targetList)
  *****************************************************************************/
 
 /*
+ * get_sortgroupref_clause
+ *		Find the SortGroupClause matching the given SortGroupRef index,
+ *		and return it.
+ */
+SortGroupClause *
+get_sortgroupref_clause(Index sortref, List *clauses)
+{
+	ListCell   *l;
+
+	foreach(l, clauses)
+	{
+		SortGroupClause *cl = (SortGroupClause *) lfirst(l);
+
+		if (cl->tleSortGroupRef == sortref)
+			return cl;
+	}
+
+	elog(ERROR, "ORDER/GROUP BY expression not found in list");
+	return NULL;				/* keep compiler quiet */
+}
+
+/*
  * extract_grouping_ops - make an array of the equality operator OIDs
  *		for a SortGroupClause list
  */
diff --git a/src/backend/optimizer/util/var.c b/src/backend/optimizer/util/var.c
index d4f46b8..c6faf51 100644
--- a/src/backend/optimizer/util/var.c
+++ b/src/backend/optimizer/util/var.c
@@ -564,6 +564,30 @@ pull_var_clause_walker(Node *node, pull_var_clause_context *context)
 				break;
 		}
 	}
+	else if (IsA(node, Grouping))
+	{
+		if (((Grouping *) node)->agglevelsup != 0)
+			elog(ERROR, "Upper-level GROUPING found where not expected");
+		switch (context->aggbehavior)
+		{
+			case PVC_REJECT_AGGREGATES:
+				elog(ERROR, "GROUPING found where not expected");
+				break;
+			case PVC_INCLUDE_AGGREGATES:
+				context->varlist = lappend(context->varlist, node);
+				/* we do NOT descend into the contained expression */
+				return false;
+			case PVC_RECURSE_AGGREGATES:
+				/*
+				 * we do NOT descend into the contained expression,
+				 * even if the caller asked for it, because we never
+				 * actually evaluate it - the result is driven entirely
+				 * off the associated GROUP BY clause, so we never need
+				 * to extract the actual Vars here.
+				 */
+				return false;
+		}
+	}
 	else if (IsA(node, PlaceHolderVar))
 	{
 		if (((PlaceHolderVar *) node)->phlevelsup != 0)
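
[Reviewer aside: the comment above notes that GROUPING()'s result is driven
entirely off the associated GROUP BY clause, never the argument values.
Concretely, GROUPING(e1, ..., en) returns an integer bitmask with the leftmost
argument in the most significant bit, a bit being 1 when that argument is not
grouped in the current grouping set (hence the fewer-than-32-arguments cap
elsewhere in the patch, so the result fits in an int4). A minimal sketch,
with illustrative names:]

```python
def grouping_value(args, current_set):
    """Sketch of GROUPING(args...): the leftmost argument occupies the
    most significant bit; a bit is 1 when that argument is NOT part of
    the grouping set currently being emitted."""
    val = 0
    for arg in args:
        val = (val << 1) | (0 if arg in current_set else 1)
    return val
```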
diff --git a/src/backend/parser/analyze.c b/src/backend/parser/analyze.c
index fb6c44c..96ef36c 100644
--- a/src/backend/parser/analyze.c
+++ b/src/backend/parser/analyze.c
@@ -968,6 +968,7 @@ transformSelectStmt(ParseState *pstate, SelectStmt *stmt)
 
 	qry->groupClause = transformGroupClause(pstate,
 											stmt->groupClause,
+											&qry->groupingSets,
 											&qry->targetList,
 											qry->sortClause,
 											EXPR_KIND_GROUP_BY,
@@ -1014,7 +1015,7 @@ transformSelectStmt(ParseState *pstate, SelectStmt *stmt)
 	qry->hasSubLinks = pstate->p_hasSubLinks;
 	qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
 	qry->hasAggs = pstate->p_hasAggs;
-	if (pstate->p_hasAggs || qry->groupClause || qry->havingQual)
+	if (pstate->p_hasAggs || qry->groupClause || qry->groupingSets || qry->havingQual)
 		parseCheckAggregates(pstate, qry);
 
 	foreach(l, stmt->lockingClause)
@@ -1474,7 +1475,7 @@ transformSetOperationStmt(ParseState *pstate, SelectStmt *stmt)
 	qry->hasSubLinks = pstate->p_hasSubLinks;
 	qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
 	qry->hasAggs = pstate->p_hasAggs;
-	if (pstate->p_hasAggs || qry->groupClause || qry->havingQual)
+	if (pstate->p_hasAggs || qry->groupClause || qry->groupingSets || qry->havingQual)
 		parseCheckAggregates(pstate, qry);
 
 	foreach(l, lockingClause)
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 327f2d2..b63f2e0 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -361,6 +361,10 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				create_generic_options alter_generic_options
 				relation_expr_list dostmt_opt_list
 
+%type <list>	group_by_list
+%type <node>	group_by_item empty_grouping_set rollup_clause cube_clause
+%type <node>	grouping_sets_clause
+
 %type <list>	opt_fdw_options fdw_options
 %type <defelt>	fdw_option
 
@@ -425,7 +429,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <list>	ExclusionConstraintList ExclusionConstraintElem
 %type <list>	func_arg_list
 %type <node>	func_arg_expr
-%type <list>	row type_list array_expr_list
+%type <list>	row explicit_row implicit_row type_list array_expr_list
 %type <node>	case_expr case_arg when_clause case_default
 %type <list>	when_clause_list
 %type <ival>	sub_type
@@ -547,7 +551,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	CLUSTER COALESCE COLLATE COLLATION COLUMN COMMENT COMMENTS COMMIT
 	COMMITTED CONCURRENTLY CONFIGURATION CONNECTION CONSTRAINT CONSTRAINTS
 	CONTENT_P CONTINUE_P CONVERSION_P COPY COST CREATE
-	CROSS CSV CURRENT_P
+	CROSS CSV CUBE CURRENT_P
 	CURRENT_CATALOG CURRENT_DATE CURRENT_ROLE CURRENT_SCHEMA
 	CURRENT_TIME CURRENT_TIMESTAMP CURRENT_USER CURSOR CYCLE
 
@@ -562,7 +566,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	FALSE_P FAMILY FETCH FILTER FIRST_P FLOAT_P FOLLOWING FOR
 	FORCE FOREIGN FORWARD FREEZE FROM FULL FUNCTION FUNCTIONS
 
-	GLOBAL GRANT GRANTED GREATEST GROUP_P
+	GLOBAL GRANT GRANTED GREATEST GROUP_P GROUPING
 
 	HANDLER HAVING HEADER_P HOLD HOUR_P
 
@@ -596,11 +600,11 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 	RANGE READ REAL REASSIGN RECHECK RECURSIVE REF REFERENCES REFRESH REINDEX
 	RELATIVE_P RELEASE RENAME REPEATABLE REPLACE REPLICA
-	RESET RESTART RESTRICT RETURNING RETURNS REVOKE RIGHT ROLE ROLLBACK
+	RESET RESTART RESTRICT RETURNING RETURNS REVOKE RIGHT ROLE ROLLBACK ROLLUP
 	ROW ROWS RULE
 
 	SAVEPOINT SCHEMA SCROLL SEARCH SECOND_P SECURITY SELECT SEQUENCE SEQUENCES
-	SERIALIZABLE SERVER SESSION SESSION_USER SET SETOF SHARE
+	SERIALIZABLE SERVER SESSION SESSION_USER SET SETS SETOF SHARE
 	SHOW SIMILAR SIMPLE SMALLINT SNAPSHOT SOME STABLE STANDALONE_P START
 	STATEMENT STATISTICS STDIN STDOUT STORAGE STRICT_P STRIP_P SUBSTRING
 	SYMMETRIC SYSID SYSTEM_P
@@ -9817,11 +9821,73 @@ first_or_next: FIRST_P								{ $$ = 0; }
 		;
 
 
+/*
+ * This syntax for group_clause tries to follow the spec quite closely.
+ * However, the spec allows only column references, not expressions,
+ * which introduces an ambiguity between implicit row constructors
+ * (a,b) and lists of column references.
+ *
+ * We handle this by using the a_expr production for what the spec calls
+ * <ordinary grouping set>, which in the spec represents either one column
+ * reference or a parenthesized list of column references. Then, we check the
+ * top node of the a_expr to see if it's an implicit RowExpr, and if so, just
+ * grab and use the list, discarding the node. (This is done in parse analysis,
+ * not here)
+ *
+ * (we abuse the row_format field of RowExpr to distinguish implicit and
+ * explicit row constructors; it's debatable if anyone sanely wants to use them
+ * in a group clause, but if they have a reason to, we make it possible.)
+ *
+ * Each item in the group_clause list is either an expression tree or a
+ * GroupingSet node of some type.
+ */
+
 group_clause:
-			GROUP_P BY expr_list					{ $$ = $3; }
+			GROUP_P BY group_by_list				{ $$ = $3; }
 			| /*EMPTY*/								{ $$ = NIL; }
 		;
 
+group_by_list:
+			group_by_item							{ $$ = list_make1($1); }
+			| group_by_list ',' group_by_item		{ $$ = lappend($1,$3); }
+		;
+
+group_by_item:
+			a_expr									{ $$ = $1; }
+			| empty_grouping_set					{ $$ = $1; }
+			| cube_clause							{ $$ = $1; }
+			| rollup_clause							{ $$ = $1; }
+			| grouping_sets_clause					{ $$ = $1; }
+		;
+
+empty_grouping_set:
+			'(' ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_EMPTY, NIL, @1);
+				}
+		;
+
+rollup_clause:
+			ROLLUP '(' expr_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_ROLLUP, $3, @1);
+				}
+		;
+
+cube_clause:
+			CUBE '(' expr_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_CUBE, $3, @1);
+				}
+		;
+
+grouping_sets_clause:
+			GROUPING SETS '(' group_by_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_SETS, $4, @1);
+				}
+		;
+
 having_clause:
 			HAVING a_expr							{ $$ = $2; }
 			| /*EMPTY*/								{ $$ = NULL; }
@@ -11400,15 +11466,33 @@ c_expr:		columnref								{ $$ = $1; }
 					n->location = @1;
 					$$ = (Node *)n;
 				}
-			| row
+			| explicit_row
 				{
 					RowExpr *r = makeNode(RowExpr);
 					r->args = $1;
 					r->row_typeid = InvalidOid;	/* not analyzed yet */
 					r->colnames = NIL;	/* to be filled in during analysis */
+					r->row_format = COERCE_EXPLICIT_CALL; /* abuse */
 					r->location = @1;
 					$$ = (Node *)r;
 				}
+			| implicit_row
+				{
+					RowExpr *r = makeNode(RowExpr);
+					r->args = $1;
+					r->row_typeid = InvalidOid;	/* not analyzed yet */
+					r->colnames = NIL;	/* to be filled in during analysis */
+					r->row_format = COERCE_IMPLICIT_CAST; /* abuse */
+					r->location = @1;
+					$$ = (Node *)r;
+				}
+			| GROUPING '(' expr_list ')'
+				{
+					Grouping *g = makeNode(Grouping);
+					g->args = $3;
+					g->location = @1;
+					$$ = (Node *)g;
+				}
 		;
 
 func_application: func_name '(' ')'
@@ -12158,6 +12242,13 @@ row:		ROW '(' expr_list ')'					{ $$ = $3; }
 			| '(' expr_list ',' a_expr ')'			{ $$ = lappend($2, $4); }
 		;
 
+explicit_row:	ROW '(' expr_list ')'				{ $$ = $3; }
+			| ROW '(' ')'							{ $$ = NIL; }
+		;
+
+implicit_row:	'(' expr_list ',' a_expr ')'		{ $$ = lappend($2, $4); }
+		;
+
 sub_type:	ANY										{ $$ = ANY_SUBLINK; }
 			| SOME									{ $$ = ANY_SUBLINK; }
 			| ALL									{ $$ = ALL_SUBLINK; }
@@ -13057,6 +13148,7 @@ unreserved_keyword:
 			| SERVER
 			| SESSION
 			| SET
+			| SETS
 			| SHARE
 			| SHOW
 			| SIMPLE
@@ -13133,12 +13225,14 @@ col_name_keyword:
 			| CHAR_P
 			| CHARACTER
 			| COALESCE
+			| CUBE
 			| DEC
 			| DECIMAL_P
 			| EXISTS
 			| EXTRACT
 			| FLOAT_P
 			| GREATEST
+			| GROUPING
 			| INOUT
 			| INT_P
 			| INTEGER
@@ -13154,6 +13248,7 @@ col_name_keyword:
 			| POSITION
 			| PRECISION
 			| REAL
+			| ROLLUP
 			| ROW
 			| SETOF
 			| SMALLINT
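
[Reviewer aside: for reference, the expansion that expand_grouping_sets
performs on the new grammar constructs (and that the 4096-set cap in
parseCheckAggregates guards) follows the spec: ROLLUP(e1..en) produces the
n+1 successively shorter prefixes, CUBE(e1..en) the 2^n subsets. A hedged
Python sketch of just those two cases — expand_rollup/expand_cube are
illustrative names, not the patch's functions:]

```python
from itertools import combinations

def expand_rollup(cols):
    # ROLLUP (a, b) -> (a, b), (a), () : successively shorter prefixes
    return [tuple(cols[:i]) for i in range(len(cols), -1, -1)]

def expand_cube(cols):
    # CUBE (a, b) -> (a, b), (a), (b), () : every subset of the columns
    out = []
    for r in range(len(cols), -1, -1):
        out.extend(combinations(cols, r))
    return out
```

[Note that a CUBE over 12 columns already yields 2^12 = 4096 sets, which is
why the expansion is capped rather than left unbounded.]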
diff --git a/src/backend/parser/parse_agg.c b/src/backend/parser/parse_agg.c
index c984b7d..1c2aca1 100644
--- a/src/backend/parser/parse_agg.c
+++ b/src/backend/parser/parse_agg.c
@@ -42,7 +42,9 @@ typedef struct
 {
 	ParseState *pstate;
 	Query	   *qry;
+	PlannerInfo *root;
 	List	   *groupClauses;
+	List	   *groupClauseCommonVars;
 	bool		have_non_var_grouping;
 	List	  **func_grouped_rels;
 	int			sublevels_up;
@@ -56,11 +58,18 @@ static int check_agg_arguments(ParseState *pstate,
 static bool check_agg_arguments_walker(Node *node,
 						   check_agg_arguments_context *context);
 static void check_ungrouped_columns(Node *node, ParseState *pstate, Query *qry,
-						List *groupClauses, bool have_non_var_grouping,
+						List *groupClauses, List *groupClauseVars,
+						bool have_non_var_grouping,
 						List **func_grouped_rels);
 static bool check_ungrouped_columns_walker(Node *node,
 							   check_ungrouped_columns_context *context);
-
+static void finalize_grouping_exprs(Node *node, ParseState *pstate, Query *qry,
+									List *groupClauses, PlannerInfo *root,
+									bool have_non_var_grouping);
+static bool finalize_grouping_exprs_walker(Node *node,
+							   check_ungrouped_columns_context *context);
+static void check_agglevels_and_constraints(ParseState *pstate, Node *expr);
+static List *expand_groupingset_node(GroupingSet *gs);
 
 /*
  * transformAggregateCall -
@@ -96,10 +105,7 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 	List	   *tdistinct = NIL;
 	AttrNumber	attno = 1;
 	int			save_next_resno;
-	int			min_varlevel;
 	ListCell   *lc;
-	const char *err;
-	bool		errkind;
 
 	if (AGGKIND_IS_ORDERED_SET(agg->aggkind))
 	{
@@ -214,15 +220,96 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 	agg->aggorder = torder;
 	agg->aggdistinct = tdistinct;
 
+	check_agglevels_and_constraints(pstate, (Node *) agg);
+}
+
+/* transformGroupingExpr
+ * Transform a GROUPING expression
+ *
+ * GROUPING() behaves very like an aggregate.  Processing of levels and nesting
+ * is done as for aggregates.  We set p_hasAggs for these expressions too.
+ */
+Node *
+transformGroupingExpr(ParseState *pstate, Grouping *p)
+{
+	ListCell   *lc;
+	List	   *args = p->args;
+	List	   *result_list = NIL;
+	Grouping   *result = makeNode(Grouping);
+
+	if (list_length(args) > 31)
+		ereport(ERROR,
+				(errcode(ERRCODE_TOO_MANY_ARGUMENTS),
+				 errmsg("GROUPING must have fewer than 32 arguments"),
+				 parser_errposition(pstate, p->location)));
+
+	foreach(lc, args)
+	{
+		Node *current_result;
+
+		current_result = transformExpr(pstate, (Node*) lfirst(lc), pstate->p_expr_kind);
+
+		/* acceptability of expressions is checked later */
+
+		result_list = lappend(result_list, current_result);
+	}
+
+	result->args = result_list;
+	result->location = p->location;
+
+	check_agglevels_and_constraints(pstate, (Node *) result);
+
+	return (Node *) result;
+}
+
+/*
+ * Aggregate functions and grouping operations (which are combined in the spec
+ * as <set function specification>) are very similar with regard to level and
+ * nesting restrictions (though we allow a lot more things than the spec does).
+ * Centralise those restrictions here.
+ */
+static void
+check_agglevels_and_constraints(ParseState *pstate, Node *expr)
+{
+	List	   *directargs = NIL;
+	List	   *args = NIL;
+	Expr	   *filter = NULL;
+	int			min_varlevel;
+	int			location = -1;
+	Index	   *p_levelsup;
+	const char *err;
+	bool		errkind;
+	bool		isAgg = IsA(expr, Aggref);
+
+	if (isAgg)
+	{
+		Aggref *agg = (Aggref *) expr;
+
+		directargs = agg->aggdirectargs;
+		args = agg->args;
+		filter = agg->aggfilter;
+		location = agg->location;
+		p_levelsup = &agg->agglevelsup;
+	}
+	else
+	{
+		Grouping *grp = (Grouping *) expr;
+
+		args = grp->args;
+		location = grp->location;
+		p_levelsup = &grp->agglevelsup;
+	}
+
 	/*
 	 * Check the arguments to compute the aggregate's level and detect
 	 * improper nesting.
 	 */
 	min_varlevel = check_agg_arguments(pstate,
-									   agg->aggdirectargs,
-									   agg->args,
-									   agg->aggfilter);
-	agg->agglevelsup = min_varlevel;
+									   directargs,
+									   args,
+									   filter);
+
+	*p_levelsup = min_varlevel;
 
 	/* Mark the correct pstate level as having aggregates */
 	while (min_varlevel-- > 0)
@@ -247,20 +334,32 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			Assert(false);		/* can't happen */
 			break;
 		case EXPR_KIND_OTHER:
-			/* Accept aggregate here; caller must throw error if wanted */
+			/* Accept aggregate/grouping here; caller must throw error if wanted */
 			break;
 		case EXPR_KIND_JOIN_ON:
 		case EXPR_KIND_JOIN_USING:
-			err = _("aggregate functions are not allowed in JOIN conditions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in JOIN conditions");
+			else
+				err = _("grouping operations are not allowed in JOIN conditions");
+
 			break;
 		case EXPR_KIND_FROM_SUBSELECT:
 			/* Should only be possible in a LATERAL subquery */
 			Assert(pstate->p_lateral_active);
-			/* Aggregate scope rules make it worth being explicit here */
-			err = _("aggregate functions are not allowed in FROM clause of their own query level");
+			/* Aggregate/grouping scope rules make it worth being explicit here */
+			if (isAgg)
+				err = _("aggregate functions are not allowed in FROM clause of their own query level");
+			else
+				err = _("grouping operations are not allowed in FROM clause of their own query level");
+
 			break;
 		case EXPR_KIND_FROM_FUNCTION:
-			err = _("aggregate functions are not allowed in functions in FROM");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in functions in FROM");
+			else
+				err = _("grouping operations are not allowed in functions in FROM");
+
 			break;
 		case EXPR_KIND_WHERE:
 			errkind = true;
@@ -278,10 +377,18 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			/* okay */
 			break;
 		case EXPR_KIND_WINDOW_FRAME_RANGE:
-			err = _("aggregate functions are not allowed in window RANGE");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in window RANGE");
+			else
+				err = _("grouping operations are not allowed in window RANGE");
+
 			break;
 		case EXPR_KIND_WINDOW_FRAME_ROWS:
-			err = _("aggregate functions are not allowed in window ROWS");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in window ROWS");
+			else
+				err = _("grouping operations are not allowed in window ROWS");
+
 			break;
 		case EXPR_KIND_SELECT_TARGET:
 			/* okay */
@@ -312,26 +419,55 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			break;
 		case EXPR_KIND_CHECK_CONSTRAINT:
 		case EXPR_KIND_DOMAIN_CHECK:
-			err = _("aggregate functions are not allowed in check constraints");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in check constraints");
+			else
+				err = _("grouping operations are not allowed in check constraints");
+
 			break;
 		case EXPR_KIND_COLUMN_DEFAULT:
 		case EXPR_KIND_FUNCTION_DEFAULT:
-			err = _("aggregate functions are not allowed in DEFAULT expressions");
+
+			if (isAgg)
+				err = _("aggregate functions are not allowed in DEFAULT expressions");
+			else
+				err = _("grouping operations are not allowed in DEFAULT expressions");
+
 			break;
 		case EXPR_KIND_INDEX_EXPRESSION:
-			err = _("aggregate functions are not allowed in index expressions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in index expressions");
+			else
+				err = _("grouping operations are not allowed in index expressions");
+
 			break;
 		case EXPR_KIND_INDEX_PREDICATE:
-			err = _("aggregate functions are not allowed in index predicates");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in index predicates");
+			else
+				err = _("grouping operations are not allowed in index predicates");
+
 			break;
 		case EXPR_KIND_ALTER_COL_TRANSFORM:
-			err = _("aggregate functions are not allowed in transform expressions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in transform expressions");
+			else
+				err = _("grouping operations are not allowed in transform expressions");
+
 			break;
 		case EXPR_KIND_EXECUTE_PARAMETER:
-			err = _("aggregate functions are not allowed in EXECUTE parameters");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in EXECUTE parameters");
+			else
+				err = _("grouping operations are not allowed in EXECUTE parameters");
+
 			break;
 		case EXPR_KIND_TRIGGER_WHEN:
-			err = _("aggregate functions are not allowed in trigger WHEN conditions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in trigger WHEN conditions");
+			else
+				err = _("grouping operations are not allowed in trigger WHEN conditions");
+
 			break;
 
 			/*
@@ -342,18 +478,22 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			 * which is sane anyway.
 			 */
 	}
+
 	if (err)
 		ereport(ERROR,
 				(errcode(ERRCODE_GROUPING_ERROR),
 				 errmsg_internal("%s", err),
-				 parser_errposition(pstate, agg->location)));
+				 parser_errposition(pstate, location)));
+
 	if (errkind)
 		ereport(ERROR,
 				(errcode(ERRCODE_GROUPING_ERROR),
-		/* translator: %s is name of a SQL construct, eg GROUP BY */
-				 errmsg("aggregate functions are not allowed in %s",
+				 /* translator: %s is name of a SQL construct, eg GROUP BY */
+				 errmsg(isAgg
+						? "aggregate functions are not allowed in %s"
+						: "grouping operations are not allowed in %s",
 						ParseExprKindName(pstate->p_expr_kind)),
-				 parser_errposition(pstate, agg->location)));
+				 parser_errposition(pstate, location)));
 }
 
 /*
@@ -507,6 +647,21 @@ check_agg_arguments_walker(Node *node,
 		/* no need to examine args of the inner aggregate */
 		return false;
 	}
+	if (IsA(node, Grouping))
+	{
+		int			agglevelsup = ((Grouping *) node)->agglevelsup;
+
+		/* convert levelsup to frame of reference of original query */
+		agglevelsup -= context->sublevels_up;
+		/* ignore local aggs of subqueries */
+		if (agglevelsup >= 0)
+		{
+			if (context->min_agglevel < 0 ||
+				context->min_agglevel > agglevelsup)
+				context->min_agglevel = agglevelsup;
+		}
+		/* Continue and descend into subtree */
+	}
 	/* We can throw error on sight for a window function */
 	if (IsA(node, WindowFunc))
 		ereport(ERROR,
@@ -527,6 +682,7 @@ check_agg_arguments_walker(Node *node,
 		context->sublevels_up--;
 		return result;
 	}
+
 	return expression_tree_walker(node,
 								  check_agg_arguments_walker,
 								  (void *) context);
@@ -770,17 +926,57 @@ transformWindowFuncCall(ParseState *pstate, WindowFunc *wfunc,
 void
 parseCheckAggregates(ParseState *pstate, Query *qry)
 {
+	List       *gset_common = NIL;
 	List	   *groupClauses = NIL;
+	List	   *groupClauseCommonVars = NIL;
 	bool		have_non_var_grouping;
 	List	   *func_grouped_rels = NIL;
 	ListCell   *l;
 	bool		hasJoinRTEs;
 	bool		hasSelfRefRTEs;
-	PlannerInfo *root;
+	PlannerInfo *root = NULL;
 	Node	   *clause;
 
 	/* This should only be called if we found aggregates or grouping */
-	Assert(pstate->p_hasAggs || qry->groupClause || qry->havingQual);
+	Assert(pstate->p_hasAggs || qry->groupClause || qry->havingQual || qry->groupingSets);
+
+	/*
+	 * If we have grouping sets, expand them and find the intersection of all
+	 * sets.
+	 */
+	if (qry->groupingSets)
+	{
+		/*
+		 * The limit of 4096 is arbitrary and exists simply to avoid resource
+		 * issues from pathological constructs.
+		 */
+		List *gsets = expand_grouping_sets(qry->groupingSets, 4096);
+
+		if (!gsets)
+			ereport(ERROR,
+					(errcode(ERRCODE_STATEMENT_TOO_COMPLEX),
+					 errmsg("too many grouping sets present (maximum 4096)"),
+					 parser_errposition(pstate,
+										qry->groupClause
+										? exprLocation((Node *) qry->groupClause)
+										: exprLocation((Node *) qry->groupingSets))));
+
+		/*
+		 * The intersection will often be empty, so help things along by
+		 * seeding the intersect with the smallest set.
+		 */
+		gset_common = llast(gsets);
+
+		if (gset_common)
+		{
+			foreach(l, gsets)
+			{
+				gset_common = list_intersection_int(gset_common, lfirst(l));
+				if (!gset_common)
+					break;
+			}
+		}
+	}
 
 	/*
 	 * Scan the range table to see if there are JOIN or self-reference CTE
@@ -800,15 +996,19 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 	/*
 	 * Build a list of the acceptable GROUP BY expressions for use by
 	 * check_ungrouped_columns().
+	 *
+	 * We get the TLE, not just the expr, because GROUPING wants to know
+	 * the sortgroupref.
 	 */
 	foreach(l, qry->groupClause)
 	{
 		SortGroupClause *grpcl = (SortGroupClause *) lfirst(l);
-		Node	   *expr;
+		TargetEntry	   *expr;
 
-		expr = get_sortgroupclause_expr(grpcl, qry->targetList);
+		expr = get_sortgroupclause_tle(grpcl, qry->targetList);
 		if (expr == NULL)
 			continue;			/* probably cannot happen */
+
 		groupClauses = lcons(expr, groupClauses);
 	}
 
@@ -830,21 +1030,28 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 		groupClauses = (List *) flatten_join_alias_vars(root,
 													  (Node *) groupClauses);
 	}
-	else
-		root = NULL;			/* keep compiler quiet */
 
 	/*
 	 * Detect whether any of the grouping expressions aren't simple Vars; if
 	 * they're all Vars then we don't have to work so hard in the recursive
 	 * scans.  (Note we have to flatten aliases before this.)
+	 *
+	 * Track Vars that are included in all grouping sets separately in
+	 * groupClauseCommonVars, since these are the only ones we can use to check
+	 * for functional dependencies.
 	 */
 	have_non_var_grouping = false;
 	foreach(l, groupClauses)
 	{
-		if (!IsA((Node *) lfirst(l), Var))
+		TargetEntry *tle = lfirst(l);
+		if (!IsA(tle->expr, Var))
 		{
 			have_non_var_grouping = true;
-			break;
+		}
+		else if (!qry->groupingSets
+				 || list_member_int(gset_common, tle->ressortgroupref))
+		{
+			groupClauseCommonVars = lappend(groupClauseCommonVars, tle->expr);
 		}
 	}
 
@@ -855,19 +1062,30 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 	 * this will also find ungrouped variables that came from ORDER BY and
 	 * WINDOW clauses.  For that matter, it's also going to examine the
 	 * grouping expressions themselves --- but they'll all pass the test ...
+	 *
+	 * We also finalize GROUPING expressions, but for that we need to traverse
+	 * the original (unflattened) clause in order to modify nodes.
 	 */
 	clause = (Node *) qry->targetList;
+	finalize_grouping_exprs(clause, pstate, qry,
+							groupClauses, root,
+							have_non_var_grouping);
 	if (hasJoinRTEs)
 		clause = flatten_join_alias_vars(root, clause);
 	check_ungrouped_columns(clause, pstate, qry,
-							groupClauses, have_non_var_grouping,
+							groupClauses, groupClauseCommonVars,
+							have_non_var_grouping,
 							&func_grouped_rels);
 
 	clause = (Node *) qry->havingQual;
+	finalize_grouping_exprs(clause, pstate, qry,
+							groupClauses, root,
+							have_non_var_grouping);
 	if (hasJoinRTEs)
 		clause = flatten_join_alias_vars(root, clause);
 	check_ungrouped_columns(clause, pstate, qry,
-							groupClauses, have_non_var_grouping,
+							groupClauses, groupClauseCommonVars,
+							have_non_var_grouping,
 							&func_grouped_rels);
 
 	/*
@@ -904,14 +1122,17 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
  */
 static void
 check_ungrouped_columns(Node *node, ParseState *pstate, Query *qry,
-						List *groupClauses, bool have_non_var_grouping,
+						List *groupClauses, List *groupClauseCommonVars,
+						bool have_non_var_grouping,
 						List **func_grouped_rels)
 {
 	check_ungrouped_columns_context context;
 
 	context.pstate = pstate;
 	context.qry = qry;
+	context.root = NULL;
 	context.groupClauses = groupClauses;
+	context.groupClauseCommonVars = groupClauseCommonVars;
 	context.have_non_var_grouping = have_non_var_grouping;
 	context.func_grouped_rels = func_grouped_rels;
 	context.sublevels_up = 0;
@@ -965,6 +1186,16 @@ check_ungrouped_columns_walker(Node *node,
 			return false;
 	}
 
+	if (IsA(node, Grouping))
+	{
+		Grouping *grp = (Grouping *) node;
+
+		/* we handled Grouping separately, no need to recheck at this level. */
+
+		if ((int) grp->agglevelsup >= context->sublevels_up)
+			return false;
+	}
+
 	/*
 	 * If we have any GROUP BY items that are not simple Vars, check to see if
 	 * subexpression as a whole matches any GROUP BY item. We need to do this
@@ -976,7 +1207,9 @@ check_ungrouped_columns_walker(Node *node,
 	{
 		foreach(gl, context->groupClauses)
 		{
-			if (equal(node, lfirst(gl)))
+			TargetEntry *tle = lfirst(gl);
+
+			if (equal(node, tle->expr))
 				return false;	/* acceptable, do not descend more */
 		}
 	}
@@ -1003,13 +1236,15 @@ check_ungrouped_columns_walker(Node *node,
 		{
 			foreach(gl, context->groupClauses)
 			{
-				Var		   *gvar = (Var *) lfirst(gl);
+				Var		   *gvar = (Var *) ((TargetEntry *)lfirst(gl))->expr;
 
 				if (IsA(gvar, Var) &&
 					gvar->varno == var->varno &&
 					gvar->varattno == var->varattno &&
 					gvar->varlevelsup == 0)
+				{
 					return false;		/* acceptable, we're okay */
+				}
 			}
 		}
 
@@ -1040,7 +1275,7 @@ check_ungrouped_columns_walker(Node *node,
 			if (check_functional_grouping(rte->relid,
 										  var->varno,
 										  0,
-										  context->groupClauses,
+										  context->groupClauseCommonVars,
 										  &context->qry->constraintDeps))
 			{
 				*context->func_grouped_rels =
@@ -1085,6 +1320,396 @@ check_ungrouped_columns_walker(Node *node,
 }
 
 /*
+ * finalize_grouping_exprs -
+ *	  Scan the given expression tree for GROUPING() and related calls,
+ *    and validate and process their arguments.
+ *
+ * This is split out from check_ungrouped_columns above because it needs
+ * to modify the nodes (which it does in-place, not via a mutator) while
+ * check_ungrouped_columns may see only a copy of the original thanks to
+ * flattening of join alias vars. So here, we flatten each individual
+ * GROUPING argument as we see it before comparing it.
+ */
+static void
+finalize_grouping_exprs(Node *node, ParseState *pstate, Query *qry,
+						List *groupClauses, PlannerInfo *root,
+						bool have_non_var_grouping)
+{
+	check_ungrouped_columns_context context;
+
+	context.pstate = pstate;
+	context.qry = qry;
+	context.root = root;
+	context.groupClauses = groupClauses;
+	context.groupClauseCommonVars = NIL;
+	context.have_non_var_grouping = have_non_var_grouping;
+	context.func_grouped_rels = NULL;
+	context.sublevels_up = 0;
+	context.in_agg_direct_args = false;
+	finalize_grouping_exprs_walker(node, &context);
+}
+
+static bool
+finalize_grouping_exprs_walker(Node *node,
+							   check_ungrouped_columns_context *context)
+{
+	ListCell   *gl;
+
+	if (node == NULL)
+		return false;
+	if (IsA(node, Const) ||
+		IsA(node, Param))
+		return false;			/* constants are always acceptable */
+
+	if (IsA(node, Aggref))
+	{
+		Aggref	   *agg = (Aggref *) node;
+
+		if ((int) agg->agglevelsup == context->sublevels_up)
+		{
+			/*
+			 * If we find an aggregate call of the original level, do not
+			 * recurse into its normal arguments, ORDER BY arguments, or
+			 * filter; GROUPING exprs of this level are not allowed there. But
+			 * check direct arguments as though they weren't in an aggregate.
+			 */
+			bool		result;
+
+			Assert(!context->in_agg_direct_args);
+			context->in_agg_direct_args = true;
+			result = finalize_grouping_exprs_walker((Node *) agg->aggdirectargs,
+													context);
+			context->in_agg_direct_args = false;
+			return result;
+		}
+
+		/*
+		 * We can skip recursing into aggregates of higher levels altogether,
+		 * since they could not possibly contain exprs of concern to us (see
+		 * transformAggregateCall).  We do need to look at aggregates of lower
+		 * levels, however.
+		 */
+		if ((int) agg->agglevelsup > context->sublevels_up)
+			return false;
+	}
+
+	if (IsA(node, Grouping))
+	{
+		Grouping *grp = (Grouping *) node;
+
+		/*
+		 * We only need to check Grouping nodes at the exact level to which
+		 * they belong, since they cannot mix levels in arguments.
+		 */
+
+		if ((int) grp->agglevelsup == context->sublevels_up)
+		{
+			ListCell  *lc;
+			List 	  *ref_list = NIL;
+
+			foreach(lc, grp->args)
+			{
+				Node   *expr = lfirst(lc);
+				Index	ref = 0;
+
+				if (context->root)
+					expr = flatten_join_alias_vars(context->root, expr);
+
+				/*
+				 * Each expression must match a grouping entry at the current
+				 * query level. Unlike the general expression case, we don't
+				 * allow functional dependencies or outer references.
+				 */
+
+				if (IsA(expr, Var))
+				{
+					Var *var = (Var *) expr;
+
+					if (var->varlevelsup == context->sublevels_up)
+					{
+						foreach(gl, context->groupClauses)
+						{
+							TargetEntry *tle = lfirst(gl);
+							Var	  		*gvar = (Var *) tle->expr;
+
+							if (IsA(gvar, Var) &&
+								gvar->varno == var->varno &&
+								gvar->varattno == var->varattno &&
+								gvar->varlevelsup == 0)
+							{
+								ref = tle->ressortgroupref;
+								break;
+							}
+						}
+					}
+				}
+				else if (context->have_non_var_grouping
+						 && context->sublevels_up == 0)
+				{
+					foreach(gl, context->groupClauses)
+					{
+						TargetEntry *tle = lfirst(gl);
+
+						if (equal(expr, tle->expr))
+						{
+							ref = tle->ressortgroupref;
+							break;
+						}
+					}
+				}
+
+				if (ref == 0)
+					ereport(ERROR,
+							(errcode(ERRCODE_GROUPING_ERROR),
+							 errmsg("arguments to GROUPING must be grouping expressions of the associated query level"),
+							 parser_errposition(context->pstate,
+												exprLocation(expr))));
+
+				ref_list = lappend_int(ref_list, ref);
+			}
+
+			grp->refs = ref_list;
+		}
+
+		if ((int) grp->agglevelsup > context->sublevels_up)
+			return false;
+	}
+
+	if (IsA(node, Query))
+	{
+		/* Recurse into subselects */
+		bool		result;
+
+		context->sublevels_up++;
+		result = query_tree_walker((Query *) node,
+								   finalize_grouping_exprs_walker,
+								   (void *) context,
+								   0);
+		context->sublevels_up--;
+		return result;
+	}
+	return expression_tree_walker(node, finalize_grouping_exprs_walker,
+								  (void *) context);
+}
+
+
+/*
+ * Given a GroupingSet node, expand it and return a list of lists.
+ *
+ * For EMPTY nodes, return a list of one empty list.
+ *
+ * For SIMPLE nodes, return a list of one list, which is the node content.
+ *
+ * For CUBE and ROLLUP nodes, return a list of the expansions.
+ *
+ * For SET nodes, recursively expand contained CUBE and ROLLUP.
+ */
+static List *
+expand_groupingset_node(GroupingSet *gs)
+{
+	List	   *result = NIL;
+
+	switch (gs->kind)
+	{
+		case GROUPING_SET_EMPTY:
+			result = list_make1(NIL);
+			break;
+
+		case GROUPING_SET_SIMPLE:
+			result = list_make1(gs->content);
+			break;
+
+		case GROUPING_SET_ROLLUP:
+			{
+				List	   *rollup_val = gs->content;
+				ListCell   *lc;
+				int			curgroup_size = list_length(gs->content);
+
+				while (curgroup_size > 0)
+				{
+					List   *current_result = NIL;
+					int		i = curgroup_size;
+
+					foreach(lc, rollup_val)
+					{
+						GroupingSet *gs_current = (GroupingSet *) lfirst(lc);
+
+						Assert(gs_current->kind == GROUPING_SET_SIMPLE);
+
+						current_result
+							= list_concat(current_result,
+										  list_copy(gs_current->content));
+
+						/* If we are done with making the current group, break */
+						if (--i == 0)
+							break;
+					}
+
+					result = lappend(result, current_result);
+					--curgroup_size;
+				}
+
+				result = lappend(result, NIL);
+			}
+			break;
+
+		case GROUPING_SET_CUBE:
+			{
+				List   *cube_list = gs->content;
+				int		number_bits = list_length(cube_list);
+				uint32	num_sets;
+				uint32	i;
+
+				/* parser should cap this much lower */
+				Assert(number_bits < 31);
+
+				num_sets = (1U << number_bits);
+
+				for (i = 0; i < num_sets; i++)
+				{
+					List *current_result = NIL;
+					ListCell *lc;
+					uint32 mask = 1U;
+
+					foreach(lc, cube_list)
+					{
+						GroupingSet *gs_current = (GroupingSet *) lfirst(lc);
+
+						Assert(gs_current->kind == GROUPING_SET_SIMPLE);
+
+						if (mask & i)
+						{
+							current_result
+								= list_concat(current_result,
+											  list_copy(gs_current->content));
+						}
+
+						mask <<= 1;
+					}
+
+					result = lappend(result, current_result);
+				}
+			}
+			break;
+
+		case GROUPING_SET_SETS:
+			{
+				ListCell   *lc;
+
+				foreach(lc, gs->content)
+				{
+					List *current_result = expand_groupingset_node(lfirst(lc));
+
+					result = list_concat(result, current_result);
+				}
+			}
+			break;
+	}
+
+	return result;
+}
+
+static int
+cmp_list_len_desc(const void *a, const void *b)
+{
+	int			la = list_length(*(List *const *) a);
+	int			lb = list_length(*(List *const *) b);
+	return (la > lb) ? -1 : (la == lb) ? 0 : 1;
+}
+
+/*
+ * Expand a groupingSets clause to a flat list of grouping sets.
+ * The returned list is sorted by length, longest sets first.
+ *
+ * This is mainly for the planner, but we use it here too to do
+ * some consistency checks.
+ */
+
+List *
+expand_grouping_sets(List *groupingSets, int limit)
+{
+	List	   *expanded_groups = NIL;
+	List       *result = NIL;
+	double		numsets = 1;
+	ListCell   *lc;
+
+	if (groupingSets == NIL)
+		return NIL;
+
+	foreach(lc, groupingSets)
+	{
+		List *current_result = NIL;
+		GroupingSet *gs = lfirst(lc);
+
+		current_result = expand_groupingset_node(gs);
+
+		Assert(current_result != NIL);
+
+		numsets *= list_length(current_result);
+
+		if (limit >= 0 && numsets > limit)
+			return NIL;
+
+		expanded_groups = lappend(expanded_groups, current_result);
+	}
+
+	/*
+	 * Do cartesian product between sublists of expanded_groups.
+	 * While at it, remove any duplicate elements from individual
+	 * grouping sets (we must NOT change the number of sets though)
+	 */
+
+	foreach(lc, (List *) linitial(expanded_groups))
+	{
+		result = lappend(result, list_union_int(NIL, (List *) lfirst(lc)));
+	}
+
+	for_each_cell(lc, lnext(list_head(expanded_groups)))
+	{
+		List	   *p = lfirst(lc);
+		List	   *new_result = NIL;
+		ListCell   *lc2;
+
+		foreach(lc2, result)
+		{
+			List	   *q = lfirst(lc2);
+			ListCell   *lc3;
+
+			foreach(lc3, p)
+			{
+				new_result = lappend(new_result,
+									 list_union_int(q, (List *) lfirst(lc3)));
+			}
+		}
+		result = new_result;
+	}
+
+	if (list_length(result) > 1)
+	{
+		int		result_len = list_length(result);
+		List  **buf = palloc(sizeof(List *) * result_len);
+		List  **ptr = buf;
+
+		foreach(lc, result)
+		{
+			*ptr++ = lfirst(lc);
+		}
+
+		qsort(buf, result_len, sizeof(List *), cmp_list_len_desc);
+
+		result = NIL;
+		ptr = buf;
+
+		while (result_len-- > 0)
+			result = lappend(result, *ptr++);
+
+		pfree(buf);
+	}
+
+	return result;
+}
+
+/*
  * get_aggregate_argtypes
  *	Identify the specific datatypes passed to an aggregate call.
  *
diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c
index 4931dca..5d02579 100644
--- a/src/backend/parser/parse_clause.c
+++ b/src/backend/parser/parse_clause.c
@@ -36,6 +36,7 @@
 #include "utils/guc.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
+#include "miscadmin.h"
 
 
 /* Convenience macro for the most common makeNamespaceItem() case */
@@ -1663,40 +1664,163 @@ findTargetlistEntrySQL99(ParseState *pstate, Node *node, List **tlist,
 	return target_result;
 }
 
+
 /*
- * transformGroupClause -
- *	  transform a GROUP BY clause
+ * Flatten out parenthesized sublists in grouping lists, and some cases
+ * of nested grouping sets.
  *
- * GROUP BY items will be added to the targetlist (as resjunk columns)
- * if not already present, so the targetlist must be passed by reference.
+ * Inside a grouping set (ROLLUP, CUBE, or GROUPING SETS), we expect the
+ * content to be nested no more than 2 deep: i.e. ROLLUP((a,b),(c,d)) is
+ * ok, but ROLLUP((a,(b,c)),d) is flattened to ((a,b,c),d), which we then
+ * normalize to ((a,b,c),(d)).
  *
- * This is also used for window PARTITION BY clauses (which act almost the
- * same, but are always interpreted per SQL99 rules).
+ * CUBE or ROLLUP can be nested inside GROUPING SETS (but not the reverse),
+ * and we leave that alone if we find it. But if we see GROUPING SETS inside
+ * GROUPING SETS, we can flatten and normalize as follows:
+ *   GROUPING SETS (a, (b,c), GROUPING SETS ((c,d),(e)), (f,g))
+ * becomes
+ *   GROUPING SETS ((a), (b,c), (c,d), (e), (f,g))
+ *
+ * This is per the spec's syntax transformations, but these are the only such
+ * transformations we do in parse analysis, so that queries retain the
+ * originally specified grouping set syntax for CUBE and ROLLUP as much as
+ * possible when deparsed. (Full expansion of the result into a list of
+ * grouping sets is left to the planner.)
+ *
+ * When we're done, the resulting list should contain only these possible
+ * elements:
+ *   - an expression
+ *   - a CUBE or ROLLUP with a list of expressions nested 2 deep
+ *   - a GROUPING SET containing any of:
+ *      - expression lists
+ *      - empty grouping sets
+ *      - CUBE or ROLLUP nodes with lists nested 2 deep
+ * The returned list is a new list, but it does not deep-copy the old nodes
+ * except for GroupingSet nodes.
+ *
+ * As a side effect, flag whether the list has any GroupingSet nodes.
  */
-List *
-transformGroupClause(ParseState *pstate, List *grouplist,
-					 List **targetlist, List *sortClause,
-					 ParseExprKind exprKind, bool useSQL99)
+
+static Node *
+flatten_grouping_sets(Node *expr, bool toplevel, bool *hasGroupingSets)
 {
-	List	   *result = NIL;
-	ListCell   *gl;
+	/* just in case of pathological input */
+	check_stack_depth();
 
-	foreach(gl, grouplist)
+	if (expr == (Node *) NIL)
+		return (Node *) NIL;
+
+	switch (expr->type)
 	{
-		Node	   *gexpr = (Node *) lfirst(gl);
-		TargetEntry *tle;
-		bool		found = false;
+		case T_RowExpr:
+			{
+				RowExpr *r = (RowExpr *) expr;
+				if (r->row_format == COERCE_IMPLICIT_CAST)
+					return flatten_grouping_sets((Node *) r->args,
+												 false, NULL);
+			}
+			break;
+		case T_GroupingSet:
+			{
+				GroupingSet *gset = (GroupingSet *) expr;
+				ListCell   *l2;
+				List	   *result_set = NIL;
 
-		if (useSQL99)
-			tle = findTargetlistEntrySQL99(pstate, gexpr,
-										   targetlist, exprKind);
-		else
-			tle = findTargetlistEntrySQL92(pstate, gexpr,
-										   targetlist, exprKind);
+				if (hasGroupingSets)
+					*hasGroupingSets = true;
 
-		/* Eliminate duplicates (GROUP BY x, x) */
-		if (targetIsInSortList(tle, InvalidOid, result))
-			continue;
+				/*
+				 * At the top level, we skip over all empty grouping sets; the
+				 * caller can supply the canonical GROUP BY () if nothing is left.
+				 */
+
+				if (toplevel && gset->kind == GROUPING_SET_EMPTY)
+					return (Node *) NIL;
+
+				foreach(l2, gset->content)
+				{
+					Node   *n2 = flatten_grouping_sets(lfirst(l2), false, NULL);
+
+					result_set = lappend(result_set, n2);
+				}
+
+				/*
+				 * At top level, keep the grouping set node; but if we're in a nested
+				 * grouping set, then we need to concat the flattened result into the
+				 * outer list if it's simply nested.
+				 */
+
+				if (toplevel || (gset->kind != GROUPING_SET_SETS))
+					return (Node *) makeGroupingSet(gset->kind, result_set,
+													gset->location);
+				else
+					return (Node *) result_set;
+			}
+		case T_List:
+			{
+				List	   *result = NIL;
+				ListCell   *l;
+
+				foreach(l, (List *) expr)
+				{
+					Node	   *n = flatten_grouping_sets(lfirst(l), toplevel, hasGroupingSets);
+					if (n != (Node *) NIL)
+					{
+						if (IsA(n, List))
+							result = list_concat(result, (List *) n);
+						else
+							result = lappend(result, n);
+					}
+				}
+
+				return (Node *) result;
+			}
+		default:
+			break;
+	}
+
+	return expr;
+}
+
+static Index
+transformGroupClauseExpr(List **flatresult, Bitmapset *seen_local,
+						 ParseState *pstate, Node *gexpr,
+						 List **targetlist, List *sortClause,
+						 ParseExprKind exprKind, bool useSQL99, bool toplevel)
+{
+	TargetEntry *tle;
+	bool		found = false;
+
+	if (useSQL99)
+		tle = findTargetlistEntrySQL99(pstate, gexpr,
+									   targetlist, exprKind);
+	else
+		tle = findTargetlistEntrySQL92(pstate, gexpr,
+									   targetlist, exprKind);
+
+	if (tle->ressortgroupref > 0)
+	{
+		ListCell   *sl;
+
+		/*
+		 * Eliminate duplicates (GROUP BY x, x) but only at local level.
+		 * (Duplicates in grouping sets can affect the number of returned
+		 * rows, so can't be dropped indiscriminately.)
+		 *
+		 * Since we don't care about anything except the sortgroupref,
+		 * we can use a bitmapset rather than scanning lists.
+		 */
+		if (bms_is_member(tle->ressortgroupref, seen_local))
+			return 0;
+
+		/*
+		 * If we're already in the flat clause list, we don't need
+		 * to consider adding ourselves again.
+		 */
+		found = targetIsInSortList(tle, InvalidOid, *flatresult);
+		if (found)
+			return tle->ressortgroupref;
 
 		/*
 		 * If the GROUP BY tlist entry also appears in ORDER BY, copy operator
@@ -1708,35 +1832,263 @@ transformGroupClause(ParseState *pstate, List *grouplist,
 		 * sort step, and it allows the user to choose the equality semantics
 		 * used by GROUP BY, should she be working with a datatype that has
 		 * more than one equality operator.
+		 *
+		 * If we're in a grouping set, though, we force our requested ordering
+		 * to be NULLS LAST, because if we have any hope of using a sorted agg
+		 * for the job, we're going to be tacking on generated NULL values
+		 * after the corresponding groups. If the user demands nulls first,
+		 * another sort step is going to be inevitable, but that's the
+		 * planner's problem.
 		 */
-		if (tle->ressortgroupref > 0)
+
+		foreach(sl, sortClause)
 		{
-			ListCell   *sl;
+			SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
 
-			foreach(sl, sortClause)
+			if (sc->tleSortGroupRef == tle->ressortgroupref)
 			{
-				SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
+				SortGroupClause *grpc = copyObject(sc);
+				if (!toplevel)
+					grpc->nulls_first = false;
+				*flatresult = lappend(*flatresult, grpc);
+				found = true;
+				break;
+			}
+		}
+	}
 
-				if (sc->tleSortGroupRef == tle->ressortgroupref)
-				{
-					result = lappend(result, copyObject(sc));
-					found = true;
+	/*
+	 * If no match in ORDER BY, just add it to the result using default
+	 * sort/group semantics.
+	 */
+	if (!found)
+		*flatresult = addTargetToGroupList(pstate, tle,
+										   *flatresult, *targetlist,
+										   exprLocation(gexpr),
+										   true);
+
+	/*
+	 * _something_ must have assigned us a sortgroupref by now...
+	 */
+
+	return tle->ressortgroupref;
+}
+
+
+static List *
+transformGroupClauseList(List **flatresult,
+						 ParseState *pstate, List *list,
+						 List **targetlist, List *sortClause,
+						 ParseExprKind exprKind, bool useSQL99, bool toplevel)
+{
+	Bitmapset  *seen_local = NULL;
+	List	   *result = NIL;
+	ListCell   *gl;
+
+	foreach(gl, list)
+	{
+		Node        *gexpr = (Node *) lfirst(gl);
+
+		Index ref = transformGroupClauseExpr(flatresult,
+											 seen_local,
+											 pstate,
+											 gexpr,
+											 targetlist,
+											 sortClause,
+											 exprKind,
+											 useSQL99,
+											 toplevel);
+		if (ref > 0)
+		{
+			seen_local = bms_add_member(seen_local, ref);
+			result = lappend_int(result, ref);
+		}
+	}
+
+	return result;
+}
+
+static Node *
+transformGroupingSet(List **flatresult,
+					 ParseState *pstate, GroupingSet *gset,
+					 List **targetlist, List *sortClause,
+					 ParseExprKind exprKind, bool useSQL99, bool toplevel)
+{
+	ListCell   *gl;
+	List	   *content = NIL;
+
+	Assert(toplevel || gset->kind != GROUPING_SET_SETS);
+
+	foreach(gl, gset->content)
+	{
+		Node   *n = lfirst(gl);
+
+		if (IsA(n, List))
+		{
+			List *l = transformGroupClauseList(flatresult,
+											   pstate, (List *) n,
+											   targetlist, sortClause,
+											   exprKind, useSQL99, false);
+
+			content = lappend(content, makeGroupingSet(GROUPING_SET_SIMPLE,
+													   l,
+													   exprLocation(n)));
+		}
+		else if (IsA(n, GroupingSet))
+		{
+			GroupingSet *gset2 = (GroupingSet *) n;
+
+			content = lappend(content, transformGroupingSet(flatresult,
+															pstate, gset2,
+															targetlist, sortClause,
+															exprKind, useSQL99, false));
+		}
+		else
+		{
+			Index ref = transformGroupClauseExpr(flatresult,
+												 NULL,
+												 pstate,
+												 n,
+												 targetlist,
+												 sortClause,
+												 exprKind,
+												 useSQL99,
+												 false);
+
+			content = lappend(content, makeGroupingSet(GROUPING_SET_SIMPLE,
+													   list_make1_int(ref),
+													   exprLocation(n)));
+		}
+	}
+
+	/* Arbitrarily cap the size of CUBE, which has exponential growth */
+	if (gset->kind == GROUPING_SET_CUBE)
+	{
+		if (list_length(content) > 12)
+			ereport(ERROR,
+					(errcode(ERRCODE_TOO_MANY_COLUMNS),
+					 errmsg("CUBE is limited to 12 elements"),
+					 parser_errposition(pstate, gset->location)));
+	}
+
+	return (Node *) makeGroupingSet(gset->kind, content, gset->location);
+}
+
+
+/*
+ * transformGroupClause -
+ *	  transform a GROUP BY clause
+ *
+ * GROUP BY items will be added to the targetlist (as resjunk columns)
+ * if not already present, so the targetlist must be passed by reference.
+ *
+ * This is also used for window PARTITION BY clauses (which act almost the
+ * same, but are always interpreted per SQL99 rules).
+ *
+ * Grouping sets make this a lot more complex than it was. Our goal here is
+ * twofold: we make a flat list of SortGroupClause nodes referencing each
+ * distinct expression used for grouping, with those expressions added to the
+ * targetlist if needed. At the same time, we build the groupingSets tree,
+ * which stores only ressortgrouprefs as integer lists inside GroupingSet nodes
+ * (possibly nested, but limited in depth: a GROUPING_SET_SETS node can contain
+ * nested SIMPLE, CUBE or ROLLUP nodes, but not more sets - we flatten that
+ * out; while CUBE and ROLLUP can contain only SIMPLE nodes).
+ *
+ * We skip much of the hard work if there are no grouping sets.
+ *
+ * One subtlety is that the groupClause list can end up empty while the
+ * groupingSets list is not; this happens if there are only empty grouping
+ * sets, or an explicit GROUP BY (). This has the same effect as specifying
+ * aggregates or a HAVING clause with no GROUP BY; the output is one row per
+ * grouping set even if the input is empty.
+ */
+List *
+transformGroupClause(ParseState *pstate, List *grouplist, List **groupingSets,
+					 List **targetlist, List *sortClause,
+					 ParseExprKind exprKind, bool useSQL99)
+{
+	List	   *result = NIL;
+	List	   *flat_grouplist;
+	List	   *gsets = NIL;
+	ListCell   *gl;
+	bool        hasGroupingSets = false;
+	Bitmapset  *seen_local = NULL;
+
+	/*
+	 * Recursively flatten implicit RowExprs. (Technically this is only
+	 * needed for GROUP BY, per the syntax rules for grouping sets, but
+	 * we do it anyway.)
+	 */
+	flat_grouplist = (List *) flatten_grouping_sets((Node *) grouplist,
+													true,
+													&hasGroupingSets);
+
+	/*
+	 * If the list is now empty, but hasGroupingSets is true, it's because
+	 * we elided redundant empty grouping sets. Restore a single empty
+	 * grouping set to leave a canonical form: GROUP BY ()
+	 */
+
+	if (flat_grouplist == NIL && hasGroupingSets)
+	{
+		flat_grouplist = list_make1(makeGroupingSet(GROUPING_SET_EMPTY,
+													NIL,
+													exprLocation((Node *) grouplist)));
+	}
+
+	foreach(gl, flat_grouplist)
+	{
+		Node        *gexpr = (Node *) lfirst(gl);
+
+		if (IsA(gexpr, GroupingSet))
+		{
+			GroupingSet *gset = (GroupingSet *) gexpr;
+
+			switch (gset->kind)
+			{
+				case GROUPING_SET_EMPTY:
+					gsets = lappend(gsets, gset);
+					break;
+				case GROUPING_SET_SIMPLE:
+					/* can't happen */
+					Assert(false);
+					break;
+				case GROUPING_SET_SETS:
+				case GROUPING_SET_CUBE:
+				case GROUPING_SET_ROLLUP:
+					gsets = lappend(gsets,
+									transformGroupingSet(&result,
+														 pstate, gset,
+														 targetlist, sortClause,
+														 exprKind, useSQL99, true));
 					break;
-				}
 			}
 		}
+		else
+		{
+			Index ref = transformGroupClauseExpr(&result, seen_local,
+												 pstate, gexpr,
+												 targetlist, sortClause,
+												 exprKind, useSQL99, true);
 
-		/*
-		 * If no match in ORDER BY, just add it to the result using default
-		 * sort/group semantics.
-		 */
-		if (!found)
-			result = addTargetToGroupList(pstate, tle,
-										  result, *targetlist,
-										  exprLocation(gexpr),
-										  true);
+			if (ref > 0)
+			{
+				seen_local = bms_add_member(seen_local, ref);
+				if (hasGroupingSets)
+					gsets = lappend(gsets,
+									makeGroupingSet(GROUPING_SET_SIMPLE,
+													list_make1_int(ref),
+													exprLocation(gexpr)));
+			}
+		}
 	}
 
+	/* parser should prevent this */
+	Assert(gsets == NIL || groupingSets != NULL);
+
+	if (groupingSets)
+		*groupingSets = gsets;
+
 	return result;
 }
 
@@ -1841,6 +2193,7 @@ transformWindowDefinitions(ParseState *pstate,
 										  true /* force SQL99 rules */ );
 		partitionClause = transformGroupClause(pstate,
 											   windef->partitionClause,
+											   NULL,
 											   targetlist,
 											   orderClause,
 											   EXPR_KIND_WINDOW_PARTITION,
diff --git a/src/backend/parser/parse_expr.c b/src/backend/parser/parse_expr.c
index 4a8aaf6..0bb8856 100644
--- a/src/backend/parser/parse_expr.c
+++ b/src/backend/parser/parse_expr.c
@@ -32,6 +32,7 @@
 #include "parser/parse_relation.h"
 #include "parser/parse_target.h"
 #include "parser/parse_type.h"
+#include "parser/parse_agg.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
 #include "utils/xml.h"
@@ -166,6 +167,10 @@ transformExprRecurse(ParseState *pstate, Node *expr)
 										InvalidOid, InvalidOid, -1);
 			break;
 
+		case T_Grouping:
+			result = transformGroupingExpr(pstate, (Grouping *) expr);
+			break;
+
 		case T_TypeCast:
 			{
 				TypeCast   *tc = (TypeCast *) expr;
diff --git a/src/backend/parser/parse_target.c b/src/backend/parser/parse_target.c
index 328e0c6..1e48346 100644
--- a/src/backend/parser/parse_target.c
+++ b/src/backend/parser/parse_target.c
@@ -1628,6 +1628,9 @@ FigureColnameInternal(Node *node, char **name)
 				}
 			}
 			break;
+		case T_Grouping:
+			*name = "grouping";
+			return 2;
 		case T_A_Indirection:
 			{
 				A_Indirection *ind = (A_Indirection *) node;
diff --git a/src/backend/rewrite/rewriteHandler.c b/src/backend/rewrite/rewriteHandler.c
index e6c5530..5c4e201 100644
--- a/src/backend/rewrite/rewriteHandler.c
+++ b/src/backend/rewrite/rewriteHandler.c
@@ -2063,7 +2063,7 @@ view_query_is_auto_updatable(Query *viewquery, bool check_cols)
 	if (viewquery->distinctClause != NIL)
 		return gettext_noop("Views containing DISTINCT are not automatically updatable.");
 
-	if (viewquery->groupClause != NIL)
+	if (viewquery->groupClause != NIL || viewquery->groupingSets)
 		return gettext_noop("Views containing GROUP BY are not automatically updatable.");
 
 	if (viewquery->havingQual != NULL)
diff --git a/src/backend/rewrite/rewriteManip.c b/src/backend/rewrite/rewriteManip.c
index fb20314..02099a4 100644
--- a/src/backend/rewrite/rewriteManip.c
+++ b/src/backend/rewrite/rewriteManip.c
@@ -92,6 +92,11 @@ contain_aggs_of_level_walker(Node *node,
 			return true;		/* abort the tree traversal and return true */
 		/* else fall through to examine argument */
 	}
+	if (IsA(node, Grouping))
+	{
+		if (((Grouping *) node)->agglevelsup == context->sublevels_up)
+			return true;
+	}
 	if (IsA(node, Query))
 	{
 		/* Recurse into subselects */
@@ -157,6 +162,15 @@ locate_agg_of_level_walker(Node *node,
 		}
 		/* else fall through to examine argument */
 	}
+	if (IsA(node, Grouping))
+	{
+		if (((Grouping *) node)->agglevelsup == context->sublevels_up &&
+			((Grouping *) node)->location >= 0)
+		{
+			context->agg_location = ((Grouping *) node)->location;
+			return true;		/* abort the tree traversal and return true */
+		}
+	}
 	if (IsA(node, Query))
 	{
 		/* Recurse into subselects */
@@ -705,6 +719,14 @@ IncrementVarSublevelsUp_walker(Node *node,
 			agg->agglevelsup += context->delta_sublevels_up;
 		/* fall through to recurse into argument */
 	}
+	if (IsA(node, Grouping))
+	{
+		Grouping	   *grp = (Grouping *) node;
+
+		if (grp->agglevelsup >= context->min_sublevels_up)
+			grp->agglevelsup += context->delta_sublevels_up;
+		/* fall through to recurse into argument */
+	}
 	if (IsA(node, PlaceHolderVar))
 	{
 		PlaceHolderVar *phv = (PlaceHolderVar *) node;
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 7237e5d..5344736 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -360,9 +360,11 @@ static void get_target_list(List *targetList, deparse_context *context,
 static void get_setop_query(Node *setOp, Query *query,
 				deparse_context *context,
 				TupleDesc resultDesc);
-static Node *get_rule_sortgroupclause(SortGroupClause *srt, List *tlist,
+static Node *get_rule_sortgroupclause(Index ref, List *tlist,
 						 bool force_colno,
 						 deparse_context *context);
+static void get_rule_groupingset(GroupingSet *gset, List *targetlist,
+								 bool omit_parens, deparse_context *context);
 static void get_rule_orderby(List *orderList, List *targetList,
 				 bool force_colno, deparse_context *context);
 static void get_rule_windowclause(Query *query, deparse_context *context);
@@ -4535,7 +4537,7 @@ get_basic_select_query(Query *query, deparse_context *context,
 				SortGroupClause *srt = (SortGroupClause *) lfirst(l);
 
 				appendStringInfoString(buf, sep);
-				get_rule_sortgroupclause(srt, query->targetList,
+				get_rule_sortgroupclause(srt->tleSortGroupRef, query->targetList,
 										 false, context);
 				sep = ", ";
 			}
@@ -4560,19 +4562,35 @@ get_basic_select_query(Query *query, deparse_context *context,
 	}
 
 	/* Add the GROUP BY clause if given */
-	if (query->groupClause != NULL)
+	if (query->groupClause != NULL || query->groupingSets != NULL)
 	{
 		appendContextKeyword(context, " GROUP BY ",
 							 -PRETTYINDENT_STD, PRETTYINDENT_STD, 1);
-		sep = "";
-		foreach(l, query->groupClause)
+
+		if (query->groupingSets == NIL)
 		{
-			SortGroupClause *grp = (SortGroupClause *) lfirst(l);
+			sep = "";
+			foreach(l, query->groupClause)
+			{
+				SortGroupClause *grp = (SortGroupClause *) lfirst(l);
 
-			appendStringInfoString(buf, sep);
-			get_rule_sortgroupclause(grp, query->targetList,
-									 false, context);
-			sep = ", ";
+				appendStringInfoString(buf, sep);
+				get_rule_sortgroupclause(grp->tleSortGroupRef, query->targetList,
+										 false, context);
+				sep = ", ";
+			}
+		}
+		else
+		{
+			sep = "";
+			foreach(l, query->groupingSets)
+			{
+				GroupingSet *grp = lfirst(l);
+
+				appendStringInfoString(buf, sep);
+				get_rule_groupingset(grp, query->targetList, true, context);
+				sep = ", ";
+			}
 		}
 	}
 
@@ -4640,7 +4658,7 @@ get_target_list(List *targetList, deparse_context *context,
 		 * different from a whole-row Var).  We need to call get_variable
 		 * directly so that we can tell it to do the right thing.
 		 */
-		if (tle->expr && IsA(tle->expr, Var))
+		if (tle->expr && (IsA(tle->expr, Var) || IsA(tle->expr, GroupedVar)))
 		{
 			attname = get_variable((Var *) tle->expr, 0, true, context);
 		}
@@ -4859,14 +4877,14 @@ get_setop_query(Node *setOp, Query *query, deparse_context *context,
  * Also returns the expression tree, so caller need not find it again.
  */
 static Node *
-get_rule_sortgroupclause(SortGroupClause *srt, List *tlist, bool force_colno,
+get_rule_sortgroupclause(Index ref, List *tlist, bool force_colno,
 						 deparse_context *context)
 {
 	StringInfo	buf = context->buf;
 	TargetEntry *tle;
 	Node	   *expr;
 
-	tle = get_sortgroupclause_tle(srt, tlist);
+	tle = get_sortgroupref_tle(ref, tlist);
 	expr = (Node *) tle->expr;
 
 	/*
@@ -4891,6 +4909,66 @@ get_rule_sortgroupclause(SortGroupClause *srt, List *tlist, bool force_colno,
 }
 
 /*
+ * Display a GroupingSet
+ */
+static void
+get_rule_groupingset(GroupingSet *gset, List *targetlist,
+					 bool omit_parens, deparse_context *context)
+{
+	ListCell   *l;
+	StringInfo	buf = context->buf;
+	bool		omit_child_parens = true;
+	char	   *sep = "";
+
+	switch (gset->kind)
+	{
+		case GROUPING_SET_EMPTY:
+			appendStringInfoString(buf, "()");
+			return;
+
+		case GROUPING_SET_SIMPLE:
+			{
+				if (!omit_parens || list_length(gset->content) != 1)
+					appendStringInfoString(buf, "(");
+
+				foreach(l, gset->content)
+				{
+					Index ref = lfirst_int(l);
+
+					appendStringInfoString(buf, sep);
+					get_rule_sortgroupclause(ref, targetlist,
+											 false, context);
+					sep = ", ";
+				}
+
+				if (!omit_parens || list_length(gset->content) != 1)
+					appendStringInfoString(buf, ")");
+			}
+			return;
+
+		case GROUPING_SET_ROLLUP:
+			appendStringInfoString(buf, "ROLLUP(");
+			break;
+		case GROUPING_SET_CUBE:
+			appendStringInfoString(buf, "CUBE(");
+			break;
+		case GROUPING_SET_SETS:
+			appendStringInfoString(buf, "GROUPING SETS (");
+			omit_child_parens = false;
+			break;
+	}
+
+	foreach(l, gset->content)
+	{
+		appendStringInfoString(buf, sep);
+		get_rule_groupingset(lfirst(l), targetlist, omit_child_parens, context);
+		sep = ", ";
+	}
+
+	appendStringInfoString(buf, ")");
+}
+
+/*
  * Display an ORDER BY list.
  */
 static void
@@ -4910,7 +4988,7 @@ get_rule_orderby(List *orderList, List *targetList,
 		TypeCacheEntry *typentry;
 
 		appendStringInfoString(buf, sep);
-		sortexpr = get_rule_sortgroupclause(srt, targetList,
+		sortexpr = get_rule_sortgroupclause(srt->tleSortGroupRef, targetList,
 											force_colno, context);
 		sortcoltype = exprType(sortexpr);
 		/* See whether operator is default < or > for datatype */
@@ -5010,7 +5088,7 @@ get_rule_windowspec(WindowClause *wc, List *targetList,
 			SortGroupClause *grp = (SortGroupClause *) lfirst(l);
 
 			appendStringInfoString(buf, sep);
-			get_rule_sortgroupclause(grp, targetList,
+			get_rule_sortgroupclause(grp->tleSortGroupRef, targetList,
 									 false, context);
 			sep = ", ";
 		}
@@ -5559,10 +5637,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		pop_child_plan(dpns, &save_dpns);
@@ -5584,10 +5662,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		pop_child_plan(dpns, &save_dpns);
@@ -5607,10 +5685,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		return NULL;
@@ -5650,10 +5728,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		pop_child_plan(dpns, &save_dpns);
@@ -6684,6 +6762,10 @@ get_rule_expr(Node *node, deparse_context *context,
 			(void) get_variable((Var *) node, 0, false, context);
 			break;
 
+		case T_GroupedVar:
+			(void) get_variable((Var *) node, 0, false, context);
+			break;
+
 		case T_Const:
 			get_const_expr((Const *) node, context, 0);
 			break;
@@ -7580,6 +7662,16 @@ get_rule_expr(Node *node, deparse_context *context,
 			}
 			break;
 
+		case T_Grouping:
+			{
+				Grouping *gexpr = (Grouping *) node;
+
+				appendStringInfoString(buf, "GROUPING(");
+				get_rule_expr((Node *) gexpr->args, context, true);
+				appendStringInfoChar(buf, ')');
+			}
+			break;
+
 		case T_List:
 			{
 				char	   *sep;
diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index e932ccf..c769e83 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -3158,6 +3158,8 @@ add_unique_group_var(PlannerInfo *root, List *varinfos,
  *	groupExprs - list of expressions being grouped by
  *	input_rows - number of rows estimated to arrive at the group/unique
  *		filter step
+ *	pgset - NULL, or a List ** pointing to a grouping set to filter the
+ *		groupExprs against
  *
  * Given the lack of any cross-correlation statistics in the system, it's
  * impossible to do anything really trustworthy with GROUP BY conditions
@@ -3205,11 +3207,13 @@ add_unique_group_var(PlannerInfo *root, List *varinfos,
  * but we don't have the info to do better).
  */
 double
-estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
+estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows,
+					List **pgset)
 {
 	List	   *varinfos = NIL;
 	double		numdistinct;
 	ListCell   *l;
+	int			i;
 
 	/*
 	 * We don't ever want to return an estimate of zero groups, as that tends
@@ -3224,7 +3228,7 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 	 * for normal cases with GROUP BY or DISTINCT, but it is possible for
 	 * corner cases with set operations.)
 	 */
-	if (groupExprs == NIL)
+	if (groupExprs == NIL || (pgset && list_length(*pgset) < 1))
 		return 1.0;
 
 	/*
@@ -3236,6 +3240,7 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 	 */
 	numdistinct = 1.0;
 
+	i = 0;
 	foreach(l, groupExprs)
 	{
 		Node	   *groupexpr = (Node *) lfirst(l);
@@ -3243,6 +3248,10 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 		List	   *varshere;
 		ListCell   *l2;
 
+		/* is expression in this grouping set? */
+		if (pgset && !list_member_int(*pgset, i++))
+			continue;
+
 		/* Short-circuit for expressions returning boolean */
 		if (exprType(groupexpr) == BOOLOID)
 		{
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index b271f21..ee1fe74 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -130,6 +130,8 @@ typedef struct ExprContext
 	Datum	   *ecxt_aggvalues; /* precomputed values for aggs/windowfuncs */
 	bool	   *ecxt_aggnulls;	/* null flags for aggs/windowfuncs */
 
+	Bitmapset  *grouped_cols;	/* which columns exist in current grouping set */
+
 	/* Value to substitute for CaseTestExpr nodes in expression */
 	Datum		caseValue_datum;
 	bool		caseValue_isNull;
@@ -911,6 +913,16 @@ typedef struct MinMaxExprState
 } MinMaxExprState;
 
 /* ----------------
+ *		GroupingState node
+ * ----------------
+ */
+typedef struct GroupingState
+{
+	ExprState	xprstate;
+	List	   *clauses;
+} GroupingState;
+
+/* ----------------
  *		XmlExprState node
  * ----------------
  */
@@ -1701,19 +1713,26 @@ typedef struct GroupState
 /* these structs are private in nodeAgg.c: */
 typedef struct AggStatePerAggData *AggStatePerAgg;
 typedef struct AggStatePerGroupData *AggStatePerGroup;
+typedef struct AggStatePerGroupingSetData *AggStatePerGroupingSet;
 
 typedef struct AggState
 {
 	ScanState	ss;				/* its first field is NodeTag */
 	List	   *aggs;			/* all Aggref nodes in targetlist & quals */
 	int			numaggs;		/* length of list (could be zero!) */
+	int			numsets;		/* number of grouping sets (or 0) */
 	FmgrInfo   *eqfunctions;	/* per-grouping-field equality fns */
 	FmgrInfo   *hashfunctions;	/* per-grouping-field hash fns */
 	AggStatePerAgg peragg;		/* per-Aggref information */
-	MemoryContext aggcontext;	/* memory context for long-lived data */
+	ExprContext **aggcontext;	/* econtexts for long-lived data */
 	ExprContext *tmpcontext;	/* econtext for input expressions */
 	AggStatePerAgg curperagg;	/* identifies currently active aggregate */
+	bool		input_done;		/* indicates end of input */
 	bool		agg_done;		/* indicates completion of Agg scan */
+	int			projected_set;	/* index of last grouping set projected */
+	int			current_set;	/* index of grouping set now being evaluated */
+	Bitmapset **grouped_cols;	/* column groupings for rollup */
+	int		   *gset_lengths;	/* lengths of grouping sets */
 	/* these fields are used in AGG_PLAIN and AGG_SORTED modes: */
 	AggStatePerGroup pergroup;	/* per-Aggref-per-group working state */
 	HeapTuple	grp_firstTuple; /* copy of first tuple of current group */
diff --git a/src/include/nodes/makefuncs.h b/src/include/nodes/makefuncs.h
index e108b85..bd3b2a5 100644
--- a/src/include/nodes/makefuncs.h
+++ b/src/include/nodes/makefuncs.h
@@ -81,4 +81,6 @@ extern DefElem *makeDefElem(char *name, Node *arg);
 extern DefElem *makeDefElemExtended(char *nameSpace, char *name, Node *arg,
 					DefElemAction defaction);
 
+extern GroupingSet *makeGroupingSet(GroupingSetKind kind, List *content, int location);
+
 #endif   /* MAKEFUNC_H */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index a031b88..7998c95 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -115,6 +115,7 @@ typedef enum NodeTag
 	T_SortState,
 	T_GroupState,
 	T_AggState,
+	T_GroupingState,
 	T_WindowAggState,
 	T_UniqueState,
 	T_HashState,
@@ -171,6 +172,9 @@ typedef enum NodeTag
 	T_JoinExpr,
 	T_FromExpr,
 	T_IntoClause,
+	T_GroupedVar,
+	T_Grouping,
+	T_GroupingSet,
 
 	/*
 	 * TAGS FOR EXPRESSION STATE NODES (execnodes.h)
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 28029fe..a2fe71c 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -134,6 +134,8 @@ typedef struct Query
 
 	List	   *groupClause;	/* a list of SortGroupClause's */
 
+	List	   *groupingSets;	/* a list of grouping sets if present */
+
 	Node	   *havingQual;		/* qualifications applied to groups */
 
 	List	   *windowClause;	/* a list of WindowClause's */
diff --git a/src/include/nodes/pg_list.h b/src/include/nodes/pg_list.h
index c545115..45eacda 100644
--- a/src/include/nodes/pg_list.h
+++ b/src/include/nodes/pg_list.h
@@ -229,8 +229,9 @@ extern List *list_union_int(const List *list1, const List *list2);
 extern List *list_union_oid(const List *list1, const List *list2);
 
 extern List *list_intersection(const List *list1, const List *list2);
+extern List *list_intersection_int(const List *list1, const List *list2);
 
-/* currently, there's no need for list_intersection_int etc */
+/* currently, there's no need for list_intersection_ptr etc */
 
 extern List *list_difference(const List *list1, const List *list2);
 extern List *list_difference_ptr(const List *list1, const List *list2);
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index 3b9c683..077ae9f 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -631,6 +631,7 @@ typedef struct Agg
 	AttrNumber *grpColIdx;		/* their indexes in the target list */
 	Oid		   *grpOperators;	/* equality operators to compare with */
 	long		numGroups;		/* estimated number of groups in input */
+	List	   *groupingSets;	/* grouping sets to use */
 } Agg;
 
 /* ----------------
diff --git a/src/include/nodes/primnodes.h b/src/include/nodes/primnodes.h
index 6d9f3d9..4c03e40 100644
--- a/src/include/nodes/primnodes.h
+++ b/src/include/nodes/primnodes.h
@@ -159,6 +159,28 @@ typedef struct Var
 	int			location;		/* token location, or -1 if unknown */
 } Var;
 
+/*
+ * GroupedVar - expression node representing a grouping set variable.
+ *
+ * Structurally identical to a Var node; it provides a logical
+ * representation of a grouping set column and is also used when
+ * projecting rows during execution of a query with grouping sets.
+ */
+
+typedef Var GroupedVar;
+
+/*
+ * Grouping - expression node for the result of a GROUPING() operation
+ */
+typedef struct Grouping
+{
+	Expr		xpr;
+	List	   *args;			/* arguments, not evaluated but kept for
+								 * benefit of EXPLAIN etc. */
+	List	   *refs;			/* ressortgrouprefs of arguments */
+	List	   *cols;			/* actual column positions set by planner */
+	int			location;		/* token location */
+	Index		agglevelsup;	/* same as Aggref.agglevelsup */
+} Grouping;
+
 /*
  * Const
  */
@@ -1147,6 +1169,32 @@ typedef struct CurrentOfExpr
 	int			cursor_param;	/* refcursor parameter number, or 0 */
 } CurrentOfExpr;
 
+/*
+ * Node representing substructure in GROUPING SETS
+ *
+ * This is not actually executable, but it's used in the raw parsetree
+ * representation of GROUP BY, and in the groupingSets field of Query, to
+ * preserve the original structure of rollup/cube clauses for readability
+ * rather than reducing everything to grouping sets.
+ */
+
+typedef enum
+{
+	GROUPING_SET_EMPTY,
+	GROUPING_SET_SIMPLE,
+	GROUPING_SET_ROLLUP,
+	GROUPING_SET_CUBE,
+	GROUPING_SET_SETS
+} GroupingSetKind;
+
+typedef struct GroupingSet
+{
+	Expr		xpr;
+	GroupingSetKind kind;
+	List	   *content;
+	int			location;
+} GroupingSet;
+
 /*--------------------
  * TargetEntry -
  *	   a target entry (used in query target lists)
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index dacbe9c..33b3beb 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -256,6 +256,11 @@ typedef struct PlannerInfo
 
 	/* optional private data for join_search_hook, e.g., GEQO */
 	void	   *join_search_private;
+
+	/* for GroupedVar fixup in setrefs.c */
+	AttrNumber *groupColIdx;
+	/* for Grouping fixup in setrefs.c */
+	AttrNumber *grouping_map;
 } PlannerInfo;
 
 
diff --git a/src/include/optimizer/planmain.h b/src/include/optimizer/planmain.h
index 4504250..64f3aa3 100644
--- a/src/include/optimizer/planmain.h
+++ b/src/include/optimizer/planmain.h
@@ -58,6 +58,7 @@ extern Sort *make_sort_from_groupcols(PlannerInfo *root, List *groupcls,
 extern Agg *make_agg(PlannerInfo *root, List *tlist, List *qual,
 		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
 		 int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators,
+		 List *groupingSets,
 		 long numGroups,
 		 Plan *lefttree);
 extern WindowAgg *make_windowagg(PlannerInfo *root, List *tlist,
diff --git a/src/include/optimizer/tlist.h b/src/include/optimizer/tlist.h
index 1ebb635..c8b1c93 100644
--- a/src/include/optimizer/tlist.h
+++ b/src/include/optimizer/tlist.h
@@ -43,6 +43,9 @@ extern Node *get_sortgroupclause_expr(SortGroupClause *sgClause,
 extern List *get_sortgrouplist_exprs(List *sgClauses,
 						List *targetList);
 
+extern SortGroupClause *get_sortgroupref_clause(Index sortref,
+					 List *clauses);
+
 extern Oid *extract_grouping_ops(List *groupClause);
 extern AttrNumber *extract_grouping_cols(List *groupClause, List *tlist);
 extern bool grouping_is_sortable(List *groupClause);
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index 17888ad..e38b6bc 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -98,6 +98,7 @@ PG_KEYWORD("cost", COST, UNRESERVED_KEYWORD)
 PG_KEYWORD("create", CREATE, RESERVED_KEYWORD)
 PG_KEYWORD("cross", CROSS, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("csv", CSV, UNRESERVED_KEYWORD)
+PG_KEYWORD("cube", CUBE, COL_NAME_KEYWORD)
 PG_KEYWORD("current", CURRENT_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("current_catalog", CURRENT_CATALOG, RESERVED_KEYWORD)
 PG_KEYWORD("current_date", CURRENT_DATE, RESERVED_KEYWORD)
@@ -173,6 +174,7 @@ PG_KEYWORD("grant", GRANT, RESERVED_KEYWORD)
 PG_KEYWORD("granted", GRANTED, UNRESERVED_KEYWORD)
 PG_KEYWORD("greatest", GREATEST, COL_NAME_KEYWORD)
 PG_KEYWORD("group", GROUP_P, RESERVED_KEYWORD)
+PG_KEYWORD("grouping", GROUPING, COL_NAME_KEYWORD)
 PG_KEYWORD("handler", HANDLER, UNRESERVED_KEYWORD)
 PG_KEYWORD("having", HAVING, RESERVED_KEYWORD)
 PG_KEYWORD("header", HEADER_P, UNRESERVED_KEYWORD)
@@ -322,6 +324,7 @@ PG_KEYWORD("revoke", REVOKE, UNRESERVED_KEYWORD)
 PG_KEYWORD("right", RIGHT, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("role", ROLE, UNRESERVED_KEYWORD)
 PG_KEYWORD("rollback", ROLLBACK, UNRESERVED_KEYWORD)
+PG_KEYWORD("rollup", ROLLUP, COL_NAME_KEYWORD)
 PG_KEYWORD("row", ROW, COL_NAME_KEYWORD)
 PG_KEYWORD("rows", ROWS, UNRESERVED_KEYWORD)
 PG_KEYWORD("rule", RULE, UNRESERVED_KEYWORD)
@@ -340,6 +343,7 @@ PG_KEYWORD("session", SESSION, UNRESERVED_KEYWORD)
 PG_KEYWORD("session_user", SESSION_USER, RESERVED_KEYWORD)
 PG_KEYWORD("set", SET, UNRESERVED_KEYWORD)
 PG_KEYWORD("setof", SETOF, COL_NAME_KEYWORD)
+PG_KEYWORD("sets", SETS, UNRESERVED_KEYWORD)
 PG_KEYWORD("share", SHARE, UNRESERVED_KEYWORD)
 PG_KEYWORD("show", SHOW, UNRESERVED_KEYWORD)
 PG_KEYWORD("similar", SIMILAR, TYPE_FUNC_NAME_KEYWORD)
diff --git a/src/include/parser/parse_agg.h b/src/include/parser/parse_agg.h
index 3f55ec7..f0607fb 100644
--- a/src/include/parser/parse_agg.h
+++ b/src/include/parser/parse_agg.h
@@ -18,11 +18,16 @@
 extern void transformAggregateCall(ParseState *pstate, Aggref *agg,
 					   List *args, List *aggorder,
 					   bool agg_distinct);
+
+extern Node *transformGroupingExpr(ParseState *pstate, Grouping *g);
+
 extern void transformWindowFuncCall(ParseState *pstate, WindowFunc *wfunc,
 						WindowDef *windef);
 
 extern void parseCheckAggregates(ParseState *pstate, Query *qry);
 
+extern List *expand_grouping_sets(List *groupingSets, int limit);
+
 extern int	get_aggregate_argtypes(Aggref *aggref, Oid *inputTypes);
 
 extern Oid resolve_aggregate_transtype(Oid aggfuncid,
diff --git a/src/include/parser/parse_clause.h b/src/include/parser/parse_clause.h
index e9e7cdc..58d88f0 100644
--- a/src/include/parser/parse_clause.h
+++ b/src/include/parser/parse_clause.h
@@ -27,6 +27,7 @@ extern Node *transformWhereClause(ParseState *pstate, Node *clause,
 extern Node *transformLimitClause(ParseState *pstate, Node *clause,
 					 ParseExprKind exprKind, const char *constructName);
 extern List *transformGroupClause(ParseState *pstate, List *grouplist,
+								  List **groupingSets,
 					 List **targetlist, List *sortClause,
 					 ParseExprKind exprKind, bool useSQL99);
 extern List *transformSortClause(ParseState *pstate, List *orderlist,
diff --git a/src/include/utils/selfuncs.h b/src/include/utils/selfuncs.h
index 0f662ec..9d9c9b3 100644
--- a/src/include/utils/selfuncs.h
+++ b/src/include/utils/selfuncs.h
@@ -185,7 +185,7 @@ extern void mergejoinscansel(PlannerInfo *root, Node *clause,
 				 Selectivity *rightstart, Selectivity *rightend);
 
 extern double estimate_num_groups(PlannerInfo *root, List *groupExprs,
-					double input_rows);
+								  double input_rows, List **pgset);
 
 extern Selectivity estimate_hash_bucketsize(PlannerInfo *root, Node *hashkey,
 						 double nbuckets);
diff --git a/src/test/regress/expected/groupingsets.out b/src/test/regress/expected/groupingsets.out
new file mode 100644
index 0000000..2d121c7
--- /dev/null
+++ b/src/test/regress/expected/groupingsets.out
@@ -0,0 +1,361 @@
+--
+-- grouping sets
+--
+-- test data sources
+create temp view gstest1(a,b,v)
+  as values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),
+            (2,3,15),
+            (3,3,16),(3,4,17),
+            (4,1,18),(4,1,19);
+create temp table gstest2 (a integer, b integer, c integer, d integer,
+                           e integer, f integer, g integer, h integer);
+copy gstest2 from stdin;
+create temp table gstest_empty (a integer, b integer, v integer);
+create function gstest_data(v integer, out a integer, out b integer)
+  returns setof record
+  as $f$
+    begin
+      return query select v, i from generate_series(1,3) i;
+    end;
+  $f$ language plpgsql;
+-- basic functionality
+-- simple rollup with multiple plain aggregates, with and without ordering
+-- (and with ordering differing from grouping)
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b);
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 | 1 |        0 |  21 |     2 |  11
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 1 |   |        1 |  60 |     5 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 2 |   |        1 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+ 3 |   |        1 |  33 |     2 |  17
+ 4 | 1 |        0 |  37 |     2 |  19
+ 4 |   |        1 |  37 |     2 |  19
+   |   |        3 | 145 |    10 |  19
+(12 rows)
+
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by a,b;
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 | 1 |        0 |  21 |     2 |  11
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 1 |   |        1 |  60 |     5 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 2 |   |        1 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+ 3 |   |        1 |  33 |     2 |  17
+ 4 | 1 |        0 |  37 |     2 |  19
+ 4 |   |        1 |  37 |     2 |  19
+   |   |        3 | 145 |    10 |  19
+(12 rows)
+
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by b desc, a;
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 |   |        1 |  60 |     5 |  14
+ 2 |   |        1 |  15 |     1 |  15
+ 3 |   |        1 |  33 |     2 |  17
+ 4 |   |        1 |  37 |     2 |  19
+   |   |        3 | 145 |    10 |  19
+ 3 | 4 |        0 |  17 |     1 |  17
+ 1 | 3 |        0 |  14 |     1 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 1 |        0 |  21 |     2 |  11
+ 4 | 1 |        0 |  37 |     2 |  19
+(12 rows)
+
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by coalesce(a,0)+coalesce(b,0);
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+   |   |        3 | 145 |    10 |  19
+ 1 |   |        1 |  60 |     5 |  14
+ 1 | 1 |        0 |  21 |     2 |  11
+ 2 |   |        1 |  15 |     1 |  15
+ 3 |   |        1 |  33 |     2 |  17
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 4 |   |        1 |  37 |     2 |  19
+ 4 | 1 |        0 |  37 |     2 |  19
+ 2 | 3 |        0 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+(12 rows)
+
+-- various types of ordered aggs
+select a, b, grouping(a,b),
+       array_agg(v order by v),
+       string_agg(v::text, ':' order by v desc),
+       percentile_disc(0.5) within group (order by v),
+       rank(1,2,12) within group (order by a,b,v)
+  from gstest1 group by rollup (a,b) order by a,b;
+ a | b | grouping |            array_agg            |          string_agg           | percentile_disc | rank 
+---+---+----------+---------------------------------+-------------------------------+-----------------+------
+ 1 | 1 |        0 | {10,11}                         | 11:10                         |              10 |    3
+ 1 | 2 |        0 | {12,13}                         | 13:12                         |              12 |    1
+ 1 | 3 |        0 | {14}                            | 14                            |              14 |    1
+ 1 |   |        1 | {10,11,12,13,14}                | 14:13:12:11:10                |              12 |    3
+ 2 | 3 |        0 | {15}                            | 15                            |              15 |    1
+ 2 |   |        1 | {15}                            | 15                            |              15 |    1
+ 3 | 3 |        0 | {16}                            | 16                            |              16 |    1
+ 3 | 4 |        0 | {17}                            | 17                            |              17 |    1
+ 3 |   |        1 | {16,17}                         | 17:16                         |              16 |    1
+ 4 | 1 |        0 | {18,19}                         | 19:18                         |              18 |    1
+ 4 |   |        1 | {18,19}                         | 19:18                         |              18 |    1
+   |   |        3 | {10,11,12,13,14,15,16,17,18,19} | 19:18:17:16:15:14:13:12:11:10 |              14 |    3
+(12 rows)
+
+-- nesting with window functions
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by rollup (a,b) order by rsum, a, b;
+ a | b | sum | rsum 
+---+---+-----+------
+ 1 | 1 |   8 |    8
+ 1 | 2 |   2 |   10
+ 1 |   |  10 |   20
+ 2 | 2 |   2 |   22
+ 2 |   |   2 |   24
+   |   |  12 |   36
+(6 rows)
+
+-- empty input: first is 0 rows, second 1, third 3 etc.
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),a);
+ a | b | sum | count 
+---+---+-----+-------
+(0 rows)
+
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),());
+ a | b | sum | count 
+---+---+-----+-------
+   |   |     |     0
+(1 row)
+
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),(),(),());
+ a | b | sum | count 
+---+---+-----+-------
+   |   |     |     0
+   |   |     |     0
+   |   |     |     0
+(3 rows)
+
+select sum(v), count(*) from gstest_empty group by grouping sets ((),(),());
+ sum | count 
+-----+-------
+     |     0
+     |     0
+     |     0
+(3 rows)
+
+-- empty input with joins tests some important code paths
+select t1.a, t2.b, sum(t1.v), count(*) from gstest_empty t1, gstest_empty t2
+ group by grouping sets ((t1.a,t2.b),());
+ a | b | sum | count 
+---+---+-----+-------
+   |   |     |     0
+(1 row)
+
+-- simple joins, var resolution, GROUPING on join vars
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1, gstest2 t2
+ group by grouping sets ((t1.a, t2.b), ());
+ a | b | grouping | sum  | max 
+---+---+----------+------+-----
+ 1 | 1 |        0 |  420 |   1
+ 1 | 2 |        0 |  120 |   2
+ 2 | 1 |        0 |  105 |   1
+ 2 | 2 |        0 |   30 |   2
+ 3 | 1 |        0 |  231 |   1
+ 3 | 2 |        0 |   66 |   2
+ 4 | 1 |        0 |  259 |   1
+ 4 | 2 |        0 |   74 |   2
+   |   |        3 | 1305 |   2
+(9 rows)
+
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1 join gstest2 t2 on (t1.a=t2.a)
+ group by grouping sets ((t1.a, t2.b), ());
+ a | b | grouping | sum | max 
+---+---+----------+-----+-----
+ 1 | 1 |        0 | 420 |   1
+ 1 | 2 |        0 |  60 |   1
+ 2 | 2 |        0 |  15 |   2
+   |   |        3 | 495 |   2
+(4 rows)
+
+select a, b, grouping(a, b), sum(t1.v), max(t2.c)
+  from gstest1 t1 join gstest2 t2 using (a,b)
+ group by grouping sets ((a, b), ());
+ a | b | grouping | sum | max 
+---+---+----------+-----+-----
+ 1 | 1 |        0 | 147 |   2
+ 1 | 2 |        0 |  25 |   2
+   |   |        3 | 172 |   2
+(3 rows)
+
+-- simple rescan tests
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by rollup (a,b);
+ a | b | sum 
+---+---+-----
+ 1 | 1 |   1
+ 1 | 2 |   1
+ 1 | 3 |   1
+ 1 |   |   3
+ 2 | 1 |   2
+ 2 | 2 |   2
+ 2 | 3 |   2
+ 2 |   |   6
+   |   |   9
+(9 rows)
+
+select *
+  from (values (1),(2)) v(x),
+       lateral (select a, b, sum(v.x) from gstest_data(v.x) group by rollup (a,b)) s;
+ERROR:  aggregate functions are not allowed in FROM clause of their own query level
+LINE 3:        lateral (select a, b, sum(v.x) from gstest_data(v.x) ...
+                                     ^
+-- min max optimisation should still work with GROUP BY ()
+explain (costs off)
+  select min(unique1) from tenk1 GROUP BY ();
+                         QUERY PLAN                         
+------------------------------------------------------------
+ Result
+   InitPlan 1 (returns $0)
+     ->  Limit
+           ->  Index Only Scan using tenk1_unique1 on tenk1
+                 Index Cond: (unique1 IS NOT NULL)
+(5 rows)
+
+-- Views with GROUPING SET queries
+CREATE VIEW gstest_view AS select a, b, grouping(a,b), sum(c), count(*), max(c)
+  from gstest2 group by rollup ((a,b,c),(c,d));
+NOTICE:  view "gstest_view" will be a temporary view
+select pg_get_viewdef('gstest_view'::regclass, true);
+                                pg_get_viewdef                                 
+-------------------------------------------------------------------------------
+  SELECT gstest2.a,                                                           +
+     gstest2.b,                                                               +
+     GROUPING(gstest2.a, gstest2.b) AS "grouping",                            +
+     sum(gstest2.c) AS sum,                                                   +
+     count(*) AS count,                                                       +
+     max(gstest2.c) AS max                                                    +
+    FROM gstest2                                                              +
+   GROUP BY ROLLUP((gstest2.a, gstest2.b, gstest2.c), (gstest2.c, gstest2.d));
+(1 row)
+
+-- Nested queries with 3 or more levels of nesting
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+ grouping 
+----------
+        0
+        0
+        0
+(3 rows)
+
+select(select (select grouping(e,f) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+ grouping 
+----------
+        0
+        1
+        3
+(3 rows)
+
+select(select (select grouping(c) from (values (1)) v2(c) GROUP BY c) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+ grouping 
+----------
+        0
+        0
+        0
+(3 rows)
+
+-- Combinations of operations
+select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
+ a | b 
+---+---
+ 1 | 2
+ 2 | 3
+(2 rows)
+
+-- Agg level check. This query should error out.
+select (select grouping(a,b) from gstest2) from gstest2 group by a,b;
+ERROR:  Arguments to GROUPING must be grouping expressions of the associated query level
+LINE 1: select (select grouping(a,b) from gstest2) from gstest2 grou...
+                                ^
+--Nested queries
+select a, b, sum(c), count(*) from gstest2 group by grouping sets (rollup(a,b),a);
+ a | b | sum | count 
+---+---+-----+-------
+ 1 | 1 |   8 |     7
+ 1 | 2 |   2 |     1
+ 1 |   |  10 |     8
+ 1 |   |  10 |     8
+ 2 | 2 |   2 |     1
+ 2 |   |   2 |     1
+ 2 |   |   2 |     1
+   |   |  12 |     9
+(8 rows)
+
+-- HAVING queries
+select ten, sum(distinct four) from onek a
+group by grouping sets((ten,four),(ten))
+having exists (select 1 from onek b where sum(distinct a.four) = b.four);
+ ten | sum 
+-----+-----
+   0 |   0
+   0 |   2
+   0 |   2
+   1 |   1
+   1 |   3
+   2 |   0
+   2 |   2
+   2 |   2
+   3 |   1
+   3 |   3
+   4 |   0
+   4 |   2
+   4 |   2
+   5 |   1
+   5 |   3
+   6 |   0
+   6 |   2
+   6 |   2
+   7 |   1
+   7 |   3
+   8 |   0
+   8 |   2
+   8 |   2
+   9 |   1
+   9 |   3
+(25 rows)
+
+-- FILTER queries
+select ten, sum(distinct four) filter (where four::text ~ '123') from onek a
+group by rollup(ten);
+ ten | sum 
+-----+-----
+   0 |    
+   1 |    
+   2 |    
+   3 |    
+   4 |    
+   5 |    
+   6 |    
+   7 |    
+   8 |    
+   9 |    
+     |    
+(11 rows)
+
+-- end
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index c0416f4..b15119e 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -83,7 +83,7 @@ test: select_into select_distinct select_distinct_on select_implicit select_havi
 # ----------
 # Another group of parallel tests
 # ----------
-test: privileges security_label collate matview lock replica_identity
+test: privileges security_label collate matview lock replica_identity groupingsets
 
 # ----------
 # Another group of parallel tests
diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule
index 16a1905..5e64468 100644
--- a/src/test/regress/serial_schedule
+++ b/src/test/regress/serial_schedule
@@ -84,6 +84,7 @@ test: union
 test: case
 test: join
 test: aggregates
+test: groupingsets
 test: transactions
 ignore: random
 test: random
diff --git a/src/test/regress/sql/groupingsets.sql b/src/test/regress/sql/groupingsets.sql
new file mode 100644
index 0000000..bc571ff
--- /dev/null
+++ b/src/test/regress/sql/groupingsets.sql
@@ -0,0 +1,128 @@
+--
+-- grouping sets
+--
+
+-- test data sources
+
+create temp view gstest1(a,b,v)
+  as values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),
+            (2,3,15),
+            (3,3,16),(3,4,17),
+            (4,1,18),(4,1,19);
+
+create temp table gstest2 (a integer, b integer, c integer, d integer,
+                           e integer, f integer, g integer, h integer);
+copy gstest2 from stdin;
+1	1	1	1	1	1	1	1
+1	1	1	1	1	1	1	2
+1	1	1	1	1	1	2	2
+1	1	1	1	1	2	2	2
+1	1	1	1	2	2	2	2
+1	1	1	2	2	2	2	2
+1	1	2	2	2	2	2	2
+1	2	2	2	2	2	2	2
+2	2	2	2	2	2	2	2
+\.
+
+create temp table gstest_empty (a integer, b integer, v integer);
+
+create function gstest_data(v integer, out a integer, out b integer)
+  returns setof record
+  as $f$
+    begin
+      return query select v, i from generate_series(1,3) i;
+    end;
+  $f$ language plpgsql;
+
+-- basic functionality
+
+-- simple rollup with multiple plain aggregates, with and without ordering
+-- (and with ordering differing from grouping)
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b);
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by a,b;
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by b desc, a;
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by coalesce(a,0)+coalesce(b,0);
+
+-- various types of ordered aggs
+select a, b, grouping(a,b),
+       array_agg(v order by v),
+       string_agg(v::text, ':' order by v desc),
+       percentile_disc(0.5) within group (order by v),
+       rank(1,2,12) within group (order by a,b,v)
+  from gstest1 group by rollup (a,b) order by a,b;
+
+-- nesting with window functions
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by rollup (a,b) order by rsum, a, b;
+
+-- empty input: first query returns 0 rows, second 1, third 3, fourth 3
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),a);
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),());
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),(),(),());
+select sum(v), count(*) from gstest_empty group by grouping sets ((),(),());
+
+-- empty input with joins tests some important code paths
+select t1.a, t2.b, sum(t1.v), count(*) from gstest_empty t1, gstest_empty t2
+ group by grouping sets ((t1.a,t2.b),());
+
+-- simple joins, var resolution, GROUPING on join vars
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1, gstest2 t2
+ group by grouping sets ((t1.a, t2.b), ());
+
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1 join gstest2 t2 on (t1.a=t2.a)
+ group by grouping sets ((t1.a, t2.b), ());
+
+select a, b, grouping(a, b), sum(t1.v), max(t2.c)
+  from gstest1 t1 join gstest2 t2 using (a,b)
+ group by grouping sets ((a, b), ());
+
+-- simple rescan tests
+
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by rollup (a,b);
+
+select *
+  from (values (1),(2)) v(x),
+       lateral (select a, b, sum(v.x) from gstest_data(v.x) group by rollup (a,b)) s;
+
+-- min max optimisation should still work with GROUP BY ()
+explain (costs off)
+  select min(unique1) from tenk1 GROUP BY ();
+
+-- Views with GROUPING SET queries
+CREATE VIEW gstest_view AS select a, b, grouping(a,b), sum(c), count(*), max(c)
+  from gstest2 group by rollup ((a,b,c),(c,d));
+
+select pg_get_viewdef('gstest_view'::regclass, true);
+
+-- Nested queries with 3 or more levels of nesting
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+select(select (select grouping(e,f) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+select(select (select grouping(c) from (values (1)) v2(c) GROUP BY c) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+
+-- Combinations of operations
+select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
+
+-- Agg level check. This query should error out.
+select (select grouping(a,b) from gstest2) from gstest2 group by a,b;
+
+-- Nested grouping sets
+select a, b, sum(c), count(*) from gstest2 group by grouping sets (rollup(a,b),a);
+
+-- HAVING queries
+select ten, sum(distinct four) from onek a
+group by grouping sets((ten,four),(ten))
+having exists (select 1 from onek b where sum(distinct a.four) = b.four);
+
+-- FILTER queries
+select ten, sum(distinct four) filter (where four::text ~ '123') from onek a
+group by rollup(ten);
+
+-- end
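The tests above exercise the patch's single-pass strategy: any collection of grouping sets reducible to one rollup is evaluated by the sorted GroupAggregate in a single scan, keeping one transition state per grouping set (`pergroup[aggno + currentSet * numaggs]` in nodeAgg.c) and emitting subtotal rows at group boundaries. A rough Python sketch of that idea for `GROUP BY ROLLUP (a, b)` with `sum(v)` -- illustrative only; the function and variable names are not from the patch:

```python
from itertools import groupby

def rollup_sums(rows):
    """One-pass sum(v) for GROUP BY ROLLUP (a, b).

    rows must already be sorted by (a, b).  Output rows appear in the
    order a sorted-input aggregation naturally emits them: each (a, b)
    group, then the (a) subtotal, then the () grand total; NULLed-out
    grouping columns are represented as None.
    """
    out = []
    grand_total = 0
    for a, a_group in groupby(rows, key=lambda r: r[0]):
        a_total = 0
        for b, b_group in groupby(a_group, key=lambda r: r[1]):
            s = sum(v for _, _, v in b_group)
            out.append((a, b, s))          # grouping set (a, b)
            a_total += s
        out.append((a, None, a_total))     # grouping set (a)
        grand_total += a_total
    out.append((None, None, grand_total))  # grouping set ()
    return out

data = sorted([(1, 1, 10), (1, 1, 11), (1, 2, 12), (2, 3, 15)])
print(rollup_sums(data))
# [(1, 1, 21), (1, 2, 12), (1, None, 33), (2, 3, 15), (2, None, 15), (None, None, 48)]
```

The real executor generalizes this to arbitrary aggregates by advancing all per-set transition states on each input row rather than nesting loops, which is why only set lists reducible to one rollup fit this phase.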
Attachment: gsp2.patch (text/x-patch)
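The gsp2 patch below adds the AGG_CHAINED executor strategy (shown as "ChainAggregate" in EXPLAIN): each chained Agg node pulls every input tuple exactly once, passes it through unchanged to the node above, and appends its own finished group rows to a tuplestore owned by the chain head; once the head's own grouping is done, it drains the shared tuplestore (see agg_retrieve_chained and chain_tuplestore below). A toy Python model of this pull-through design, under the simplifying assumption that one sort order serves every stage (the real plan can interpose Sort nodes between stages); all names here are illustrative:

```python
def chain_agg(source, key_len, store):
    """A toy ChainAggregate stage: sum the last column, grouped by the
    first key_len columns of already-sorted input.  Each input tuple is
    pulled exactly once and passed through unchanged; finished group
    rows go into the shared 'store' (the chain head's tuplestore in the
    patch).  NULL-filling of ungrouped columns is omitted here."""
    cur_key, total = None, 0
    for row in source:
        key = row[:key_len]
        if cur_key is not None and key != cur_key:
            store.append(cur_key + (total,))   # group boundary: emit result
            total = 0
        cur_key = key
        total += row[-1]
        yield row                              # pull-through to the next stage
    if cur_key is not None:
        store.append(cur_key + (total,))       # final group at end of input

store = []
rows = [(1, 1, 10), (1, 1, 11), (1, 2, 12), (2, 3, 15)]  # sorted by (a, b)
# Two stacked stages sharing one scan: group by (a, b), then by (a).
pipeline = chain_agg(chain_agg(iter(rows), 2, store), 1, store)
for _ in pipeline:      # the topmost consumer drives the whole chain
    pass
print(store)
# [(1, 1, 21), (1, 2, 12), (1, 33), (2, 3, 15), (2, 15)]
```

Because the two groupings here share a common column prefix, the second stage can reuse the first stage's input order; when a rollup in the chain needs a different order, the planner code below inserts a Sort between the chained Agg nodes instead.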
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 479ae7e..aff1a92 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -960,6 +960,10 @@ ExplainNode(PlanState *planstate, List *ancestors,
 					pname = "GroupAggregate";
 					strategy = "Sorted";
 					break;
+				case AGG_CHAINED:
+					pname = "ChainAggregate";
+					strategy = "Chained";
+					break;
 				case AGG_HASHED:
 					pname = "HashAggregate";
 					strategy = "Hashed";
diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c
index ad8a3d0..0ac2e70 100644
--- a/src/backend/executor/execUtils.c
+++ b/src/backend/executor/execUtils.c
@@ -151,6 +151,7 @@ CreateExecutorState(void)
 	estate->es_epqTupleSet = NULL;
 	estate->es_epqScanDone = NULL;
 
+	estate->agg_chain_head = NULL;
 	/*
 	 * Return the executor state structure
 	 */
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index beecd36..48567b9 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -326,6 +326,7 @@ static void build_hash_table(AggState *aggstate);
 static AggHashEntry lookup_hash_entry(AggState *aggstate,
 				  TupleTableSlot *inputslot);
 static TupleTableSlot *agg_retrieve_direct(AggState *aggstate);
+static TupleTableSlot *agg_retrieve_chained(AggState *aggstate);
 static void agg_fill_hash_table(AggState *aggstate);
 static TupleTableSlot *agg_retrieve_hash_table(AggState *aggstate);
 static Datum GetAggInitVal(Datum textInitVal, Oid transtype);
@@ -1119,6 +1120,8 @@ lookup_hash_entry(AggState *aggstate, TupleTableSlot *inputslot)
 TupleTableSlot *
 ExecAgg(AggState *node)
 {
+	TupleTableSlot *result;
+
 	/*
 	 * Check to see if we're still projecting out tuples from a previous agg
 	 * tuple (because there is a function-returning-set in the projection
@@ -1126,7 +1129,6 @@ ExecAgg(AggState *node)
 	 */
 	if (node->ss.ps.ps_TupFromTlist)
 	{
-		TupleTableSlot *result;
 		ExprDoneCond isDone;
 
 		result = ExecProject(node->ss.ps.ps_ProjInfo, &isDone);
@@ -1137,22 +1139,45 @@ ExecAgg(AggState *node)
 	}
 
 	/*
-	 * Exit if nothing left to do.  (We must do the ps_TupFromTlist check
-	 * first, because in some cases agg_done gets set before we emit the final
-	 * aggregate tuple, and we have to finish running SRFs for it.)
+	 * We must do the ps_TupFromTlist check before testing agg_done, because
+	 * in some cases agg_done gets set before we emit the final aggregate
+	 * tuple, and we have to finish running SRFs for it.
 	 */
-	if (node->agg_done)
-		return NULL;
 
-	/* Dispatch based on strategy */
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+	if (!node->agg_done)
 	{
-		if (!node->table_filled)
-			agg_fill_hash_table(node);
-		return agg_retrieve_hash_table(node);
+		/* Dispatch based on strategy */
+		switch (((Agg *) node->ss.ps.plan)->aggstrategy)
+		{
+			case AGG_HASHED:
+				if (!node->table_filled)
+					agg_fill_hash_table(node);
+				result = agg_retrieve_hash_table(node);
+				break;
+			case AGG_CHAINED:
+				result = agg_retrieve_chained(node);
+				break;
+			default:
+				result = agg_retrieve_direct(node);
+				break;
+		}
+
+		if (!TupIsNull(result))
+			return result;
 	}
-	else
-		return agg_retrieve_direct(node);
+
+	if (!node->chain_done)
+	{
+		Assert(node->chain_tuplestore);
+		result = node->ss.ps.ps_ResultTupleSlot;
+		ExecClearTuple(result);
+		if (tuplestore_gettupleslot(node->chain_tuplestore,
+									true, false, result))
+			return result;
+		node->chain_done = true;
+	}
+
+	return NULL;
 }
 
 /*
@@ -1473,6 +1498,161 @@ agg_retrieve_direct(AggState *aggstate)
 	return NULL;
 }
 
+
+/*
+ * ExecAgg for chained case (pullthrough mode)
+ */
+static TupleTableSlot *
+agg_retrieve_chained(AggState *aggstate)
+{
+	Agg		   *node = (Agg *) aggstate->ss.ps.plan;
+	ExprContext *econtext = aggstate->ss.ps.ps_ExprContext;
+	ExprContext *tmpcontext = aggstate->tmpcontext;
+	Datum	   *aggvalues = econtext->ecxt_aggvalues;
+	bool	   *aggnulls = econtext->ecxt_aggnulls;
+	AggStatePerAgg peragg = aggstate->peragg;
+	AggStatePerGroup pergroup = aggstate->pergroup;
+	TupleTableSlot *outerslot;
+	TupleTableSlot *firstSlot = aggstate->ss.ss_ScanTupleSlot;
+	int			   aggno;
+	int            numGroupingSets = Max(aggstate->numsets, 1);
+	int            currentSet = 0;
+
+	/*
+	 * The invariants here are:
+	 *
+	 *  - when called, we've already projected every result that might have
+	 *    been generated by previous rows, and if this is not the first row,
+	 *    then the scan slot (firstSlot) holds the representative input row.
+	 *
+	 *  - we must pull the outer plan exactly once and return that tuple; if
+	 *    the outer plan is exhausted, we project whatever still needs
+	 *    projecting.
+	 */
+
+	outerslot = ExecProcNode(outerPlanState(aggstate));
+
+	/*
+	 * If the input is empty (no stored first row and no new row), we're done.
+	 */
+
+	if (TupIsNull(firstSlot) && TupIsNull(outerslot))
+	{
+		aggstate->agg_done = true;
+		return outerslot;
+	}
+
+	/*
+	 * See if we need to project anything. (We don't need to worry about
+	 * grouping sets of size 0; the planner doesn't give us those.)
+	 */
+
+	econtext->ecxt_outertuple = firstSlot;
+
+	while (!TupIsNull(firstSlot)
+		   && (TupIsNull(outerslot)
+			   || !execTuplesMatch(firstSlot,
+								   outerslot,
+								   aggstate->gset_lengths[currentSet],
+								   node->grpColIdx,
+								   aggstate->eqfunctions,
+								   tmpcontext->ecxt_per_tuple_memory)))
+	{
+		aggstate->current_set = aggstate->projected_set = currentSet;
+
+		for (aggno = 0; aggno < aggstate->numaggs; aggno++)
+		{
+			AggStatePerAgg peraggstate = &peragg[aggno];
+			AggStatePerGroup pergroupstate;
+
+			pergroupstate = &pergroup[aggno + (currentSet * (aggstate->numaggs))];
+
+			if (peraggstate->numSortCols > 0)
+			{
+				if (peraggstate->numInputs == 1)
+					process_ordered_aggregate_single(aggstate,
+													 peraggstate,
+													 pergroupstate);
+				else
+					process_ordered_aggregate_multi(aggstate,
+													peraggstate,
+													pergroupstate);
+			}
+
+			finalize_aggregate(aggstate, peraggstate, pergroupstate,
+							   &aggvalues[aggno], &aggnulls[aggno]);
+		}
+
+		econtext->grouped_cols = aggstate->grouped_cols[currentSet];
+
+		/*
+		 * Check the qual (HAVING clause); if the group does not match, ignore
+		 * it.
+		 */
+		if (ExecQual(aggstate->ss.ps.qual, econtext, false))
+		{
+			/*
+			 * Form a projection tuple using the aggregate results
+			 * and the representative input tuple.
+			 */
+			TupleTableSlot *result;
+			ExprDoneCond isDone;
+
+			do
+			{
+				result = ExecProject(aggstate->ss.ps.ps_ProjInfo, &isDone);
+
+				if (isDone != ExprEndResult)
+				{
+					tuplestore_puttupleslot(aggstate->chain_tuplestore,
+											result);
+				}
+			}
+			while (isDone == ExprMultipleResult);
+		}
+		else
+			InstrCountFiltered1(aggstate, 1);
+
+		ReScanExprContext(tmpcontext);
+		ReScanExprContext(econtext);
+		ReScanExprContext(aggstate->aggcontext[currentSet]);
+		MemoryContextDeleteChildren(aggstate->aggcontext[currentSet]->ecxt_per_tuple_memory);
+		if (++currentSet >= numGroupingSets)
+			break;
+	}
+
+	if (TupIsNull(outerslot))
+	{
+		aggstate->agg_done = true;
+		return NULL;
+	}
+
+	/*
+	 * If this is the first tuple, store it and initialize everything.
+	 * Otherwise, if we projected any groups above, store the new
+	 * representative tuple and re-initialize the projected aggregates.
+	 */
+
+	if (TupIsNull(firstSlot))
+	{
+		ExecCopySlot(firstSlot, outerslot);
+		initialize_aggregates(aggstate, peragg, pergroup, numGroupingSets);
+	}
+	else if (currentSet > 0)
+	{
+		ExecCopySlot(firstSlot, outerslot);
+		initialize_aggregates(aggstate, peragg, pergroup, currentSet);
+	}
+
+	tmpcontext->ecxt_outertuple = outerslot;
+
+	advance_aggregates(aggstate, pergroup);
+
+	/* Reset per-input-tuple context after each tuple */
+	ResetExprContext(tmpcontext);
+
+	return outerslot;
+}
+
 /*
  * ExecAgg for hashed case: phase 1, read input and build hash table
  */
@@ -1640,6 +1820,7 @@ AggState *
 ExecInitAgg(Agg *node, EState *estate, int eflags)
 {
 	AggState   *aggstate;
+	AggState   *save_chain_head = NULL;
 	AggStatePerAgg peragg;
 	Plan	   *outerPlan;
 	ExprContext *econtext;
@@ -1672,9 +1853,14 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	aggstate->curperagg = NULL;
 	aggstate->agg_done = false;
 	aggstate->input_done = false;
+	aggstate->chain_done = true;
 	aggstate->pergroup = NULL;
 	aggstate->grp_firstTuple = NULL;
 	aggstate->hashtable = NULL;
+	aggstate->chain_depth = 0;
+	aggstate->chain_rescan = 0;
+	aggstate->chain_head = NULL;
+	aggstate->chain_tuplestore = NULL;
 
 	if (node->groupingSets)
 	{
@@ -1734,6 +1920,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	ExecInitResultTupleSlot(estate, &aggstate->ss.ps);
 	aggstate->hashslot = ExecInitExtraTupleSlot(estate);
 
 	/*
 	 * initialize child expressions
 	 *
@@ -1743,12 +1930,40 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	 * that is true, we don't need to worry about evaluating the aggs in any
 	 * particular order.
 	 */
-	aggstate->ss.ps.targetlist = (List *)
-		ExecInitExpr((Expr *) node->plan.targetlist,
-					 (PlanState *) aggstate);
-	aggstate->ss.ps.qual = (List *)
-		ExecInitExpr((Expr *) node->plan.qual,
-					 (PlanState *) aggstate);
+	if (node->aggstrategy == AGG_CHAINED)
+	{
+		Assert(estate->agg_chain_head);
+
+		aggstate->chain_head = estate->agg_chain_head;
+		aggstate->chain_head->chain_depth++;
+
+		/*
+		 * Snarf the real targetlist and qual from the chain head node
+		 */
+		aggstate->ss.ps.targetlist = (List *)
+			ExecInitExpr((Expr *) aggstate->chain_head->ss.ps.plan->targetlist,
+						 (PlanState *) aggstate);
+		aggstate->ss.ps.qual = (List *)
+			ExecInitExpr((Expr *) aggstate->chain_head->ss.ps.plan->qual,
+						 (PlanState *) aggstate);
+	}
+	else
+	{
+		aggstate->ss.ps.targetlist = (List *)
+			ExecInitExpr((Expr *) node->plan.targetlist,
+						 (PlanState *) aggstate);
+		aggstate->ss.ps.qual = (List *)
+			ExecInitExpr((Expr *) node->plan.qual,
+						 (PlanState *) aggstate);
+	}
+
+	if (node->chain_head)
+	{
+		save_chain_head = estate->agg_chain_head;
+		estate->agg_chain_head = aggstate;
+		aggstate->chain_tuplestore = tuplestore_begin_heap(false, false, work_mem);
+		aggstate->chain_done = false;
+	}
 
 	/*
 	 * initialize child nodes
@@ -1761,6 +1976,11 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	outerPlan = outerPlan(node);
 	outerPlanState(aggstate) = ExecInitNode(outerPlan, estate, eflags);
 
+	if (node->chain_head)
+	{
+		estate->agg_chain_head = save_chain_head;
+	}
+
 	/*
 	 * initialize source tuple type.
 	 */
@@ -1769,8 +1989,35 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	/*
 	 * Initialize result tuple type and projection info.
 	 */
-	ExecAssignResultTypeFromTL(&aggstate->ss.ps);
-	ExecAssignProjectionInfo(&aggstate->ss.ps, NULL);
+	if (node->aggstrategy == AGG_CHAINED)
+	{
+		PlanState  *head_ps = &aggstate->chain_head->ss.ps;
+		bool		hasoid;
+
+		/*
+		 * We must calculate this the same way that the chain head does,
+		 * regardless of intermediate nodes, for consistency
+		 */
+		if (!ExecContextForcesOids(head_ps, &hasoid))
+			hasoid = false;
+
+		ExecAssignResultType(&aggstate->ss.ps, ExecGetScanType(&aggstate->ss));
+		ExecSetSlotDescriptor(aggstate->hashslot,
+							  ExecTypeFromTL(head_ps->plan->targetlist, hasoid));
+		aggstate->ss.ps.ps_ProjInfo =
+			ExecBuildProjectionInfo(aggstate->ss.ps.targetlist,
+									aggstate->ss.ps.ps_ExprContext,
+									aggstate->hashslot,
+									NULL);
+
+		aggstate->chain_tuplestore = aggstate->chain_head->chain_tuplestore;
+		Assert(aggstate->chain_tuplestore);
+	}
+	else
+	{
+		ExecAssignResultTypeFromTL(&aggstate->ss.ps);
+		ExecAssignProjectionInfo(&aggstate->ss.ps, NULL);
+	}
 
 	aggstate->ss.ps.ps_TupFromTlist = false;
 
@@ -2225,6 +2472,9 @@ ExecEndAgg(AggState *node)
 	for (i = 0; i < numGroupingSets; ++i)
 		ReScanExprContext(node->aggcontext[i]);
 
+	if (node->chain_tuplestore && !node->chain_head)
+		tuplestore_end(node->chain_tuplestore);
+
 	/*
 	 * We don't actually free any ExprContexts here (see comment in
 	 * ExecFreeExprContext), just unlinking the output one from the plan node
@@ -2339,11 +2589,54 @@ ExecReScanAgg(AggState *node)
 	}
 
 	/*
-	 * if chgParam of subnode is not null then plan will be re-scanned by
-	 * first ExecProcNode.
+	 * If we're in a chain, let the chain head know that we rescanned. (The
+	 * count is meaningless when the rescan is caused by chgParam, but the
+	 * chain head only consults it when rescanning explicitly with chgParam
+	 * empty.)
+	 */
+
+	if (aggnode->aggstrategy == AGG_CHAINED)
+		node->chain_head->chain_rescan++;
+
+	/*
+	 * If we're a chain head, we reset the tuplestore if parameters changed,
+	 * and let subplans repopulate it.
+	 *
+	 * If we're a chain head and the subplan parameters did NOT change, then
+	 * whether we need to reset the tuplestore depends on whether anything
+	 * (specifically the Sort nodes) protects the child ChainAggs from rescan.
+	 * Since this is hard to know in advance, we have the ChainAggs signal us
+	 * as to whether the reset is needed. (We assume that either all children
+	 * in the chain are protected or none are, since all Sort nodes in the
+	 * chain should have the same flags. If this changes, it would probably be
+	 * necessary to add a signalling param to force child rescan.)
 	 */
-	if (node->ss.ps.lefttree->chgParam == NULL)
+	if (aggnode->chain_head)
+	{
+		if (node->ss.ps.lefttree->chgParam)
+			tuplestore_clear(node->chain_tuplestore);
+		else
+		{
+			node->chain_rescan = 0;
+
+			ExecReScan(node->ss.ps.lefttree);
+
+			if (node->chain_rescan == node->chain_depth)
+				tuplestore_clear(node->chain_tuplestore);
+			else if (node->chain_rescan == 0)
+				tuplestore_rescan(node->chain_tuplestore);
+			else
+				elog(ERROR, "chained aggregate rescan depth error");
+		}
+		node->chain_done = false;
+	}
+	else if (node->ss.ps.lefttree->chgParam == NULL)
+	{
 		ExecReScan(node->ss.ps.lefttree);
+	}
 }
 
 
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 72dc86b..e5b5600 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -772,6 +772,7 @@ _copyAgg(const Agg *from)
 	CopyPlanFields((const Plan *) from, (Plan *) newnode);
 
 	COPY_SCALAR_FIELD(aggstrategy);
+	COPY_SCALAR_FIELD(chain_head);
 	COPY_SCALAR_FIELD(numCols);
 	if (from->numCols > 0)
 	{
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 6e4efb4..279d8b9 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -632,6 +632,7 @@ _outAgg(StringInfo str, const Agg *node)
 	_outPlanInfo(str, (const Plan *) node);
 
 	WRITE_ENUM_FIELD(aggstrategy, AggStrategy);
+	WRITE_BOOL_FIELD(chain_head);
 	WRITE_INT_FIELD(numCols);
 
 	appendStringInfoString(str, " :grpColIdx");
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index 1a47f0f..96ea58f 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -1016,6 +1016,7 @@ create_unique_plan(PlannerInfo *root, UniquePath *best_path)
 								 groupColIdx,
 								 groupOperators,
 								 NIL,
+								 false,
 								 numGroups,
 								 subplan);
 	}
@@ -4266,7 +4267,7 @@ Agg *
 make_agg(PlannerInfo *root, List *tlist, List *qual,
 		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
 		 int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators,
-		 List *groupingSets,
+		 List *groupingSets, bool chain_head,
 		 long numGroups,
 		 Plan *lefttree)
 {
@@ -4276,6 +4277,7 @@ make_agg(PlannerInfo *root, List *tlist, List *qual,
 	QualCost	qual_cost;
 
 	node->aggstrategy = aggstrategy;
+	node->chain_head = chain_head;
 	node->numCols = numGroupCols;
 	node->grpColIdx = grpColIdx;
 	node->grpOperators = grpOperators;
@@ -4320,8 +4322,21 @@ make_agg(PlannerInfo *root, List *tlist, List *qual,
 	}
 	add_tlist_costs_to_plan(root, plan, tlist);
 
-	plan->qual = qual;
-	plan->targetlist = tlist;
+	if (aggstrategy == AGG_CHAINED)
+	{
+		Assert(!chain_head);
+		plan->plan_rows = lefttree->plan_rows;
+		plan->plan_width = lefttree->plan_width;
+
+		/* the supplied tlist is ignored; this one is a dummy */
+		plan->targetlist = lefttree->targetlist;
+		plan->qual = NULL;
+	}
+	else
+	{
+		plan->qual = qual;
+		plan->targetlist = tlist;
+	}
 	plan->lefttree = lefttree;
 	plan->righttree = NULL;
 
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index f53cc0a..2fca072 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -67,6 +67,7 @@ typedef struct
 {
 	List	   *tlist;			/* preprocessed query targetlist */
 	List	   *activeWindows;	/* active windows, if any */
+	List	   *groupClause;	/* overrides parse->groupClause */
 } standard_qp_extra;
 
 /* Local functions */
@@ -1180,11 +1181,6 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		List	   *sub_tlist;
 		AttrNumber *groupColIdx = NULL;
 		bool		need_tlist_eval = true;
-		standard_qp_extra qp_extra;
-		RelOptInfo *final_rel;
-		Path	   *cheapest_path;
-		Path	   *sorted_path;
-		Path	   *best_path;
 		long		numGroups = 0;
 		AggClauseCosts agg_costs;
 		int			numGroupCols;
@@ -1194,7 +1190,14 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		WindowFuncLists *wflists = NULL;
 		List	   *activeWindows = NIL;
 		int			maxref = 0;
-		int		   *refmap = NULL;
+		List	   *refmaps = NIL;
+		List	   *rollup_lists = NIL;
+		List	   *rollup_groupclauses = NIL;
+		standard_qp_extra qp_extra;
+		RelOptInfo *final_rel;
+		Path	   *cheapest_path;
+		Path	   *sorted_path;
+		Path	   *best_path;
 
 		MemSet(&agg_costs, 0, sizeof(AggClauseCosts));
 
@@ -1205,33 +1208,9 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		if (parse->groupingSets)
 			parse->groupingSets = expand_grouping_sets(parse->groupingSets, -1);
 
-		if (parse->groupingSets)
+		if (parse->groupClause)
 		{
 			ListCell   *lc;
-			ListCell   *lc2;
-			int			ref = 0;
-			List	   *remaining_sets = NIL;
-			List	   *usable_sets = extract_rollup_sets(parse->groupingSets,
-														  parse->sortClause,
-														  &remaining_sets);
-
-			/*
-			 * TODO - if the grouping set list can't be handled as one rollup...
-			 */
-
-			if (remaining_sets != NIL)
-				elog(ERROR, "not implemented yet");
-
-			parse->groupingSets = usable_sets;
-
-			if (parse->groupClause)
-				preprocess_groupclause(root, linitial(parse->groupingSets));
-
-			/*
-			 * Now that we've pinned down an order for the groupClause for this
-			 * list of grouping sets, remap the entries in the grouping sets
-			 * from sortgrouprefs to plain indices into the groupClause.
-			 */
 
 			foreach(lc, parse->groupClause)
 			{
@@ -1239,29 +1218,61 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 				if (gc->tleSortGroupRef > maxref)
 					maxref = gc->tleSortGroupRef;
 			}
+		}
 
-			refmap = palloc0(sizeof(int) * (maxref + 1));
+		if (parse->groupingSets)
+		{
+			ListCell   *lc;
+			ListCell   *lc2;
+			List	   *sets = parse->groupingSets;
 
-			foreach(lc, parse->groupClause)
+			do
 			{
-				SortGroupClause *gc = lfirst(lc);
-				refmap[gc->tleSortGroupRef] = ++ref;
-			}
+				List   *remaining_sets = NIL;
+				List   *usable_sets = extract_rollup_sets(sets,
+														  parse->sortClause,
+														  &remaining_sets);
+				List   *groupclause = preprocess_groupclause(root, linitial(usable_sets));
+				int		ref = 0;
+				int	   *refmap;
 
-			foreach(lc, usable_sets)
-			{
-				foreach(lc2, (List *) lfirst(lc))
+				/*
+				 * Now that we've pinned down an order for the groupClause for this
+				 * list of grouping sets, remap the entries in the grouping sets
+				 * from sortgrouprefs to plain indices into the groupClause.
+				 */
+
+				refmap = palloc0(sizeof(int) * (maxref + 1));
+
+				foreach(lc, groupclause)
 				{
-					Assert(refmap[lfirst_int(lc2)] > 0);
-					lfirst_int(lc2) = refmap[lfirst_int(lc2)] - 1;
+					SortGroupClause *gc = lfirst(lc);
+					refmap[gc->tleSortGroupRef] = ++ref;
 				}
+
+				foreach(lc, usable_sets)
+				{
+					foreach(lc2, (List *) lfirst(lc))
+					{
+						Assert(refmap[lfirst_int(lc2)] > 0);
+						lfirst_int(lc2) = refmap[lfirst_int(lc2)] - 1;
+					}
+				}
+
+				rollup_lists = lcons(usable_sets, rollup_lists);
+				rollup_groupclauses = lcons(groupclause, rollup_groupclauses);
+				refmaps = lcons(refmap, refmaps);
+
+				sets = remaining_sets;
 			}
+			while (sets);
 		}
 		else
 		{
 			/* Preprocess GROUP BY clause, if any */
 			if (parse->groupClause)
-				preprocess_groupclause(root, NIL);
+				parse->groupClause = preprocess_groupclause(root, NIL);
+			rollup_groupclauses = list_make1(parse->groupClause);
 		}
 
 		numGroupCols = list_length(parse->groupClause);
@@ -1325,9 +1336,6 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 			preprocess_minmax_aggregates(root, tlist);
 		}
 
-		if (refmap)
-			pfree(refmap);
-
 		/* Make tuple_fraction accessible to lower-level routines */
 		root->tuple_fraction = tuple_fraction;
 
@@ -1350,6 +1358,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		/* Set up data needed by standard_qp_callback */
 		qp_extra.tlist = tlist;
 		qp_extra.activeWindows = activeWindows;
+		qp_extra.groupClause = linitial(rollup_groupclauses);
 
 		/*
 		 * Generate the best unsorted and presorted paths for this Query (but
@@ -1376,6 +1385,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		 * to describe the fraction of the underlying un-aggregated tuples
 		 * that will be fetched.
 		 */
 		dNumGroups = 1;			/* in case not grouping */
 
 		if (parse->groupClause)
@@ -1411,6 +1421,9 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 			if (tuple_fraction >= 1.0)
 				tuple_fraction /= dNumGroups;
 
+			if (list_length(rollup_lists) > 1)
+				tuple_fraction = 0.0;
+
 			/*
 			 * If both GROUP BY and ORDER BY are specified, we will need two
 			 * levels of sort --- and, therefore, certainly need to read all
@@ -1434,6 +1447,8 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 			 * set to 1).
 			 */
 			tuple_fraction = 0.0;
+			if (parse->groupingSets)
+				dNumGroups = list_length(parse->groupingSets);
 		}
 		else if (parse->distinctClause)
 		{
@@ -1614,7 +1629,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 
 			/* Detect if we'll need an explicit sort for grouping */
 			if (parse->groupClause && !use_hashed_grouping &&
-			  !pathkeys_contained_in(root->group_pathkeys, current_pathkeys))
+				!pathkeys_contained_in(root->group_pathkeys, current_pathkeys))
 			{
 				need_sort_for_grouping = true;
 
@@ -1689,8 +1704,9 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 												&agg_costs,
 												numGroupCols,
 												groupColIdx,
-									extract_grouping_ops(parse->groupClause),
+												extract_grouping_ops(parse->groupClause),
 												NIL,
+												false,
 												numGroups,
 												result_plan);
 				/* Hashed aggregation produces randomly-ordered results */
@@ -1698,45 +1714,94 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 			}
 			else if (parse->hasAggs || (parse->groupingSets && parse->groupClause))
 			{
-				/* Plain aggregate plan --- sort if needed */
-				AggStrategy aggstrategy;
+				bool		is_chained = false;
+
+				/*
+				 * If we need multiple grouping nodes, start stacking them up;
+				 * all except the last are chained.
+				 */
 
-				if (parse->groupClause)
+				do
 				{
-					if (need_sort_for_grouping)
+					List	   *groupClause = linitial(rollup_groupclauses);
+					List	   *gsets = rollup_lists ? linitial(rollup_lists) : NIL;
+					int		   *refmap = refmaps ? linitial(refmaps) : NULL;
+					AttrNumber *new_grpColIdx = groupColIdx;
+					ListCell   *lc;
+					int			i;
+					AggStrategy aggstrategy = AGG_CHAINED;
+
+					if (groupClause)
+					{
+						/* need to remap groupColIdx */
+
+						if (gsets)
+						{
+							Assert(refmap);
+
+							new_grpColIdx = palloc0(sizeof(AttrNumber) * list_length(linitial(gsets)));
+
+							i = 0;
+							foreach(lc, parse->groupClause)
+							{
+								int j = refmap[((SortGroupClause *)lfirst(lc))->tleSortGroupRef];
+								if (j > 0)
+									new_grpColIdx[j - 1] = groupColIdx[i];
+								++i;
+							}
+						}
+
+						if (need_sort_for_grouping)
+						{
+							result_plan = (Plan *)
+								make_sort_from_groupcols(root,
+														 groupClause,
+														 new_grpColIdx,
+														 result_plan);
+						}
+						else
+							need_sort_for_grouping = true;
+
+						if (list_length(rollup_groupclauses) == 1)
+						{
+							aggstrategy = AGG_SORTED;
+							if (!is_chained)
+								current_pathkeys = root->group_pathkeys;
+						}
+						else
+							current_pathkeys = NIL;
+					}
+					else
 					{
-						result_plan = (Plan *)
-							make_sort_from_groupcols(root,
-													 parse->groupClause,
-													 groupColIdx,
-													 result_plan);
-						current_pathkeys = root->group_pathkeys;
+						aggstrategy = AGG_PLAIN;
+						current_pathkeys = NIL;
 					}
-					aggstrategy = AGG_SORTED;
 
-					/*
-					 * The AGG node will not change the sort ordering of its
-					 * groups, so current_pathkeys describes the result too.
-					 */
+					result_plan = (Plan *) make_agg(root,
+													tlist,
+													(List *) parse->havingQual,
+													aggstrategy,
+													&agg_costs,
+													gsets ? list_length(linitial(gsets)) : numGroupCols,
+													new_grpColIdx,
+													extract_grouping_ops(groupClause),
+													gsets,
+													is_chained && (aggstrategy != AGG_CHAINED),
+													numGroups,
+													result_plan);
+
+					is_chained = true;
+
+					if (refmap)
+						pfree(refmap);
+					if (rollup_lists)
+						rollup_lists = list_delete_first(rollup_lists);
+					if (refmaps)
+						refmaps = list_delete_first(refmaps);
+
+					rollup_groupclauses = list_delete_first(rollup_groupclauses);
 				}
-				else
-				{
-					aggstrategy = AGG_PLAIN;
-					/* Result will have no sort order */
-					current_pathkeys = NIL;
-				}
-
-				result_plan = (Plan *) make_agg(root,
-												tlist,
-												(List *) parse->havingQual,
-												aggstrategy,
-												&agg_costs,
-												numGroupCols,
-												groupColIdx,
-									extract_grouping_ops(parse->groupClause),
-												parse->groupingSets,
-												numGroups,
-												result_plan);
+				while (rollup_groupclauses);
 			}
 			else if (parse->groupClause)
 			{
@@ -2031,6 +2096,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 													result_plan->targetlist),
 								 extract_grouping_ops(parse->distinctClause),
 											NIL,
+											false,
 											numDistinctRows,
 											result_plan);
 			/* Hashed aggregation produces randomly-ordered results */
@@ -2864,11 +2930,11 @@ standard_qp_callback(PlannerInfo *root, void *extra)
 	 * sortClause is certainly sort-able, but GROUP BY and DISTINCT might not
 	 * be, in which case we just leave their pathkeys empty.
 	 */
-	if (parse->groupClause &&
-		grouping_is_sortable(parse->groupClause))
+	if (qp_extra->groupClause &&
+		grouping_is_sortable(qp_extra->groupClause))
 		root->group_pathkeys =
 			make_pathkeys_for_sortclauses(root,
-										  parse->groupClause,
+										  qp_extra->groupClause,
 										  tlist);
 	else
 		root->group_pathkeys = NIL;
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index 346c84d..2be5f29 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -655,8 +655,16 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
 			}
 			break;
 		case T_Agg:
-			set_upper_references(root, plan, rtoffset);
-			set_group_vars(root, (Agg *) plan);
+			if (((Agg *) plan)->aggstrategy == AGG_CHAINED)
+			{
+				/* chained agg does not evaluate tlist */
+				set_dummy_tlist_references(plan, rtoffset);
+			}
+			else
+			{
+				set_upper_references(root, plan, rtoffset);
+				set_group_vars(root, (Agg *) plan);
+			}
 			break;
 		case T_Group:
 			set_upper_references(root, plan, rtoffset);
@@ -1288,21 +1296,30 @@ fix_scan_expr_walker(Node *node, fix_scan_expr_context *context)
  *    Modify any Var references in the target list of a non-trivial
  *    (i.e. contains grouping sets) Agg node to use GroupedVar instead,
  *    which will conditionally replace them with nulls at runtime.
+ *    Also fill in the cols list of any GROUPING() node.
  */
 static void
 set_group_vars(PlannerInfo *root, Agg *agg)
 {
 	set_group_vars_context context;
-	int i;
-	Bitmapset *cols = NULL;
+	AttrNumber *groupColIdx = root->groupColIdx;
+	int			numCols = list_length(root->parse->groupClause);
+	int 		i;
+	Bitmapset  *cols = NULL;
 
 	if (!agg->groupingSets)
 		return;
 
+	if (!groupColIdx)
+	{
+		Assert(numCols == agg->numCols);
+		groupColIdx = agg->grpColIdx;
+	}
+
 	context.root = root;
 
-	for (i = 0; i < agg->numCols; ++i)
-		cols = bms_add_member(cols, agg->grpColIdx[i]);
+	for (i = 0; i < numCols; ++i)
+		cols = bms_add_member(cols, groupColIdx[i]);
 
 	context.groupedcols = cols;
 
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index e0a2ca7..e5befe3 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -79,7 +79,8 @@ static Node *process_sublinks_mutator(Node *node,
 static Bitmapset *finalize_plan(PlannerInfo *root,
 			  Plan *plan,
 			  Bitmapset *valid_params,
-			  Bitmapset *scan_params);
+			  Bitmapset *scan_params,
+			  Agg *agg_chain_head);
 static bool finalize_primnode(Node *node, finalize_primnode_context *context);
 
 
@@ -2091,7 +2092,7 @@ SS_finalize_plan(PlannerInfo *root, Plan *plan, bool attach_initplans)
 	/*
 	 * Now recurse through plan tree.
 	 */
-	(void) finalize_plan(root, plan, valid_params, NULL);
+	(void) finalize_plan(root, plan, valid_params, NULL, NULL);
 
 	bms_free(valid_params);
 
@@ -2142,7 +2143,7 @@ SS_finalize_plan(PlannerInfo *root, Plan *plan, bool attach_initplans)
  */
 static Bitmapset *
 finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
-			  Bitmapset *scan_params)
+			  Bitmapset *scan_params, Agg *agg_chain_head)
 {
 	finalize_primnode_context context;
 	int			locally_added_param;
@@ -2351,7 +2352,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2367,7 +2369,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2383,7 +2386,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2399,7 +2403,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2415,7 +2420,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2482,8 +2488,30 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 							  &context);
 			break;
 
-		case T_Hash:
 		case T_Agg:
+			{
+				Agg	   *agg = (Agg *) plan;
+
+				if (agg->aggstrategy == AGG_CHAINED)
+				{
+					Assert(agg_chain_head);
+
+					/*
+					 * our real tlist and qual are the ones in the chain head,
+					 * not the local ones which are dummy for passthrough.
+					 * Fortunately we can call finalize_primnode more than
+					 * once.
+					 */
+
+					finalize_primnode((Node *) agg_chain_head->plan.targetlist, &context);
+					finalize_primnode((Node *) agg_chain_head->plan.qual, &context);
+				}
+				else if (agg->chain_head)
+					agg_chain_head = agg;
+			}
+			break;
+
+		case T_Hash:
 		case T_Material:
 		case T_Sort:
 		case T_Unique:
@@ -2500,7 +2528,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 	child_params = finalize_plan(root,
 								 plan->lefttree,
 								 valid_params,
-								 scan_params);
+								 scan_params,
+								 agg_chain_head);
 	context.paramids = bms_add_members(context.paramids, child_params);
 
 	if (nestloop_params)
@@ -2509,7 +2538,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 		child_params = finalize_plan(root,
 									 plan->righttree,
 									 bms_union(nestloop_params, valid_params),
-									 scan_params);
+									 scan_params,
+									 agg_chain_head);
 		/* ... and they don't count as parameters used at my level */
 		child_params = bms_difference(child_params, nestloop_params);
 		bms_free(nestloop_params);
@@ -2520,7 +2550,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 		child_params = finalize_plan(root,
 									 plan->righttree,
 									 valid_params,
-									 scan_params);
+									 scan_params,
+									 agg_chain_head);
 	}
 	context.paramids = bms_add_members(context.paramids, child_params);
 
diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c
index 3c71d7f..ce35226 100644
--- a/src/backend/optimizer/prep/prepunion.c
+++ b/src/backend/optimizer/prep/prepunion.c
@@ -774,6 +774,7 @@ make_union_unique(SetOperationStmt *op, Plan *plan,
 													   plan->targetlist),
 								 extract_grouping_ops(groupList),
 								 NIL,
+								 false,
 								 numGroups,
 								 plan);
 		/* Hashed aggregation produces randomly-ordered results */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index ee1fe74..cbc7b0c 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -409,6 +409,11 @@ typedef struct EState
 	HeapTuple  *es_epqTuple;	/* array of EPQ substitute tuples */
 	bool	   *es_epqTupleSet; /* true if EPQ tuple is provided */
 	bool	   *es_epqScanDone; /* true if EPQ tuple has been fetched */
+
+	/*
+	 * This is for linking chained aggregate nodes
+	 */
+	struct AggState	   *agg_chain_head;
 } EState;
 
 
@@ -1729,6 +1734,7 @@ typedef struct AggState
 	AggStatePerAgg curperagg;	/* identifies currently active aggregate */
 	bool        input_done;     /* indicates end of input */
 	bool		agg_done;		/* indicates completion of Agg scan */
+	bool		chain_done;		/* indicates completion of chained fetch */
 	int			projected_set;	/* The last projected grouping set */
 	int			current_set;	/* The current grouping set being evaluated */
 	Bitmapset **grouped_cols;   /* column groupings for rollup */
@@ -1742,6 +1748,10 @@ typedef struct AggState
 	List	   *hash_needed;	/* list of columns needed in hash table */
 	bool		table_filled;	/* hash table filled yet? */
 	TupleHashIterator hashiter; /* for iterating through hash table */
+	int			chain_depth;	/* number of chained child nodes */
+	int			chain_rescan;	/* rescan indicator */
+	struct AggState	*chain_head;
+	Tuplestorestate *chain_tuplestore;
 } AggState;
 
 /* ----------------
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index 077ae9f..d558ff8 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -620,6 +620,7 @@ typedef enum AggStrategy
 {
 	AGG_PLAIN,					/* simple agg across all input rows */
 	AGG_SORTED,					/* grouped agg, input must be sorted */
+	AGG_CHAINED,				/* chained agg, input must be sorted */
 	AGG_HASHED					/* grouped agg, use internal hashtable */
 } AggStrategy;
 
@@ -627,6 +628,7 @@ typedef struct Agg
 {
 	Plan		plan;
 	AggStrategy aggstrategy;
+	bool		chain_head;
 	int			numCols;		/* number of grouping columns */
 	AttrNumber *grpColIdx;		/* their indexes in the target list */
 	Oid		   *grpOperators;	/* equality operators to compare with */
diff --git a/src/include/optimizer/planmain.h b/src/include/optimizer/planmain.h
index 64f3aa3..20b7493 100644
--- a/src/include/optimizer/planmain.h
+++ b/src/include/optimizer/planmain.h
@@ -59,6 +59,7 @@ extern Agg *make_agg(PlannerInfo *root, List *tlist, List *qual,
 		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
 		 int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators,
 		 List *groupingSets,
+		 bool chain_head,
 		 long numGroups,
 		 Plan *lefttree);
 extern WindowAgg *make_windowagg(PlannerInfo *root, List *tlist,
diff --git a/src/test/regress/expected/groupingsets.out b/src/test/regress/expected/groupingsets.out
index 2d121c7..d426018 100644
--- a/src/test/regress/expected/groupingsets.out
+++ b/src/test/regress/expected/groupingsets.out
@@ -281,6 +281,29 @@ select(select (select grouping(c) from (values (1)) v2(c) GROUP BY c) from (valu
 (3 rows)
 
 -- Combinations of operations
+select a, b, c, d from gstest2 group by rollup(a,b),grouping sets(c,d);
+ a | b | c | d 
+---+---+---+---
+ 1 | 1 | 1 |  
+ 1 |   | 1 |  
+   |   | 1 |  
+ 1 | 1 | 2 |  
+ 1 | 2 | 2 |  
+ 1 |   | 2 |  
+ 2 | 2 | 2 |  
+ 2 |   | 2 |  
+   |   | 2 |  
+ 1 | 1 |   | 1
+ 1 |   |   | 1
+   |   |   | 1
+ 1 | 1 |   | 2
+ 1 | 2 |   | 2
+ 1 |   |   | 2
+ 2 | 2 |   | 2
+ 2 |   |   | 2
+   |   |   | 2
+(18 rows)
+
 select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
  a | b 
 ---+---
@@ -288,6 +311,99 @@ select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
  2 | 3
 (2 rows)
 
+-- Tests for chained aggregates
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by grouping sets ((a,b),(a+1,b+1),(a+2,b+2));
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 | 1 |        0 |  21 |     2 |  11
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+ 4 | 1 |        0 |  37 |     2 |  19
+   |   |        3 |  21 |     2 |  11
+   |   |        3 |  25 |     2 |  13
+   |   |        3 |  14 |     1 |  14
+   |   |        3 |  15 |     1 |  15
+   |   |        3 |  16 |     1 |  16
+   |   |        3 |  17 |     1 |  17
+   |   |        3 |  37 |     2 |  19
+   |   |        3 |  21 |     2 |  11
+   |   |        3 |  25 |     2 |  13
+   |   |        3 |  14 |     1 |  14
+   |   |        3 |  15 |     1 |  15
+   |   |        3 |  16 |     1 |  16
+   |   |        3 |  17 |     1 |  17
+   |   |        3 |  37 |     2 |  19
+(21 rows)
+
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP((e+1),(f+1));
+ grouping 
+----------
+        0
+        0
+        0
+(3 rows)
+
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY CUBE((e+1));
+ grouping 
+----------
+        0
+        0
+(2 rows)
+
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by cube (a,b) order by rsum, a, b;
+ a | b | sum | rsum 
+---+---+-----+------
+ 1 | 1 |   8 |    8
+ 1 | 2 |   2 |   10
+ 1 |   |  10 |   20
+ 2 | 2 |   2 |   22
+ 2 |   |   2 |   24
+   | 1 |   8 |   32
+   | 2 |   4 |   36
+   |   |  12 |   48
+(8 rows)
+
+select a, b, sum(c) from (values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),(2,3,15),(3,3,16),(3,4,17),(4,1,18),(4,1,19)) v(a,b,c) group by rollup (a,b);
+ a | b | sum 
+---+---+-----
+ 1 | 1 |  21
+ 1 | 2 |  25
+ 1 | 3 |  14
+ 1 |   |  60
+ 2 | 3 |  15
+ 2 |   |  15
+ 3 | 3 |  16
+ 3 | 4 |  17
+ 3 |   |  33
+ 4 | 1 |  37
+ 4 |   |  37
+   |   | 145
+(12 rows)
+
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by cube (a,b);
+ a | b | sum 
+---+---+-----
+ 1 | 1 |   1
+ 1 | 2 |   1
+ 1 | 3 |   1
+ 1 |   |   3
+ 2 | 1 |   2
+ 2 | 2 |   2
+ 2 | 3 |   2
+ 2 |   |   6
+   |   |   9
+   | 1 |   3
+   | 2 |   3
+   | 3 |   3
+(12 rows)
+
 -- Agg level check. This query should error out.
 select (select grouping(a,b) from gstest2) from gstest2 group by a,b;
 ERROR:  Arguments to GROUPING must be grouping expressions of the associated query level
@@ -358,4 +474,87 @@ group by rollup(ten);
      |    
 (11 rows)
 
+-- More rescan tests
+select * from (values (1),(2)) v(a) left join lateral (select v.a, four, ten, count(*) from onek group by cube(four,ten)) s on true;
+ a | a | four | ten | count 
+---+---+------+-----+-------
+ 1 | 1 |    0 |   0 |    50
+ 1 | 1 |    0 |   2 |    50
+ 1 | 1 |    0 |   4 |    50
+ 1 | 1 |    0 |   6 |    50
+ 1 | 1 |    0 |   8 |    50
+ 1 | 1 |    0 |     |   250
+ 1 | 1 |    1 |   1 |    50
+ 1 | 1 |    1 |   3 |    50
+ 1 | 1 |    1 |   5 |    50
+ 1 | 1 |    1 |   7 |    50
+ 1 | 1 |    1 |   9 |    50
+ 1 | 1 |    1 |     |   250
+ 1 | 1 |    2 |   0 |    50
+ 1 | 1 |    2 |   2 |    50
+ 1 | 1 |    2 |   4 |    50
+ 1 | 1 |    2 |   6 |    50
+ 1 | 1 |    2 |   8 |    50
+ 1 | 1 |    2 |     |   250
+ 1 | 1 |    3 |   1 |    50
+ 1 | 1 |    3 |   3 |    50
+ 1 | 1 |    3 |   5 |    50
+ 1 | 1 |    3 |   7 |    50
+ 1 | 1 |    3 |   9 |    50
+ 1 | 1 |    3 |     |   250
+ 1 | 1 |      |     |  1000
+ 1 | 1 |      |   0 |   100
+ 1 | 1 |      |   1 |   100
+ 1 | 1 |      |   2 |   100
+ 1 | 1 |      |   3 |   100
+ 1 | 1 |      |   4 |   100
+ 1 | 1 |      |   5 |   100
+ 1 | 1 |      |   6 |   100
+ 1 | 1 |      |   7 |   100
+ 1 | 1 |      |   8 |   100
+ 1 | 1 |      |   9 |   100
+ 2 | 2 |    0 |   0 |    50
+ 2 | 2 |    0 |   2 |    50
+ 2 | 2 |    0 |   4 |    50
+ 2 | 2 |    0 |   6 |    50
+ 2 | 2 |    0 |   8 |    50
+ 2 | 2 |    0 |     |   250
+ 2 | 2 |    1 |   1 |    50
+ 2 | 2 |    1 |   3 |    50
+ 2 | 2 |    1 |   5 |    50
+ 2 | 2 |    1 |   7 |    50
+ 2 | 2 |    1 |   9 |    50
+ 2 | 2 |    1 |     |   250
+ 2 | 2 |    2 |   0 |    50
+ 2 | 2 |    2 |   2 |    50
+ 2 | 2 |    2 |   4 |    50
+ 2 | 2 |    2 |   6 |    50
+ 2 | 2 |    2 |   8 |    50
+ 2 | 2 |    2 |     |   250
+ 2 | 2 |    3 |   1 |    50
+ 2 | 2 |    3 |   3 |    50
+ 2 | 2 |    3 |   5 |    50
+ 2 | 2 |    3 |   7 |    50
+ 2 | 2 |    3 |   9 |    50
+ 2 | 2 |    3 |     |   250
+ 2 | 2 |      |     |  1000
+ 2 | 2 |      |   0 |   100
+ 2 | 2 |      |   1 |   100
+ 2 | 2 |      |   2 |   100
+ 2 | 2 |      |   3 |   100
+ 2 | 2 |      |   4 |   100
+ 2 | 2 |      |   5 |   100
+ 2 | 2 |      |   6 |   100
+ 2 | 2 |      |   7 |   100
+ 2 | 2 |      |   8 |   100
+ 2 | 2 |      |   9 |   100
+(70 rows)
+
+select array(select row(v.a,s1.*) from (select two,four, count(*) from onek group by cube(two,four)) s1) from (values (1),(2)) v(a);
+                                                                        array                                                                         
+------------------------------------------------------------------------------------------------------------------------------------------------------
+ {"(1,0,0,250)","(1,0,2,250)","(1,0,,500)","(1,1,1,250)","(1,1,3,250)","(1,1,,500)","(1,,,1000)","(1,,0,250)","(1,,1,250)","(1,,2,250)","(1,,3,250)"}
+ {"(2,0,0,250)","(2,0,2,250)","(2,0,,500)","(2,1,1,250)","(2,1,3,250)","(2,1,,500)","(2,,,1000)","(2,,0,250)","(2,,1,250)","(2,,2,250)","(2,,3,250)"}
+(2 rows)
+
 -- end
diff --git a/src/test/regress/sql/groupingsets.sql b/src/test/regress/sql/groupingsets.sql
index bc571ff..5404cb6 100644
--- a/src/test/regress/sql/groupingsets.sql
+++ b/src/test/regress/sql/groupingsets.sql
@@ -108,8 +108,22 @@ select(select (select grouping(e,f) from (values (1)) v2(c)) from (values (1,2))
 select(select (select grouping(c) from (values (1)) v2(c) GROUP BY c) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
 
 -- Combinations of operations
+select a, b, c, d from gstest2 group by rollup(a,b),grouping sets(c,d);
 select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
 
+-- Tests for chained aggregates
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by grouping sets ((a,b),(a+1,b+1),(a+2,b+2));
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP((e+1),(f+1));
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY CUBE((e+1));
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by cube (a,b) order by rsum, a, b;
+select a, b, sum(c) from (values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),(2,3,15),(3,3,16),(3,4,17),(4,1,18),(4,1,19)) v(a,b,c) group by rollup (a,b);
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by cube (a,b);
+
+
 -- Agg level check. This query should error out.
 select (select grouping(a,b) from gstest2) from gstest2 group by a,b;
 
@@ -125,4 +139,8 @@ having exists (select 1 from onek b where sum(distinct a.four) = b.four);
 select ten, sum(distinct four) filter (where four::text ~ '123') from onek a
 group by rollup(ten);
 
+-- More rescan tests
+select * from (values (1),(2)) v(a) left join lateral (select v.a, four, ten, count(*) from onek group by cube(four,ten)) s on true;
+select array(select row(v.a,s1.*) from (select two,four, count(*) from onek group by cube(two,four)) s1) from (values (1),(2)) v(a);
+
 -- end
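The regression output above depends on the bit assignment used by GROUPING(): with `grouping(a,b)`, rows from the fully-aggregated sets show the value 3. A minimal sketch of that bitmask rule follows; the function and argument names are illustrative only, not taken from the patch:

```python
def grouping(args, grouped):
    """GROUPING() bitmask for one result row: the rightmost argument is
    the least-significant bit, and a bit is 1 when its expression is
    absent from the grouping set that produced the row."""
    value = 0
    for expr in args:                       # leftmost argument first
        value = (value << 1) | (0 if expr in grouped else 1)
    return value

# ROLLUP(a, b) produces the grouping sets (a,b), (a), ():
print(grouping(["a", "b"], {"a", "b"}))   # 0: both columns grouped
print(grouping(["a", "b"], {"a"}))        # 1: b is rolled up
print(grouping(["a", "b"], set()))        # 3: both rolled up
```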
Attachment: gsp-doc.patch (text/x-patch)
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index c715ca2..a17a4a3 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -11989,7 +11989,9 @@ NULL baz</literallayout>(3 rows)</entry>
    <xref linkend="functions-aggregate-statistics-table">.
    The built-in ordered-set aggregate functions
    are listed in <xref linkend="functions-orderedset-table"> and
-   <xref linkend="functions-hypothetical-table">.
+   <xref linkend="functions-hypothetical-table">.  Grouping operations,
+   which are closely related to aggregate functions, are listed in
+   <xref linkend="functions-grouping-table">.
    The special syntax considerations for aggregate
    functions are explained in <xref linkend="syntax-aggregates">.
    Consult <xref linkend="tutorial-agg"> for additional introductory
@@ -13034,6 +13036,72 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab;
    to the rule specified in the <literal>ORDER BY</> clause.
   </para>
 
+  <table id="functions-grouping-table">
+   <title>Grouping Operations</title>
+
+   <tgroup cols="3">
+    <thead>
+     <row>
+      <entry>Function</entry>
+      <entry>Return Type</entry>
+      <entry>Description</entry>
+     </row>
+    </thead>
+
+    <tbody>
+
+     <row>
+      <entry>
+       <indexterm>
+        <primary>GROUPING</primary>
+       </indexterm>
+       <function>GROUPING(<replaceable class="parameter">args...</replaceable>)</function>
+      </entry>
+      <entry>
+       <type>integer</type>
+      </entry>
+      <entry>
+       Integer bitmask indicating which arguments are not being included in the current
+       grouping set
+      </entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+
+   <para>
+    Grouping operations are used in conjunction with grouping sets (see
+    <xref linkend="queries-grouping-sets">) to distinguish result rows.  The
+    arguments to the <literal>GROUPING</> operation are not actually evaluated,
+    but they must exactly match expressions given in the <literal>GROUP BY</>
+    clause of the current query level.  Bits are assigned with the rightmost
+    argument being the least-significant bit; each bit is 0 if the corresponding
+    expression is included in the grouping criteria of the grouping set generating
+    the result row, and 1 if it is not.  For example:
+<screen>
+<prompt>=&gt;</> <userinput>SELECT * FROM items_sold;</>
+ make  | model | sales
+-------+-------+-------
+ Foo   | GT    |  10
+ Foo   | Tour  |  20
+ Bar   | City  |  15
+ Bar   | Sport |  5
+(4 rows)
+
+<prompt>=&gt;</> <userinput>SELECT make, model, GROUPING(make,model), sum(sales) FROM items_sold GROUP BY ROLLUP(make,model);</>
+ make  | model | grouping | sum
+-------+-------+----------+-----
+ Foo   | GT    |        0 | 10
+ Foo   | Tour  |        0 | 20
+ Bar   | City  |        0 | 15
+ Bar   | Sport |        0 | 5
+ Foo   |       |        1 | 30
+ Bar   |       |        1 | 20
+       |       |        3 | 50
+(7 rows)
+</screen>
+   </para>
+
  </sect1>
 
  <sect1 id="functions-window">
diff --git a/doc/src/sgml/queries.sgml b/doc/src/sgml/queries.sgml
index c5e8aef..50dd8cb 100644
--- a/doc/src/sgml/queries.sgml
+++ b/doc/src/sgml/queries.sgml
@@ -1141,6 +1141,184 @@ SELECT product_id, p.name, (sum(s.units) * (p.price - p.cost)) AS profit
    </para>
   </sect2>
 
+  <sect2 id="queries-grouping-sets">
+   <title><literal>GROUPING SETS</>, <literal>CUBE</>, and <literal>ROLLUP</></title>
+
+   <indexterm zone="queries-grouping-sets">
+    <primary>GROUPING SETS</primary>
+   </indexterm>
+   <indexterm zone="queries-grouping-sets">
+    <primary>CUBE</primary>
+   </indexterm>
+   <indexterm zone="queries-grouping-sets">
+    <primary>ROLLUP</primary>
+   </indexterm>
+   <indexterm zone="queries-grouping-sets">
+    <primary>grouping sets</primary>
+   </indexterm>
+
+   <para>
+    More complex grouping operations than those described above are possible
+    using the concept of <firstterm>grouping sets</>.  The data selected by
+    the <literal>FROM</> and <literal>WHERE</> clauses is grouped separately
+    by each specified grouping set, aggregates computed for each group just as
+    for simple <literal>GROUP BY</> clauses, and then the results returned.
+    For example:
+<screen>
+<prompt>=&gt;</> <userinput>SELECT * FROM items_sold;</>
+ brand | size | sales
+-------+------+-------
+ Foo   | L    |  10
+ Foo   | M    |  20
+ Bar   | M    |  15
+ Bar   | L    |  5
+(4 rows)
+
+<prompt>=&gt;</> <userinput>SELECT brand, size, sum(sales) FROM items_sold GROUP BY GROUPING SETS ((brand), (size), ());</>
+ brand | size | sum
+-------+------+-----
+ Foo   |      |  30
+ Bar   |      |  20
+       | L    |  15
+       | M    |  35
+       |      |  50
+(5 rows)
+</screen>
+   </para>
+
+   <para>
+    Each sublist of <literal>GROUPING SETS</> may specify zero or more columns
+    or expressions and is interpreted the same way as though it were directly
+    in the <literal>GROUP BY</> clause.  An empty grouping set means that all
+    rows are aggregated down to a single group (which is output even if no
+    input rows were present), as described above for the case of aggregate
+    functions with no <literal>GROUP BY</> clause.
+   </para>
+
+   <para>
+    References to the grouping columns or expressions are replaced
+    by <literal>NULL</> values in result rows for grouping sets in which those
+    columns do not appear.  To distinguish which grouping a particular output
+    row resulted from, see <xref linkend="functions-grouping-table">.
+   </para>
+
+   <para>
+    A shorthand notation is provided for specifying two common types of grouping set.
+    A clause of the form
+<programlisting>
+ROLLUP ( <replaceable>e1</>, <replaceable>e2</>, <replaceable>e3</>, ... )
+</programlisting>
+    represents the given list of expressions and all prefixes of the list including
+    the empty list; thus it is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( <replaceable>e1</>, <replaceable>e2</>, <replaceable>e3</>, ... ),
+    ...
+    ( <replaceable>e1</>, <replaceable>e2</> ),
+    ( <replaceable>e1</> ),
+    ( )
+)
+</programlisting>
+    This is commonly used for analysis over hierarchical data; e.g. total
+    salary by department, division, and company-wide total.
+   </para>
+
+   <para>
+    A clause of the form
+<programlisting>
+CUBE ( <replaceable>e1</>, <replaceable>e2</>, ... )
+</programlisting>
+    represents the given list and all of its possible subsets (i.e. the power
+    set).  Thus
+<programlisting>
+CUBE ( a, b, c )
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( a, b, c ),
+    ( a, b    ),
+    ( a,    c ),
+    ( a       ),
+    (    b, c ),
+    (    b    ),
+    (       c ),
+    (         )
+)
+</programlisting>
+   </para>
+
+   <para>
+    The individual elements of a <literal>CUBE</> or <literal>ROLLUP</>
+    clause may be either individual expressions, or sub-lists of elements in
+    parentheses.  In the latter case, the sub-lists are treated as single
+    units for the purposes of generating the individual grouping sets.
+    For example:
+<programlisting>
+CUBE ( (a,b), (c,d) )
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( a, b, c, d ),
+    ( a, b       ),
+    (       c, d ),
+    (            )
+)
+</programlisting>
+    and
+<programlisting>
+ROLLUP ( a, (b,c), d )
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( a, b, c, d ),
+    ( a, b, c    ),
+    ( a          ),
+    (            )
+)
+</programlisting>
+   </para>
+
+   <para>
+    The <literal>CUBE</> and <literal>ROLLUP</> constructs can be used either
+    directly in the <literal>GROUP BY</> clause, or nested inside a
+    <literal>GROUPING SETS</> clause.  If one <literal>GROUPING SETS</> clause
+    is nested inside another, the effect is the same as if all the elements of
+    the inner clause had been written directly in the outer clause.
+   </para>
+
+   <para>
+    If multiple grouping items are specified in a single <literal>GROUP BY</>
+    clause, then the final list of grouping sets is the cross product of the
+    individual items.  For example:
+<programlisting>
+GROUP BY a, CUBE(b,c), GROUPING SETS ((d), (e))
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUP BY GROUPING SETS (
+  (a,b,c,d), (a,b,c,e),
+  (a,b,d),   (a,b,e),
+  (a,c,d),   (a,c,e),
+  (a,d),     (a,e)
+)
+</programlisting>
+   </para>
+
+  <note>
+   <para>
+    The construct <literal>(a,b)</> is normally recognized in expressions as
+    a <link linkend="sql-syntax-row-constructors">row constructor</link>.
+    Within the <literal>GROUP BY</> clause, this does not apply at the top
+    levels of expressions, and <literal>(a,b)</> is parsed as a list of
+    expressions as described above.  If for some reason you <emphasis>need</>
+    a row constructor in a grouping expression, use <literal>ROW(a,b)</>.
+   </para>
+  </note>
+  </sect2>
+
   <sect2 id="queries-window">
    <title>Window Function Processing</title>
 
diff --git a/doc/src/sgml/ref/select.sgml b/doc/src/sgml/ref/select.sgml
index 231dc6a..0736586 100644
--- a/doc/src/sgml/ref/select.sgml
+++ b/doc/src/sgml/ref/select.sgml
@@ -37,7 +37,7 @@ SELECT [ ALL | DISTINCT [ ON ( <replaceable class="parameter">expression</replac
     [ * | <replaceable class="parameter">expression</replaceable> [ [ AS ] <replaceable class="parameter">output_name</replaceable> ] [, ...] ]
     [ FROM <replaceable class="parameter">from_item</replaceable> [, ...] ]
     [ WHERE <replaceable class="parameter">condition</replaceable> ]
-    [ GROUP BY <replaceable class="parameter">expression</replaceable> [, ...] ]
+    [ GROUP BY <replaceable class="parameter">grouping_element</replaceable> [, ...] ]
     [ HAVING <replaceable class="parameter">condition</replaceable> [, ...] ]
     [ WINDOW <replaceable class="parameter">window_name</replaceable> AS ( <replaceable class="parameter">window_definition</replaceable> ) [, ...] ]
     [ { UNION | INTERSECT | EXCEPT } [ ALL | DISTINCT ] <replaceable class="parameter">select</replaceable> ]
@@ -60,6 +60,15 @@ SELECT [ ALL | DISTINCT [ ON ( <replaceable class="parameter">expression</replac
                 [ WITH ORDINALITY ] [ [ AS ] <replaceable class="parameter">alias</replaceable> [ ( <replaceable class="parameter">column_alias</replaceable> [, ...] ) ] ]
     <replaceable class="parameter">from_item</replaceable> [ NATURAL ] <replaceable class="parameter">join_type</replaceable> <replaceable class="parameter">from_item</replaceable> [ ON <replaceable class="parameter">join_condition</replaceable> | USING ( <replaceable class="parameter">join_column</replaceable> [, ...] ) ]
 
+<phrase>and <replaceable class="parameter">grouping_element</replaceable> can be one of:</phrase>
+
+    ( )
+    <replaceable class="parameter">expression</replaceable>
+    ( <replaceable class="parameter">expression</replaceable> [, ...] )
+    ROLLUP ( { <replaceable class="parameter">expression</replaceable> | ( <replaceable class="parameter">expression</replaceable> [, ...] ) } [, ...] )
+    CUBE ( { <replaceable class="parameter">expression</replaceable> | ( <replaceable class="parameter">expression</replaceable> [, ...] ) } [, ...] )
+    GROUPING SETS ( <replaceable class="parameter">grouping_element</replaceable> [, ...] )
+
 <phrase>and <replaceable class="parameter">with_query</replaceable> is:</phrase>
 
     <replaceable class="parameter">with_query_name</replaceable> [ ( <replaceable class="parameter">column_name</replaceable> [, ...] ) ] AS ( <replaceable class="parameter">select</replaceable> | <replaceable class="parameter">values</replaceable> | <replaceable class="parameter">insert</replaceable> | <replaceable class="parameter">update</replaceable> | <replaceable class="parameter">delete</replaceable> )
@@ -619,23 +628,35 @@ WHERE <replaceable class="parameter">condition</replaceable>
    <para>
     The optional <literal>GROUP BY</literal> clause has the general form
 <synopsis>
-GROUP BY <replaceable class="parameter">expression</replaceable> [, ...]
+GROUP BY <replaceable class="parameter">grouping_element</replaceable> [, ...]
 </synopsis>
    </para>
 
    <para>
     <literal>GROUP BY</literal> will condense into a single row all
     selected rows that share the same values for the grouped
-    expressions.  <replaceable
-    class="parameter">expression</replaceable> can be an input column
-    name, or the name or ordinal number of an output column
-    (<command>SELECT</command> list item), or an arbitrary
+    expressions.  An <replaceable
+    class="parameter">expression</replaceable> used inside a
+    <replaceable class="parameter">grouping_element</replaceable>
+    can be an input column name, or the name or ordinal number of an
+    output column (<command>SELECT</command> list item), or an arbitrary
     expression formed from input-column values.  In case of ambiguity,
     a <literal>GROUP BY</literal> name will be interpreted as an
     input-column name rather than an output column name.
    </para>
 
    <para>
+    If any of <literal>GROUPING SETS</>, <literal>ROLLUP</> or
+    <literal>CUBE</> are present as grouping elements, then the
+    <literal>GROUP BY</> clause as a whole defines some number of
+    independent <replaceable>grouping sets</>.  The effect of this is
+    equivalent to constructing a <literal>UNION ALL</> between
+    subqueries with the individual grouping sets as their
+    <literal>GROUP BY</> clauses.  For further details on the handling
+    of grouping sets see <xref linkend="queries-grouping-sets">.
+   </para>
+
+   <para>
     Aggregate functions, if any are used, are computed across all rows
     making up each group, producing a separate value for each group
     (whereas without <literal>GROUP BY</literal>, an aggregate
gsp-contrib.patch (text/x-patch)
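As a worked illustration of the <literal>UNION ALL</> equivalence described in the doc text above (table and column names here are hypothetical, chosen only for the example):

```sql
-- GROUPING SETS form: three independent grouping sets in one scan
SELECT brand, size, sum(sales)
  FROM items_sold
 GROUP BY GROUPING SETS ((brand), (size), ());

-- Equivalent UNION ALL form: one subquery per grouping set,
-- with NULLs standing in for the columns not grouped on
SELECT brand, NULL AS size, sum(sales) FROM items_sold GROUP BY brand
UNION ALL
SELECT NULL,  size,          sum(sales) FROM items_sold GROUP BY size
UNION ALL
SELECT NULL,  NULL,          sum(sales) FROM items_sold;
```

The grouping-sets form can be computed in a single pass over sorted input when the sets reduce to simple columns plus one ROLLUP, which is exactly the case this phase-1 patch handles in the GroupAggregate node.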
diff --git a/contrib/cube/cube--1.0.sql b/contrib/cube/cube--1.0.sql
index 0307811..1b563cc 100644
--- a/contrib/cube/cube--1.0.sql
+++ b/contrib/cube/cube--1.0.sql
@@ -1,36 +1,36 @@
 /* contrib/cube/cube--1.0.sql */
 
 -- complain if script is sourced in psql, rather than via CREATE EXTENSION
-\echo Use "CREATE EXTENSION cube" to load this file. \quit
+\echo Use 'CREATE EXTENSION "cube"' to load this file. \quit
 
 -- Create the user-defined type for N-dimensional boxes
 
 CREATE FUNCTION cube_in(cstring)
-RETURNS cube
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(float8[], float8[]) RETURNS cube
+CREATE FUNCTION "cube"(float8[], float8[]) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_a_f8_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(float8[]) RETURNS cube
+CREATE FUNCTION "cube"(float8[]) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_a_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_out(cube)
+CREATE FUNCTION cube_out("cube")
 RETURNS cstring
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE TYPE cube (
+CREATE TYPE "cube" (
 	INTERNALLENGTH = variable,
 	INPUT = cube_in,
 	OUTPUT = cube_out,
 	ALIGNMENT = double
 );
 
-COMMENT ON TYPE cube IS 'multi-dimensional cube ''(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)''';
+COMMENT ON TYPE "cube" IS 'multi-dimensional cube ''(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)''';
 
 --
 -- External C-functions for R-tree methods
@@ -38,89 +38,89 @@ COMMENT ON TYPE cube IS 'multi-dimensional cube ''(FLOAT-1, FLOAT-2, ..., FLOAT-
 
 -- Comparison methods
 
-CREATE FUNCTION cube_eq(cube, cube)
+CREATE FUNCTION cube_eq("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_eq(cube, cube) IS 'same as';
+COMMENT ON FUNCTION cube_eq("cube", "cube") IS 'same as';
 
-CREATE FUNCTION cube_ne(cube, cube)
+CREATE FUNCTION cube_ne("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_ne(cube, cube) IS 'different';
+COMMENT ON FUNCTION cube_ne("cube", "cube") IS 'different';
 
-CREATE FUNCTION cube_lt(cube, cube)
+CREATE FUNCTION cube_lt("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_lt(cube, cube) IS 'lower than';
+COMMENT ON FUNCTION cube_lt("cube", "cube") IS 'lower than';
 
-CREATE FUNCTION cube_gt(cube, cube)
+CREATE FUNCTION cube_gt("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_gt(cube, cube) IS 'greater than';
+COMMENT ON FUNCTION cube_gt("cube", "cube") IS 'greater than';
 
-CREATE FUNCTION cube_le(cube, cube)
+CREATE FUNCTION cube_le("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_le(cube, cube) IS 'lower than or equal to';
+COMMENT ON FUNCTION cube_le("cube", "cube") IS 'lower than or equal to';
 
-CREATE FUNCTION cube_ge(cube, cube)
+CREATE FUNCTION cube_ge("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_ge(cube, cube) IS 'greater than or equal to';
+COMMENT ON FUNCTION cube_ge("cube", "cube") IS 'greater than or equal to';
 
-CREATE FUNCTION cube_cmp(cube, cube)
+CREATE FUNCTION cube_cmp("cube", "cube")
 RETURNS int4
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_cmp(cube, cube) IS 'btree comparison function';
+COMMENT ON FUNCTION cube_cmp("cube", "cube") IS 'btree comparison function';
 
-CREATE FUNCTION cube_contains(cube, cube)
+CREATE FUNCTION cube_contains("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_contains(cube, cube) IS 'contains';
+COMMENT ON FUNCTION cube_contains("cube", "cube") IS 'contains';
 
-CREATE FUNCTION cube_contained(cube, cube)
+CREATE FUNCTION cube_contained("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_contained(cube, cube) IS 'contained in';
+COMMENT ON FUNCTION cube_contained("cube", "cube") IS 'contained in';
 
-CREATE FUNCTION cube_overlap(cube, cube)
+CREATE FUNCTION cube_overlap("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_overlap(cube, cube) IS 'overlaps';
+COMMENT ON FUNCTION cube_overlap("cube", "cube") IS 'overlaps';
 
 -- support routines for indexing
 
-CREATE FUNCTION cube_union(cube, cube)
-RETURNS cube
+CREATE FUNCTION cube_union("cube", "cube")
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_inter(cube, cube)
-RETURNS cube
+CREATE FUNCTION cube_inter("cube", "cube")
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_size(cube)
+CREATE FUNCTION cube_size("cube")
 RETURNS float8
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
@@ -128,62 +128,62 @@ LANGUAGE C IMMUTABLE STRICT;
 
 -- Misc N-dimensional functions
 
-CREATE FUNCTION cube_subset(cube, int4[])
-RETURNS cube
+CREATE FUNCTION cube_subset("cube", int4[])
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
 -- proximity routines
 
-CREATE FUNCTION cube_distance(cube, cube)
+CREATE FUNCTION cube_distance("cube", "cube")
 RETURNS float8
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
 -- Extracting elements functions
 
-CREATE FUNCTION cube_dim(cube)
+CREATE FUNCTION cube_dim("cube")
 RETURNS int4
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_ll_coord(cube, int4)
+CREATE FUNCTION cube_ll_coord("cube", int4)
 RETURNS float8
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_ur_coord(cube, int4)
+CREATE FUNCTION cube_ur_coord("cube", int4)
 RETURNS float8
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(float8) RETURNS cube
+CREATE FUNCTION "cube"(float8) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(float8, float8) RETURNS cube
+CREATE FUNCTION "cube"(float8, float8) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_f8_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(cube, float8) RETURNS cube
+CREATE FUNCTION "cube"("cube", float8) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_c_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(cube, float8, float8) RETURNS cube
+CREATE FUNCTION "cube"("cube", float8, float8) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_c_f8_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
 -- Test if cube is also a point
 
-CREATE FUNCTION cube_is_point(cube)
+CREATE FUNCTION cube_is_point("cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
 -- Increasing the size of a cube by a radius in at least n dimensions
 
-CREATE FUNCTION cube_enlarge(cube, float8, int4)
-RETURNS cube
+CREATE FUNCTION cube_enlarge("cube", float8, int4)
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
@@ -192,76 +192,76 @@ LANGUAGE C IMMUTABLE STRICT;
 --
 
 CREATE OPERATOR < (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_lt,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_lt,
 	COMMUTATOR = '>', NEGATOR = '>=',
 	RESTRICT = scalarltsel, JOIN = scalarltjoinsel
 );
 
 CREATE OPERATOR > (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_gt,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_gt,
 	COMMUTATOR = '<', NEGATOR = '<=',
 	RESTRICT = scalargtsel, JOIN = scalargtjoinsel
 );
 
 CREATE OPERATOR <= (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_le,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_le,
 	COMMUTATOR = '>=', NEGATOR = '>',
 	RESTRICT = scalarltsel, JOIN = scalarltjoinsel
 );
 
 CREATE OPERATOR >= (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_ge,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_ge,
 	COMMUTATOR = '<=', NEGATOR = '<',
 	RESTRICT = scalargtsel, JOIN = scalargtjoinsel
 );
 
 CREATE OPERATOR && (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_overlap,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_overlap,
 	COMMUTATOR = '&&',
 	RESTRICT = areasel, JOIN = areajoinsel
 );
 
 CREATE OPERATOR = (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_eq,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_eq,
 	COMMUTATOR = '=', NEGATOR = '<>',
 	RESTRICT = eqsel, JOIN = eqjoinsel,
 	MERGES
 );
 
 CREATE OPERATOR <> (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_ne,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_ne,
 	COMMUTATOR = '<>', NEGATOR = '=',
 	RESTRICT = neqsel, JOIN = neqjoinsel
 );
 
 CREATE OPERATOR @> (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_contains,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_contains,
 	COMMUTATOR = '<@',
 	RESTRICT = contsel, JOIN = contjoinsel
 );
 
 CREATE OPERATOR <@ (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_contained,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_contained,
 	COMMUTATOR = '@>',
 	RESTRICT = contsel, JOIN = contjoinsel
 );
 
 -- these are obsolete/deprecated:
 CREATE OPERATOR @ (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_contains,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_contains,
 	COMMUTATOR = '~',
 	RESTRICT = contsel, JOIN = contjoinsel
 );
 
 CREATE OPERATOR ~ (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_contained,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_contained,
 	COMMUTATOR = '@',
 	RESTRICT = contsel, JOIN = contjoinsel
 );
 
 
 -- define the GiST support methods
-CREATE FUNCTION g_cube_consistent(internal,cube,int,oid,internal)
+CREATE FUNCTION g_cube_consistent(internal,"cube",int,oid,internal)
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
@@ -287,11 +287,11 @@ AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
 CREATE FUNCTION g_cube_union(internal, internal)
-RETURNS cube
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION g_cube_same(cube, cube, internal)
+CREATE FUNCTION g_cube_same("cube", "cube", internal)
 RETURNS internal
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
@@ -300,26 +300,26 @@ LANGUAGE C IMMUTABLE STRICT;
 -- Create the operator classes for indexing
 
 CREATE OPERATOR CLASS cube_ops
-    DEFAULT FOR TYPE cube USING btree AS
+    DEFAULT FOR TYPE "cube" USING btree AS
         OPERATOR        1       < ,
         OPERATOR        2       <= ,
         OPERATOR        3       = ,
         OPERATOR        4       >= ,
         OPERATOR        5       > ,
-        FUNCTION        1       cube_cmp(cube, cube);
+        FUNCTION        1       cube_cmp("cube", "cube");
 
 CREATE OPERATOR CLASS gist_cube_ops
-    DEFAULT FOR TYPE cube USING gist AS
+    DEFAULT FOR TYPE "cube" USING gist AS
 	OPERATOR	3	&& ,
 	OPERATOR	6	= ,
 	OPERATOR	7	@> ,
 	OPERATOR	8	<@ ,
 	OPERATOR	13	@ ,
 	OPERATOR	14	~ ,
-	FUNCTION	1	g_cube_consistent (internal, cube, int, oid, internal),
+	FUNCTION	1	g_cube_consistent (internal, "cube", int, oid, internal),
 	FUNCTION	2	g_cube_union (internal, internal),
 	FUNCTION	3	g_cube_compress (internal),
 	FUNCTION	4	g_cube_decompress (internal),
 	FUNCTION	5	g_cube_penalty (internal, internal, internal),
 	FUNCTION	6	g_cube_picksplit (internal, internal),
-	FUNCTION	7	g_cube_same (cube, cube, internal);
+	FUNCTION	7	g_cube_same ("cube", "cube", internal);
diff --git a/contrib/cube/cube--unpackaged--1.0.sql b/contrib/cube/cube--unpackaged--1.0.sql
index 6859682..eacffce 100644
--- a/contrib/cube/cube--unpackaged--1.0.sql
+++ b/contrib/cube/cube--unpackaged--1.0.sql
@@ -1,56 +1,56 @@
 /* contrib/cube/cube--unpackaged--1.0.sql */
 
 -- complain if script is sourced in psql, rather than via CREATE EXTENSION
-\echo Use "CREATE EXTENSION cube" to load this file. \quit
+\echo Use 'CREATE EXTENSION "cube"' to load this file. \quit
 
-ALTER EXTENSION cube ADD type cube;
-ALTER EXTENSION cube ADD function cube_in(cstring);
-ALTER EXTENSION cube ADD function cube(double precision[],double precision[]);
-ALTER EXTENSION cube ADD function cube(double precision[]);
-ALTER EXTENSION cube ADD function cube_out(cube);
-ALTER EXTENSION cube ADD function cube_eq(cube,cube);
-ALTER EXTENSION cube ADD function cube_ne(cube,cube);
-ALTER EXTENSION cube ADD function cube_lt(cube,cube);
-ALTER EXTENSION cube ADD function cube_gt(cube,cube);
-ALTER EXTENSION cube ADD function cube_le(cube,cube);
-ALTER EXTENSION cube ADD function cube_ge(cube,cube);
-ALTER EXTENSION cube ADD function cube_cmp(cube,cube);
-ALTER EXTENSION cube ADD function cube_contains(cube,cube);
-ALTER EXTENSION cube ADD function cube_contained(cube,cube);
-ALTER EXTENSION cube ADD function cube_overlap(cube,cube);
-ALTER EXTENSION cube ADD function cube_union(cube,cube);
-ALTER EXTENSION cube ADD function cube_inter(cube,cube);
-ALTER EXTENSION cube ADD function cube_size(cube);
-ALTER EXTENSION cube ADD function cube_subset(cube,integer[]);
-ALTER EXTENSION cube ADD function cube_distance(cube,cube);
-ALTER EXTENSION cube ADD function cube_dim(cube);
-ALTER EXTENSION cube ADD function cube_ll_coord(cube,integer);
-ALTER EXTENSION cube ADD function cube_ur_coord(cube,integer);
-ALTER EXTENSION cube ADD function cube(double precision);
-ALTER EXTENSION cube ADD function cube(double precision,double precision);
-ALTER EXTENSION cube ADD function cube(cube,double precision);
-ALTER EXTENSION cube ADD function cube(cube,double precision,double precision);
-ALTER EXTENSION cube ADD function cube_is_point(cube);
-ALTER EXTENSION cube ADD function cube_enlarge(cube,double precision,integer);
-ALTER EXTENSION cube ADD operator >(cube,cube);
-ALTER EXTENSION cube ADD operator >=(cube,cube);
-ALTER EXTENSION cube ADD operator <(cube,cube);
-ALTER EXTENSION cube ADD operator <=(cube,cube);
-ALTER EXTENSION cube ADD operator &&(cube,cube);
-ALTER EXTENSION cube ADD operator <>(cube,cube);
-ALTER EXTENSION cube ADD operator =(cube,cube);
-ALTER EXTENSION cube ADD operator <@(cube,cube);
-ALTER EXTENSION cube ADD operator @>(cube,cube);
-ALTER EXTENSION cube ADD operator ~(cube,cube);
-ALTER EXTENSION cube ADD operator @(cube,cube);
-ALTER EXTENSION cube ADD function g_cube_consistent(internal,cube,integer,oid,internal);
-ALTER EXTENSION cube ADD function g_cube_compress(internal);
-ALTER EXTENSION cube ADD function g_cube_decompress(internal);
-ALTER EXTENSION cube ADD function g_cube_penalty(internal,internal,internal);
-ALTER EXTENSION cube ADD function g_cube_picksplit(internal,internal);
-ALTER EXTENSION cube ADD function g_cube_union(internal,internal);
-ALTER EXTENSION cube ADD function g_cube_same(cube,cube,internal);
-ALTER EXTENSION cube ADD operator family cube_ops using btree;
-ALTER EXTENSION cube ADD operator class cube_ops using btree;
-ALTER EXTENSION cube ADD operator family gist_cube_ops using gist;
-ALTER EXTENSION cube ADD operator class gist_cube_ops using gist;
+ALTER EXTENSION "cube" ADD type "cube";
+ALTER EXTENSION "cube" ADD function cube_in(cstring);
+ALTER EXTENSION "cube" ADD function "cube"(double precision[],double precision[]);
+ALTER EXTENSION "cube" ADD function "cube"(double precision[]);
+ALTER EXTENSION "cube" ADD function cube_out("cube");
+ALTER EXTENSION "cube" ADD function cube_eq("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_ne("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_lt("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_gt("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_le("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_ge("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_cmp("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_contains("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_contained("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_overlap("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_union("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_inter("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_size("cube");
+ALTER EXTENSION "cube" ADD function cube_subset("cube",integer[]);
+ALTER EXTENSION "cube" ADD function cube_distance("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_dim("cube");
+ALTER EXTENSION "cube" ADD function cube_ll_coord("cube",integer);
+ALTER EXTENSION "cube" ADD function cube_ur_coord("cube",integer);
+ALTER EXTENSION "cube" ADD function "cube"(double precision);
+ALTER EXTENSION "cube" ADD function "cube"(double precision,double precision);
+ALTER EXTENSION "cube" ADD function "cube"("cube",double precision);
+ALTER EXTENSION "cube" ADD function "cube"("cube",double precision,double precision);
+ALTER EXTENSION "cube" ADD function cube_is_point("cube");
+ALTER EXTENSION "cube" ADD function cube_enlarge("cube",double precision,integer);
+ALTER EXTENSION "cube" ADD operator >("cube","cube");
+ALTER EXTENSION "cube" ADD operator >=("cube","cube");
+ALTER EXTENSION "cube" ADD operator <("cube","cube");
+ALTER EXTENSION "cube" ADD operator <=("cube","cube");
+ALTER EXTENSION "cube" ADD operator &&("cube","cube");
+ALTER EXTENSION "cube" ADD operator <>("cube","cube");
+ALTER EXTENSION "cube" ADD operator =("cube","cube");
+ALTER EXTENSION "cube" ADD operator <@("cube","cube");
+ALTER EXTENSION "cube" ADD operator @>("cube","cube");
+ALTER EXTENSION "cube" ADD operator ~("cube","cube");
+ALTER EXTENSION "cube" ADD operator @("cube","cube");
+ALTER EXTENSION "cube" ADD function g_cube_consistent(internal,"cube",integer,oid,internal);
+ALTER EXTENSION "cube" ADD function g_cube_compress(internal);
+ALTER EXTENSION "cube" ADD function g_cube_decompress(internal);
+ALTER EXTENSION "cube" ADD function g_cube_penalty(internal,internal,internal);
+ALTER EXTENSION "cube" ADD function g_cube_picksplit(internal,internal);
+ALTER EXTENSION "cube" ADD function g_cube_union(internal,internal);
+ALTER EXTENSION "cube" ADD function g_cube_same("cube","cube",internal);
+ALTER EXTENSION "cube" ADD operator family cube_ops using btree;
+ALTER EXTENSION "cube" ADD operator class cube_ops using btree;
+ALTER EXTENSION "cube" ADD operator family gist_cube_ops using gist;
+ALTER EXTENSION "cube" ADD operator class gist_cube_ops using gist;
diff --git a/contrib/cube/expected/cube.out b/contrib/cube/expected/cube.out
index ca9555e..9422218 100644
--- a/contrib/cube/expected/cube.out
+++ b/contrib/cube/expected/cube.out
@@ -1,552 +1,552 @@
 --
 --  Test cube datatype
 --
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 --
 -- testing the input and output functions
 --
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1'::cube AS cube;
+SELECT '-1'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1.'::cube AS cube;
+SELECT '1.'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.'::cube AS cube;
+SELECT '-1.'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '.1'::cube AS cube;
+SELECT '.1'::"cube" AS "cube";
  cube  
 -------
  (0.1)
 (1 row)
 
-SELECT '-.1'::cube AS cube;
+SELECT '-.1'::"cube" AS "cube";
   cube  
 --------
  (-0.1)
 (1 row)
 
-SELECT '1.0'::cube AS cube;
+SELECT '1.0'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.0'::cube AS cube;
+SELECT '-1.0'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1e27'::cube AS cube;
+SELECT '1e27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1e27'::cube AS cube;
+SELECT '-1e27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1.0e27'::cube AS cube;
+SELECT '1.0e27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1.0e27'::cube AS cube;
+SELECT '-1.0e27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1e+27'::cube AS cube;
+SELECT '1e+27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1e+27'::cube AS cube;
+SELECT '-1e+27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1.0e+27'::cube AS cube;
+SELECT '1.0e+27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1.0e+27'::cube AS cube;
+SELECT '-1.0e+27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1e-7'::cube AS cube;
+SELECT '1e-7'::"cube" AS "cube";
   cube   
 ---------
  (1e-07)
 (1 row)
 
-SELECT '-1e-7'::cube AS cube;
+SELECT '-1e-7'::"cube" AS "cube";
    cube   
 ----------
  (-1e-07)
 (1 row)
 
-SELECT '1.0e-7'::cube AS cube;
+SELECT '1.0e-7'::"cube" AS "cube";
   cube   
 ---------
  (1e-07)
 (1 row)
 
-SELECT '-1.0e-7'::cube AS cube;
+SELECT '-1.0e-7'::"cube" AS "cube";
    cube   
 ----------
  (-1e-07)
 (1 row)
 
-SELECT '1e-700'::cube AS cube;
+SELECT '1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '-1e-700'::cube AS cube;
+SELECT '-1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '1234567890123456'::cube AS cube;
+SELECT '1234567890123456'::"cube" AS "cube";
           cube          
 ------------------------
  (1.23456789012346e+15)
 (1 row)
 
-SELECT '+1234567890123456'::cube AS cube;
+SELECT '+1234567890123456'::"cube" AS "cube";
           cube          
 ------------------------
  (1.23456789012346e+15)
 (1 row)
 
-SELECT '-1234567890123456'::cube AS cube;
+SELECT '-1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (-1.23456789012346e+15)
 (1 row)
 
-SELECT '.1234567890123456'::cube AS cube;
+SELECT '.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '+.1234567890123456'::cube AS cube;
+SELECT '+.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '-.1234567890123456'::"cube" AS "cube";
          cube         
 ----------------------
  (-0.123456789012346)
 (1 row)
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '(1,2)'::cube AS cube;
+SELECT '(1,2)'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '1,2,3,4,5'::cube AS cube;
+SELECT '1,2,3,4,5'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '(0),(1)'::cube AS cube;
+SELECT '(0),(1)'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '[(0),(0)]'::cube AS cube;
+SELECT '[(0),(0)]'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '[(0),(1)]'::cube AS cube;
+SELECT '[(0),(1)]'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
+SELECT ''::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT ''::cube AS cube;
+LINE 1: SELECT ''::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT 'ABC'::cube AS cube;
+SELECT 'ABC'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT 'ABC'::cube AS cube;
+LINE 1: SELECT 'ABC'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "A"
-SELECT '()'::cube AS cube;
+SELECT '()'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '()'::cube AS cube;
+LINE 1: SELECT '()'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[]'::cube AS cube;
+SELECT '[]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[]'::cube AS cube;
+LINE 1: SELECT '[]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[()]'::cube AS cube;
+SELECT '[()]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[()]'::cube AS cube;
+LINE 1: SELECT '[()]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[(1)]'::cube AS cube;
+SELECT '[(1)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1)]'::cube AS cube;
+LINE 1: SELECT '[(1)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),]'::cube AS cube;
+SELECT '[(1),]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),]'::cube AS cube;
+LINE 1: SELECT '[(1),]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),2]'::cube AS cube;
+SELECT '[(1),2]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),2]'::cube AS cube;
+LINE 1: SELECT '[(1),2]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "2"
-SELECT '[(1),(2),(3)]'::cube AS cube;
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2),(3)]'::cube AS cube;
+LINE 1: SELECT '[(1),(2),(3)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '1,'::cube AS cube;
+SELECT '1,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,'::cube AS cube;
+LINE 1: SELECT '1,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,2,'::cube AS cube;
+SELECT '1,2,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2,'::cube AS cube;
+LINE 1: SELECT '1,2,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,,2'::cube AS cube;
+SELECT '1,,2'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,,2'::cube AS cube;
+LINE 1: SELECT '1,,2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,)'::cube AS cube;
+SELECT '(1,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,)'::cube AS cube;
+LINE 1: SELECT '(1,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,2,)'::cube AS cube;
+SELECT '(1,2,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,)'::cube AS cube;
+LINE 1: SELECT '(1,2,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,,2)'::cube AS cube;
+SELECT '(1,,2)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,,2)'::cube AS cube;
+LINE 1: SELECT '(1,,2)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2)],'::cube AS cube;
+LINE 1: SELECT '[(1),(2)],'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2,3),(2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2),(1,2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1),(2),'::cube AS cube; -- 2
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
 ERROR:  bad cube representation
-LINE 1: SELECT '(1),(2),'::cube AS cube;
+LINE 1: SELECT '(1),(2),'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3),(2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2,3),(2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2),(1,2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2),(1,2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)ab'::cube AS cube;
+LINE 1: SELECT '(1,2,3)ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2,3)a'::cube AS cube; -- 5
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)a'::cube AS cube;
+LINE 1: SELECT '(1,2,3)a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2)('::cube AS cube; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2)('::cube AS cube;
+LINE 1: SELECT '(1,2)('::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "("
-SELECT '1,2ab'::cube AS cube; -- 6
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2ab'::cube AS cube;
+LINE 1: SELECT '1,2ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1 e7'::cube AS cube; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1 e7'::cube AS cube;
+LINE 1: SELECT '1 e7'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "e"
-SELECT '1,2a'::cube AS cube; -- 7
+SELECT '1,2a'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2a'::cube AS cube;
+LINE 1: SELECT '1,2a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1..2'::cube AS cube;
+LINE 1: SELECT '1..2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ".2"
 --
 -- Testing building cubes from float8 values
 --
-SELECT cube(0::float8);
+SELECT "cube"(0::float8);
  cube 
 ------
  (0)
 (1 row)
 
-SELECT cube(1::float8);
+SELECT "cube"(1::float8);
  cube 
 ------
  (1)
 (1 row)
 
-SELECT cube(1,2);
+SELECT "cube"(1,2);
   cube   
 ---------
  (1),(2)
 (1 row)
 
-SELECT cube(cube(1,2),3);
+SELECT "cube"("cube"(1,2),3);
      cube      
 ---------------
  (1, 3),(2, 3)
 (1 row)
 
-SELECT cube(cube(1,2),3,4);
+SELECT "cube"("cube"(1,2),3,4);
      cube      
 ---------------
  (1, 3),(2, 4)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 5)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 6)
 (1 row)
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
  cube 
 ------
  (0)
 (1 row)
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
         cube         
 ---------------------
  (0, 1, 2),(3, 4, 5)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
 ERROR:  UR and LL arrays must be of same length
-SELECT cube(NULL::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
  cube 
 ------
  
 (1 row)
 
-SELECT cube('{0,1,2}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
         cube_subset        
 ---------------------------
  (5, 3, 1, 1),(8, 7, 6, 6)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
  cube_subset  
 --------------
  (5, 3, 1, 1)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
      cube     
 --------------
  (5, 6, 7, 8)
 (1 row)
 
-SELECT cube(1.37); -- cube_f8
+SELECT "cube"(1.37); -- cube_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(cube(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(1, 24)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 24)
@@ -555,12 +555,12 @@ SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
 DETAIL:  A cube cannot have more than 100 dimensions.
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
@@ -570,37 +570,37 @@ DETAIL:  A cube cannot have more than 100 dimensions.
 --
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -609,97 +609,97 @@ SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1'::cube   < '2'::cube AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1,1'::cube > '1,2'::cube AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,1'::cube < '1,2'::cube AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -707,235 +707,235 @@ SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contains" (the left operand is the cube that entirely encloses the
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
@@ -943,77 +943,77 @@ SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
  cube_distance 
 ---------------
              4
 (1 row)
 
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
  cube_distance 
 ---------------
            0.5
 (1 row)
 
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
  cube_distance 
 ---------------
              0
 (1 row)
 
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
  cube_distance 
 ---------------
            190
 (1 row)
 
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
   cube_distance   
 ------------------
  140.762210837994
 (1 row)
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
+SELECT "cube"('(1,1.2)'::text);
    cube   
 ----------
  (1, 1.2)
 (1 row)
 
-SELECT cube(NULL);
+SELECT "cube"(NULL);
  cube 
 ------
  
 (1 row)
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
+SELECT cube_dim('(0)'::"cube");
  cube_dim 
 ----------
         1
 (1 row)
 
-SELECT cube_dim('(0,0)'::cube);
+SELECT cube_dim('(0,0)'::"cube");
  cube_dim 
 ----------
         2
 (1 row)
 
-SELECT cube_dim('(0,0,0)'::cube);
+SELECT cube_dim('(0,0,0)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
  cube_dim 
 ----------
         5
@@ -1021,55 +1021,55 @@ SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
 
 -- Test of cube_ll_coord function (retrieves LL coodinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ll_coord 
 ---------------
             -1
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ll_coord 
 ---------------
             -2
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
  cube_ll_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
  cube_ll_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
  cube_ll_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
  cube_ll_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
@@ -1077,55 +1077,55 @@ SELECT cube_ll_coord('(42,137)'::cube, 3);
 
 -- Test of cube_ur_coord function (retrieves UR coodinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
  cube_ur_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
  cube_ur_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
@@ -1133,37 +1133,37 @@ SELECT cube_ur_coord('(42,137)'::cube, 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
+SELECT cube_is_point('(0)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
  cube_is_point 
 ---------------
  f
@@ -1171,121 +1171,121 @@ SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 2);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
  cube_enlarge 
 --------------
  (-2),(2)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 2);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-1, -1),(1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
          cube_enlarge          
 -------------------------------
  (-3, -1, -1, -1),(3, 1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 2);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-4, -3),(3, 8)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
    cube_enlarge   
 ------------------
  (-6, -5),(5, 10)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
   cube_enlarge   
 -----------------
  (-2, -1),(1, 6)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
     cube_enlarge     
 ---------------------
  (-0.5, 1),(-0.5, 4)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
@@ -1293,31 +1293,31 @@ SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
       cube_union      
 ----------------------
  (1, 2, 0),(8, 9, 10)
 (1 row)
 
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
         cube_union         
 ---------------------------
  (1, 2, 0, 0),(4, 2, 0, 0)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
   cube_union   
 ---------------
  (1, 2),(4, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
  cube_union 
 ------------
  (1, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
  cube_union 
 ------------
  (1, 2, 0)
@@ -1325,43 +1325,43 @@ SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
    cube_inter    
 -----------------
  (3, 4),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
   cube_inter   
 ---------------
  (3, 4),(6, 5)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
     cube_inter     
 -------------------
  (13, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
     cube_inter    
 ------------------
  (3, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
  cube_inter 
 ------------
  (10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
  cube_inter 
 ------------
  (1, 2, 3)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
      cube_inter      
 ---------------------
  (5, 6, 3),(1, 2, 3)
@@ -1369,13 +1369,13 @@ SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
  cube_size 
 -----------
         88
 (1 row)
 
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(42,137)'::"cube");
  cube_size 
 -----------
          0
@@ -1383,7 +1383,7 @@ SELECT cube_size('(42,137)'::cube);
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 \copy test_cube from 'data/test_cube.data'
 CREATE INDEX test_cube_ix ON test_cube USING gist (c);
 SELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' ORDER BY c;
diff --git a/contrib/cube/expected/cube_1.out b/contrib/cube/expected/cube_1.out
index c07d61d..4f47c54 100644
--- a/contrib/cube/expected/cube_1.out
+++ b/contrib/cube/expected/cube_1.out
@@ -1,552 +1,552 @@
 --
 --  Test cube datatype
 --
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 --
 -- testing the input and output functions
 --
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1'::cube AS cube;
+SELECT '-1'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1.'::cube AS cube;
+SELECT '1.'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.'::cube AS cube;
+SELECT '-1.'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '.1'::cube AS cube;
+SELECT '.1'::"cube" AS "cube";
  cube  
 -------
  (0.1)
 (1 row)
 
-SELECT '-.1'::cube AS cube;
+SELECT '-.1'::"cube" AS "cube";
   cube  
 --------
  (-0.1)
 (1 row)
 
-SELECT '1.0'::cube AS cube;
+SELECT '1.0'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.0'::cube AS cube;
+SELECT '-1.0'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1e27'::cube AS cube;
+SELECT '1e27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1e27'::cube AS cube;
+SELECT '-1e27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1.0e27'::cube AS cube;
+SELECT '1.0e27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1.0e27'::cube AS cube;
+SELECT '-1.0e27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1e+27'::cube AS cube;
+SELECT '1e+27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1e+27'::cube AS cube;
+SELECT '-1e+27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1.0e+27'::cube AS cube;
+SELECT '1.0e+27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1.0e+27'::cube AS cube;
+SELECT '-1.0e+27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1e-7'::cube AS cube;
+SELECT '1e-7'::"cube" AS "cube";
   cube   
 ---------
  (1e-07)
 (1 row)
 
-SELECT '-1e-7'::cube AS cube;
+SELECT '-1e-7'::"cube" AS "cube";
    cube   
 ----------
  (-1e-07)
 (1 row)
 
-SELECT '1.0e-7'::cube AS cube;
+SELECT '1.0e-7'::"cube" AS "cube";
   cube   
 ---------
  (1e-07)
 (1 row)
 
-SELECT '-1.0e-7'::cube AS cube;
+SELECT '-1.0e-7'::"cube" AS "cube";
    cube   
 ----------
  (-1e-07)
 (1 row)
 
-SELECT '1e-700'::cube AS cube;
+SELECT '1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '-1e-700'::cube AS cube;
+SELECT '-1e-700'::"cube" AS "cube";
  cube 
 ------
  (-0)
 (1 row)
 
-SELECT '1234567890123456'::cube AS cube;
+SELECT '1234567890123456'::"cube" AS "cube";
           cube          
 ------------------------
  (1.23456789012346e+15)
 (1 row)
 
-SELECT '+1234567890123456'::cube AS cube;
+SELECT '+1234567890123456'::"cube" AS "cube";
           cube          
 ------------------------
  (1.23456789012346e+15)
 (1 row)
 
-SELECT '-1234567890123456'::cube AS cube;
+SELECT '-1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (-1.23456789012346e+15)
 (1 row)
 
-SELECT '.1234567890123456'::cube AS cube;
+SELECT '.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '+.1234567890123456'::cube AS cube;
+SELECT '+.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '-.1234567890123456'::"cube" AS "cube";
          cube         
 ----------------------
  (-0.123456789012346)
 (1 row)
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '(1,2)'::cube AS cube;
+SELECT '(1,2)'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '1,2,3,4,5'::cube AS cube;
+SELECT '1,2,3,4,5'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '(0),(1)'::cube AS cube;
+SELECT '(0),(1)'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '[(0),(0)]'::cube AS cube;
+SELECT '[(0),(0)]'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '[(0),(1)]'::cube AS cube;
+SELECT '[(0),(1)]'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
+SELECT ''::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT ''::cube AS cube;
+LINE 1: SELECT ''::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT 'ABC'::cube AS cube;
+SELECT 'ABC'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT 'ABC'::cube AS cube;
+LINE 1: SELECT 'ABC'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "A"
-SELECT '()'::cube AS cube;
+SELECT '()'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '()'::cube AS cube;
+LINE 1: SELECT '()'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[]'::cube AS cube;
+SELECT '[]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[]'::cube AS cube;
+LINE 1: SELECT '[]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[()]'::cube AS cube;
+SELECT '[()]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[()]'::cube AS cube;
+LINE 1: SELECT '[()]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[(1)]'::cube AS cube;
+SELECT '[(1)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1)]'::cube AS cube;
+LINE 1: SELECT '[(1)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),]'::cube AS cube;
+SELECT '[(1),]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),]'::cube AS cube;
+LINE 1: SELECT '[(1),]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),2]'::cube AS cube;
+SELECT '[(1),2]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),2]'::cube AS cube;
+LINE 1: SELECT '[(1),2]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "2"
-SELECT '[(1),(2),(3)]'::cube AS cube;
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2),(3)]'::cube AS cube;
+LINE 1: SELECT '[(1),(2),(3)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '1,'::cube AS cube;
+SELECT '1,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,'::cube AS cube;
+LINE 1: SELECT '1,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,2,'::cube AS cube;
+SELECT '1,2,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2,'::cube AS cube;
+LINE 1: SELECT '1,2,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,,2'::cube AS cube;
+SELECT '1,,2'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,,2'::cube AS cube;
+LINE 1: SELECT '1,,2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,)'::cube AS cube;
+SELECT '(1,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,)'::cube AS cube;
+LINE 1: SELECT '(1,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,2,)'::cube AS cube;
+SELECT '(1,2,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,)'::cube AS cube;
+LINE 1: SELECT '(1,2,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,,2)'::cube AS cube;
+SELECT '(1,,2)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,,2)'::cube AS cube;
+LINE 1: SELECT '(1,,2)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2)],'::cube AS cube;
+LINE 1: SELECT '[(1),(2)],'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2,3),(2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2),(1,2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1),(2),'::cube AS cube; -- 2
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
 ERROR:  bad cube representation
-LINE 1: SELECT '(1),(2),'::cube AS cube;
+LINE 1: SELECT '(1),(2),'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3),(2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2,3),(2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2),(1,2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2),(1,2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)ab'::cube AS cube;
+LINE 1: SELECT '(1,2,3)ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2,3)a'::cube AS cube; -- 5
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)a'::cube AS cube;
+LINE 1: SELECT '(1,2,3)a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2)('::cube AS cube; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2)('::cube AS cube;
+LINE 1: SELECT '(1,2)('::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "("
-SELECT '1,2ab'::cube AS cube; -- 6
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2ab'::cube AS cube;
+LINE 1: SELECT '1,2ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1 e7'::cube AS cube; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1 e7'::cube AS cube;
+LINE 1: SELECT '1 e7'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "e"
-SELECT '1,2a'::cube AS cube; -- 7
+SELECT '1,2a'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2a'::cube AS cube;
+LINE 1: SELECT '1,2a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1..2'::cube AS cube;
+LINE 1: SELECT '1..2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ".2"
 --
 -- Testing building cubes from float8 values
 --
-SELECT cube(0::float8);
+SELECT "cube"(0::float8);
  cube 
 ------
  (0)
 (1 row)
 
-SELECT cube(1::float8);
+SELECT "cube"(1::float8);
  cube 
 ------
  (1)
 (1 row)
 
-SELECT cube(1,2);
+SELECT "cube"(1,2);
   cube   
 ---------
  (1),(2)
 (1 row)
 
-SELECT cube(cube(1,2),3);
+SELECT "cube"("cube"(1,2),3);
      cube      
 ---------------
  (1, 3),(2, 3)
 (1 row)
 
-SELECT cube(cube(1,2),3,4);
+SELECT "cube"("cube"(1,2),3,4);
      cube      
 ---------------
  (1, 3),(2, 4)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 5)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 6)
 (1 row)
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
  cube 
 ------
  (0)
 (1 row)
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
         cube         
 ---------------------
  (0, 1, 2),(3, 4, 5)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
 ERROR:  UR and LL arrays must be of same length
-SELECT cube(NULL::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
  cube 
 ------
  
 (1 row)
 
-SELECT cube('{0,1,2}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
         cube_subset        
 ---------------------------
  (5, 3, 1, 1),(8, 7, 6, 6)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
  cube_subset  
 --------------
  (5, 3, 1, 1)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
      cube     
 --------------
  (5, 6, 7, 8)
 (1 row)
 
-SELECT cube(1.37); -- cube_f8
+SELECT "cube"(1.37); -- cube_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(cube(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(1, 24)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 24)
@@ -555,12 +555,12 @@ SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
 DETAIL:  A cube cannot have more than 100 dimensions.
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
@@ -570,37 +570,37 @@ DETAIL:  A cube cannot have more than 100 dimensions.
 --
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -609,97 +609,97 @@ SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1'::cube   < '2'::cube AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1,1'::cube > '1,2'::cube AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,1'::cube < '1,2'::cube AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -707,235 +707,235 @@ SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contains" (the left operand is the cube that entirely encloses the
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
@@ -943,77 +943,77 @@ SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
  cube_distance 
 ---------------
              4
 (1 row)
 
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
  cube_distance 
 ---------------
            0.5
 (1 row)
 
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
  cube_distance 
 ---------------
              0
 (1 row)
 
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
  cube_distance 
 ---------------
            190
 (1 row)
 
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
   cube_distance   
 ------------------
  140.762210837994
 (1 row)
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
+SELECT "cube"('(1,1.2)'::text);
    cube   
 ----------
  (1, 1.2)
 (1 row)
 
-SELECT cube(NULL);
+SELECT "cube"(NULL);
  cube 
 ------
  
 (1 row)
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
+SELECT cube_dim('(0)'::"cube");
  cube_dim 
 ----------
         1
 (1 row)
 
-SELECT cube_dim('(0,0)'::cube);
+SELECT cube_dim('(0,0)'::"cube");
  cube_dim 
 ----------
         2
 (1 row)
 
-SELECT cube_dim('(0,0,0)'::cube);
+SELECT cube_dim('(0,0,0)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
  cube_dim 
 ----------
         5
@@ -1021,55 +1021,55 @@ SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
 
 -- Test of cube_ll_coord function (retrieves LL coodinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ll_coord 
 ---------------
             -1
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ll_coord 
 ---------------
             -2
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
  cube_ll_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
  cube_ll_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
  cube_ll_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
  cube_ll_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
@@ -1077,55 +1077,55 @@ SELECT cube_ll_coord('(42,137)'::cube, 3);
 
 -- Test of cube_ur_coord function (retrieves UR coodinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
  cube_ur_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
  cube_ur_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
@@ -1133,37 +1133,37 @@ SELECT cube_ur_coord('(42,137)'::cube, 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
+SELECT cube_is_point('(0)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
  cube_is_point 
 ---------------
  f
@@ -1171,121 +1171,121 @@ SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 2);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
  cube_enlarge 
 --------------
  (-2),(2)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 2);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-1, -1),(1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
          cube_enlarge          
 -------------------------------
  (-3, -1, -1, -1),(3, 1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 2);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-4, -3),(3, 8)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
    cube_enlarge   
 ------------------
  (-6, -5),(5, 10)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
   cube_enlarge   
 -----------------
  (-2, -1),(1, 6)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
     cube_enlarge     
 ---------------------
  (-0.5, 1),(-0.5, 4)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
@@ -1293,31 +1293,31 @@ SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
       cube_union      
 ----------------------
  (1, 2, 0),(8, 9, 10)
 (1 row)
 
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
         cube_union         
 ---------------------------
  (1, 2, 0, 0),(4, 2, 0, 0)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
   cube_union   
 ---------------
  (1, 2),(4, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
  cube_union 
 ------------
  (1, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
  cube_union 
 ------------
  (1, 2, 0)
@@ -1325,43 +1325,43 @@ SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
    cube_inter    
 -----------------
  (3, 4),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
   cube_inter   
 ---------------
  (3, 4),(6, 5)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
     cube_inter     
 -------------------
  (13, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
     cube_inter    
 ------------------
  (3, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
  cube_inter 
 ------------
  (10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
  cube_inter 
 ------------
  (1, 2, 3)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
      cube_inter      
 ---------------------
  (5, 6, 3),(1, 2, 3)
@@ -1369,13 +1369,13 @@ SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
  cube_size 
 -----------
         88
 (1 row)
 
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(42,137)'::"cube");
  cube_size 
 -----------
          0
@@ -1383,7 +1383,7 @@ SELECT cube_size('(42,137)'::cube);
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 \copy test_cube from 'data/test_cube.data'
 CREATE INDEX test_cube_ix ON test_cube USING gist (c);
 SELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' ORDER BY c;
diff --git a/contrib/cube/expected/cube_2.out b/contrib/cube/expected/cube_2.out
index 3767d0e..747e9ba 100644
--- a/contrib/cube/expected/cube_2.out
+++ b/contrib/cube/expected/cube_2.out
@@ -1,552 +1,552 @@
 --
 --  Test cube datatype
 --
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 --
 -- testing the input and output functions
 --
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1'::cube AS cube;
+SELECT '-1'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1.'::cube AS cube;
+SELECT '1.'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.'::cube AS cube;
+SELECT '-1.'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '.1'::cube AS cube;
+SELECT '.1'::"cube" AS "cube";
  cube  
 -------
  (0.1)
 (1 row)
 
-SELECT '-.1'::cube AS cube;
+SELECT '-.1'::"cube" AS "cube";
   cube  
 --------
  (-0.1)
 (1 row)
 
-SELECT '1.0'::cube AS cube;
+SELECT '1.0'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.0'::cube AS cube;
+SELECT '-1.0'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1e27'::cube AS cube;
+SELECT '1e27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1e27'::cube AS cube;
+SELECT '-1e27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1.0e27'::cube AS cube;
+SELECT '1.0e27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1.0e27'::cube AS cube;
+SELECT '-1.0e27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1e+27'::cube AS cube;
+SELECT '1e+27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1e+27'::cube AS cube;
+SELECT '-1e+27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1.0e+27'::cube AS cube;
+SELECT '1.0e+27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1.0e+27'::cube AS cube;
+SELECT '-1.0e+27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1e-7'::cube AS cube;
+SELECT '1e-7'::"cube" AS "cube";
    cube   
 ----------
  (1e-007)
 (1 row)
 
-SELECT '-1e-7'::cube AS cube;
+SELECT '-1e-7'::"cube" AS "cube";
    cube    
 -----------
  (-1e-007)
 (1 row)
 
-SELECT '1.0e-7'::cube AS cube;
+SELECT '1.0e-7'::"cube" AS "cube";
    cube   
 ----------
  (1e-007)
 (1 row)
 
-SELECT '-1.0e-7'::cube AS cube;
+SELECT '-1.0e-7'::"cube" AS "cube";
    cube    
 -----------
  (-1e-007)
 (1 row)
 
-SELECT '1e-700'::cube AS cube;
+SELECT '1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '-1e-700'::cube AS cube;
+SELECT '-1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '1234567890123456'::cube AS cube;
+SELECT '1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (1.23456789012346e+015)
 (1 row)
 
-SELECT '+1234567890123456'::cube AS cube;
+SELECT '+1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (1.23456789012346e+015)
 (1 row)
 
-SELECT '-1234567890123456'::cube AS cube;
+SELECT '-1234567890123456'::"cube" AS "cube";
            cube           
 --------------------------
  (-1.23456789012346e+015)
 (1 row)
 
-SELECT '.1234567890123456'::cube AS cube;
+SELECT '.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '+.1234567890123456'::cube AS cube;
+SELECT '+.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '-.1234567890123456'::"cube" AS "cube";
          cube         
 ----------------------
  (-0.123456789012346)
 (1 row)
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '(1,2)'::cube AS cube;
+SELECT '(1,2)'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '1,2,3,4,5'::cube AS cube;
+SELECT '1,2,3,4,5'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '(0),(1)'::cube AS cube;
+SELECT '(0),(1)'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '[(0),(0)]'::cube AS cube;
+SELECT '[(0),(0)]'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '[(0),(1)]'::cube AS cube;
+SELECT '[(0),(1)]'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
+SELECT ''::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT ''::cube AS cube;
+LINE 1: SELECT ''::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT 'ABC'::cube AS cube;
+SELECT 'ABC'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT 'ABC'::cube AS cube;
+LINE 1: SELECT 'ABC'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "A"
-SELECT '()'::cube AS cube;
+SELECT '()'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '()'::cube AS cube;
+LINE 1: SELECT '()'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[]'::cube AS cube;
+SELECT '[]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[]'::cube AS cube;
+LINE 1: SELECT '[]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[()]'::cube AS cube;
+SELECT '[()]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[()]'::cube AS cube;
+LINE 1: SELECT '[()]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[(1)]'::cube AS cube;
+SELECT '[(1)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1)]'::cube AS cube;
+LINE 1: SELECT '[(1)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),]'::cube AS cube;
+SELECT '[(1),]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),]'::cube AS cube;
+LINE 1: SELECT '[(1),]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),2]'::cube AS cube;
+SELECT '[(1),2]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),2]'::cube AS cube;
+LINE 1: SELECT '[(1),2]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "2"
-SELECT '[(1),(2),(3)]'::cube AS cube;
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2),(3)]'::cube AS cube;
+LINE 1: SELECT '[(1),(2),(3)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '1,'::cube AS cube;
+SELECT '1,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,'::cube AS cube;
+LINE 1: SELECT '1,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,2,'::cube AS cube;
+SELECT '1,2,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2,'::cube AS cube;
+LINE 1: SELECT '1,2,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,,2'::cube AS cube;
+SELECT '1,,2'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,,2'::cube AS cube;
+LINE 1: SELECT '1,,2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,)'::cube AS cube;
+SELECT '(1,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,)'::cube AS cube;
+LINE 1: SELECT '(1,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,2,)'::cube AS cube;
+SELECT '(1,2,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,)'::cube AS cube;
+LINE 1: SELECT '(1,2,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,,2)'::cube AS cube;
+SELECT '(1,,2)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,,2)'::cube AS cube;
+LINE 1: SELECT '(1,,2)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2)],'::cube AS cube;
+LINE 1: SELECT '[(1),(2)],'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2,3),(2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2),(1,2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1),(2),'::cube AS cube; -- 2
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
 ERROR:  bad cube representation
-LINE 1: SELECT '(1),(2),'::cube AS cube;
+LINE 1: SELECT '(1),(2),'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3),(2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2,3),(2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2),(1,2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2),(1,2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)ab'::cube AS cube;
+LINE 1: SELECT '(1,2,3)ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2,3)a'::cube AS cube; -- 5
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)a'::cube AS cube;
+LINE 1: SELECT '(1,2,3)a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2)('::cube AS cube; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2)('::cube AS cube;
+LINE 1: SELECT '(1,2)('::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "("
-SELECT '1,2ab'::cube AS cube; -- 6
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2ab'::cube AS cube;
+LINE 1: SELECT '1,2ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1 e7'::cube AS cube; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1 e7'::cube AS cube;
+LINE 1: SELECT '1 e7'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "e"
-SELECT '1,2a'::cube AS cube; -- 7
+SELECT '1,2a'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2a'::cube AS cube;
+LINE 1: SELECT '1,2a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1..2'::cube AS cube;
+LINE 1: SELECT '1..2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ".2"
 --
 -- Testing building cubes from float8 values
 --
-SELECT cube(0::float8);
+SELECT "cube"(0::float8);
  cube 
 ------
  (0)
 (1 row)
 
-SELECT cube(1::float8);
+SELECT "cube"(1::float8);
  cube 
 ------
  (1)
 (1 row)
 
-SELECT cube(1,2);
+SELECT "cube"(1,2);
   cube   
 ---------
  (1),(2)
 (1 row)
 
-SELECT cube(cube(1,2),3);
+SELECT "cube"("cube"(1,2),3);
      cube      
 ---------------
  (1, 3),(2, 3)
 (1 row)
 
-SELECT cube(cube(1,2),3,4);
+SELECT "cube"("cube"(1,2),3,4);
      cube      
 ---------------
  (1, 3),(2, 4)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 5)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 6)
 (1 row)
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
  cube 
 ------
  (0)
 (1 row)
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
         cube         
 ---------------------
  (0, 1, 2),(3, 4, 5)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
 ERROR:  UR and LL arrays must be of same length
-SELECT cube(NULL::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
  cube 
 ------
  
 (1 row)
 
-SELECT cube('{0,1,2}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
         cube_subset        
 ---------------------------
  (5, 3, 1, 1),(8, 7, 6, 6)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
  cube_subset  
 --------------
  (5, 3, 1, 1)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
      cube     
 --------------
  (5, 6, 7, 8)
 (1 row)
 
-SELECT cube(1.37); -- cube_f8
+SELECT "cube"(1.37); -- cube_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(cube(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(1, 24)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 24)
@@ -555,12 +555,12 @@ SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
 DETAIL:  A cube cannot have more than 100 dimensions.
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
@@ -570,37 +570,37 @@ DETAIL:  A cube cannot have more than 100 dimensions.
 --
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -609,97 +609,97 @@ SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1'::cube   < '2'::cube AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1,1'::cube > '1,2'::cube AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,1'::cube < '1,2'::cube AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -707,235 +707,235 @@ SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contains" (the left operand is the cube that entirely encloses the
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
@@ -943,77 +943,77 @@ SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
  cube_distance 
 ---------------
              4
 (1 row)
 
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
  cube_distance 
 ---------------
            0.5
 (1 row)
 
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
  cube_distance 
 ---------------
              0
 (1 row)
 
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
  cube_distance 
 ---------------
            190
 (1 row)
 
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
   cube_distance   
 ------------------
  140.762210837994
 (1 row)
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
+SELECT "cube"('(1,1.2)'::text);
    cube   
 ----------
  (1, 1.2)
 (1 row)
 
-SELECT cube(NULL);
+SELECT "cube"(NULL);
  cube 
 ------
  
 (1 row)
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
+SELECT cube_dim('(0)'::"cube");
  cube_dim 
 ----------
         1
 (1 row)
 
-SELECT cube_dim('(0,0)'::cube);
+SELECT cube_dim('(0,0)'::"cube");
  cube_dim 
 ----------
         2
 (1 row)
 
-SELECT cube_dim('(0,0,0)'::cube);
+SELECT cube_dim('(0,0,0)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
  cube_dim 
 ----------
         5
@@ -1021,55 +1021,55 @@ SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
 
 -- Test of cube_ll_coord function (retrieves LL coodinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ll_coord 
 ---------------
             -1
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ll_coord 
 ---------------
             -2
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
  cube_ll_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
  cube_ll_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
  cube_ll_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
  cube_ll_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
@@ -1077,55 +1077,55 @@ SELECT cube_ll_coord('(42,137)'::cube, 3);
 
 -- Test of cube_ur_coord function (retrieves UR coodinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
  cube_ur_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
  cube_ur_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
@@ -1133,37 +1133,37 @@ SELECT cube_ur_coord('(42,137)'::cube, 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
+SELECT cube_is_point('(0)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
  cube_is_point 
 ---------------
  f
@@ -1171,121 +1171,121 @@ SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 2);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
  cube_enlarge 
 --------------
  (-2),(2)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 2);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-1, -1),(1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
          cube_enlarge          
 -------------------------------
  (-3, -1, -1, -1),(3, 1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 2);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-4, -3),(3, 8)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
    cube_enlarge   
 ------------------
  (-6, -5),(5, 10)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
   cube_enlarge   
 -----------------
  (-2, -1),(1, 6)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
     cube_enlarge     
 ---------------------
  (-0.5, 1),(-0.5, 4)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
@@ -1293,31 +1293,31 @@ SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
       cube_union      
 ----------------------
  (1, 2, 0),(8, 9, 10)
 (1 row)
 
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
         cube_union         
 ---------------------------
  (1, 2, 0, 0),(4, 2, 0, 0)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
   cube_union   
 ---------------
  (1, 2),(4, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
  cube_union 
 ------------
  (1, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
  cube_union 
 ------------
  (1, 2, 0)
@@ -1325,43 +1325,43 @@ SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
    cube_inter    
 -----------------
  (3, 4),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
   cube_inter   
 ---------------
  (3, 4),(6, 5)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
     cube_inter     
 -------------------
  (13, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
     cube_inter    
 ------------------
  (3, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
  cube_inter 
 ------------
  (10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
  cube_inter 
 ------------
  (1, 2, 3)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
      cube_inter      
 ---------------------
  (5, 6, 3),(1, 2, 3)
@@ -1369,13 +1369,13 @@ SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
  cube_size 
 -----------
         88
 (1 row)
 
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(42,137)'::"cube");
  cube_size 
 -----------
          0
@@ -1383,7 +1383,7 @@ SELECT cube_size('(42,137)'::cube);
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 \copy test_cube from 'data/test_cube.data'
 CREATE INDEX test_cube_ix ON test_cube USING gist (c);
 SELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' ORDER BY c;
diff --git a/contrib/cube/expected/cube_3.out b/contrib/cube/expected/cube_3.out
index 2aa42be..33baec1 100644
--- a/contrib/cube/expected/cube_3.out
+++ b/contrib/cube/expected/cube_3.out
@@ -1,552 +1,552 @@
 --
 --  Test cube datatype
 --
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 --
 -- testing the input and output functions
 --
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1'::cube AS cube;
+SELECT '-1'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1.'::cube AS cube;
+SELECT '1.'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.'::cube AS cube;
+SELECT '-1.'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '.1'::cube AS cube;
+SELECT '.1'::"cube" AS "cube";
  cube  
 -------
  (0.1)
 (1 row)
 
-SELECT '-.1'::cube AS cube;
+SELECT '-.1'::"cube" AS "cube";
   cube  
 --------
  (-0.1)
 (1 row)
 
-SELECT '1.0'::cube AS cube;
+SELECT '1.0'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.0'::cube AS cube;
+SELECT '-1.0'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1e27'::cube AS cube;
+SELECT '1e27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1e27'::cube AS cube;
+SELECT '-1e27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1.0e27'::cube AS cube;
+SELECT '1.0e27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1.0e27'::cube AS cube;
+SELECT '-1.0e27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1e+27'::cube AS cube;
+SELECT '1e+27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1e+27'::cube AS cube;
+SELECT '-1e+27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1.0e+27'::cube AS cube;
+SELECT '1.0e+27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1.0e+27'::cube AS cube;
+SELECT '-1.0e+27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1e-7'::cube AS cube;
+SELECT '1e-7'::"cube" AS "cube";
    cube   
 ----------
  (1e-007)
 (1 row)
 
-SELECT '-1e-7'::cube AS cube;
+SELECT '-1e-7'::"cube" AS "cube";
    cube    
 -----------
  (-1e-007)
 (1 row)
 
-SELECT '1.0e-7'::cube AS cube;
+SELECT '1.0e-7'::"cube" AS "cube";
    cube   
 ----------
  (1e-007)
 (1 row)
 
-SELECT '-1.0e-7'::cube AS cube;
+SELECT '-1.0e-7'::"cube" AS "cube";
    cube    
 -----------
  (-1e-007)
 (1 row)
 
-SELECT '1e-700'::cube AS cube;
+SELECT '1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '-1e-700'::cube AS cube;
+SELECT '-1e-700'::"cube" AS "cube";
  cube 
 ------
  (-0)
 (1 row)
 
-SELECT '1234567890123456'::cube AS cube;
+SELECT '1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (1.23456789012346e+015)
 (1 row)
 
-SELECT '+1234567890123456'::cube AS cube;
+SELECT '+1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (1.23456789012346e+015)
 (1 row)
 
-SELECT '-1234567890123456'::cube AS cube;
+SELECT '-1234567890123456'::"cube" AS "cube";
            cube           
 --------------------------
  (-1.23456789012346e+015)
 (1 row)
 
-SELECT '.1234567890123456'::cube AS cube;
+SELECT '.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '+.1234567890123456'::cube AS cube;
+SELECT '+.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '-.1234567890123456'::"cube" AS "cube";
          cube         
 ----------------------
  (-0.123456789012346)
 (1 row)
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '(1,2)'::cube AS cube;
+SELECT '(1,2)'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '1,2,3,4,5'::cube AS cube;
+SELECT '1,2,3,4,5'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '(0),(1)'::cube AS cube;
+SELECT '(0),(1)'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '[(0),(0)]'::cube AS cube;
+SELECT '[(0),(0)]'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '[(0),(1)]'::cube AS cube;
+SELECT '[(0),(1)]'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
+SELECT ''::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT ''::cube AS cube;
+LINE 1: SELECT ''::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT 'ABC'::cube AS cube;
+SELECT 'ABC'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT 'ABC'::cube AS cube;
+LINE 1: SELECT 'ABC'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "A"
-SELECT '()'::cube AS cube;
+SELECT '()'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '()'::cube AS cube;
+LINE 1: SELECT '()'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[]'::cube AS cube;
+SELECT '[]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[]'::cube AS cube;
+LINE 1: SELECT '[]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[()]'::cube AS cube;
+SELECT '[()]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[()]'::cube AS cube;
+LINE 1: SELECT '[()]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[(1)]'::cube AS cube;
+SELECT '[(1)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1)]'::cube AS cube;
+LINE 1: SELECT '[(1)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),]'::cube AS cube;
+SELECT '[(1),]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),]'::cube AS cube;
+LINE 1: SELECT '[(1),]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),2]'::cube AS cube;
+SELECT '[(1),2]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),2]'::cube AS cube;
+LINE 1: SELECT '[(1),2]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "2"
-SELECT '[(1),(2),(3)]'::cube AS cube;
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2),(3)]'::cube AS cube;
+LINE 1: SELECT '[(1),(2),(3)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '1,'::cube AS cube;
+SELECT '1,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,'::cube AS cube;
+LINE 1: SELECT '1,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,2,'::cube AS cube;
+SELECT '1,2,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2,'::cube AS cube;
+LINE 1: SELECT '1,2,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,,2'::cube AS cube;
+SELECT '1,,2'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,,2'::cube AS cube;
+LINE 1: SELECT '1,,2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,)'::cube AS cube;
+SELECT '(1,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,)'::cube AS cube;
+LINE 1: SELECT '(1,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,2,)'::cube AS cube;
+SELECT '(1,2,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,)'::cube AS cube;
+LINE 1: SELECT '(1,2,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,,2)'::cube AS cube;
+SELECT '(1,,2)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,,2)'::cube AS cube;
+LINE 1: SELECT '(1,,2)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2)],'::cube AS cube;
+LINE 1: SELECT '[(1),(2)],'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2,3),(2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2),(1,2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1),(2),'::cube AS cube; -- 2
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
 ERROR:  bad cube representation
-LINE 1: SELECT '(1),(2),'::cube AS cube;
+LINE 1: SELECT '(1),(2),'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3),(2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2,3),(2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2),(1,2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2),(1,2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)ab'::cube AS cube;
+LINE 1: SELECT '(1,2,3)ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2,3)a'::cube AS cube; -- 5
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)a'::cube AS cube;
+LINE 1: SELECT '(1,2,3)a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2)('::cube AS cube; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2)('::cube AS cube;
+LINE 1: SELECT '(1,2)('::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "("
-SELECT '1,2ab'::cube AS cube; -- 6
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2ab'::cube AS cube;
+LINE 1: SELECT '1,2ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1 e7'::cube AS cube; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1 e7'::cube AS cube;
+LINE 1: SELECT '1 e7'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "e"
-SELECT '1,2a'::cube AS cube; -- 7
+SELECT '1,2a'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2a'::cube AS cube;
+LINE 1: SELECT '1,2a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1..2'::cube AS cube;
+LINE 1: SELECT '1..2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ".2"
 --
 -- Testing building cubes from float8 values
 --
-SELECT cube(0::float8);
+SELECT "cube"(0::float8);
  cube 
 ------
  (0)
 (1 row)
 
-SELECT cube(1::float8);
+SELECT "cube"(1::float8);
  cube 
 ------
  (1)
 (1 row)
 
-SELECT cube(1,2);
+SELECT "cube"(1,2);
   cube   
 ---------
  (1),(2)
 (1 row)
 
-SELECT cube(cube(1,2),3);
+SELECT "cube"("cube"(1,2),3);
      cube      
 ---------------
  (1, 3),(2, 3)
 (1 row)
 
-SELECT cube(cube(1,2),3,4);
+SELECT "cube"("cube"(1,2),3,4);
      cube      
 ---------------
  (1, 3),(2, 4)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 5)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 6)
 (1 row)
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
  cube 
 ------
  (0)
 (1 row)
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
         cube         
 ---------------------
  (0, 1, 2),(3, 4, 5)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
 ERROR:  UR and LL arrays must be of same length
-SELECT cube(NULL::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
  cube 
 ------
  
 (1 row)
 
-SELECT cube('{0,1,2}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
         cube_subset        
 ---------------------------
  (5, 3, 1, 1),(8, 7, 6, 6)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
  cube_subset  
 --------------
  (5, 3, 1, 1)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
      cube     
 --------------
  (5, 6, 7, 8)
 (1 row)
 
-SELECT cube(1.37); -- cube_f8
+SELECT "cube"(1.37); -- cube_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(cube(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(1, 24)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 24)
@@ -555,12 +555,12 @@ SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
 DETAIL:  A cube cannot have more than 100 dimensions.
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
@@ -570,37 +570,37 @@ DETAIL:  A cube cannot have more than 100 dimensions.
 --
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -609,97 +609,97 @@ SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1'::cube   < '2'::cube AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1,1'::cube > '1,2'::cube AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,1'::cube < '1,2'::cube AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -707,235 +707,235 @@ SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contains" (the left operand is the cube that entirely encloses the
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
@@ -943,77 +943,77 @@ SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
  cube_distance 
 ---------------
              4
 (1 row)
 
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
  cube_distance 
 ---------------
            0.5
 (1 row)
 
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
  cube_distance 
 ---------------
              0
 (1 row)
 
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
  cube_distance 
 ---------------
            190
 (1 row)
 
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
   cube_distance   
 ------------------
  140.762210837994
 (1 row)
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
+SELECT "cube"('(1,1.2)'::text);
    cube   
 ----------
  (1, 1.2)
 (1 row)
 
-SELECT cube(NULL);
+SELECT "cube"(NULL);
  cube 
 ------
  
 (1 row)
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
+SELECT cube_dim('(0)'::"cube");
  cube_dim 
 ----------
         1
 (1 row)
 
-SELECT cube_dim('(0,0)'::cube);
+SELECT cube_dim('(0,0)'::"cube");
  cube_dim 
 ----------
         2
 (1 row)
 
-SELECT cube_dim('(0,0,0)'::cube);
+SELECT cube_dim('(0,0,0)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
  cube_dim 
 ----------
         5
@@ -1021,55 +1021,55 @@ SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
 
 -- Test of cube_ll_coord function (retrieves LL coodinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ll_coord 
 ---------------
             -1
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ll_coord 
 ---------------
             -2
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
  cube_ll_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
  cube_ll_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
  cube_ll_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
  cube_ll_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
@@ -1077,55 +1077,55 @@ SELECT cube_ll_coord('(42,137)'::cube, 3);
 
 -- Test of cube_ur_coord function (retrieves UR coodinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
  cube_ur_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
  cube_ur_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
@@ -1133,37 +1133,37 @@ SELECT cube_ur_coord('(42,137)'::cube, 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
+SELECT cube_is_point('(0)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
  cube_is_point 
 ---------------
  f
@@ -1171,121 +1171,121 @@ SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 2);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
  cube_enlarge 
 --------------
  (-2),(2)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 2);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-1, -1),(1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
          cube_enlarge          
 -------------------------------
  (-3, -1, -1, -1),(3, 1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 2);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-4, -3),(3, 8)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
    cube_enlarge   
 ------------------
  (-6, -5),(5, 10)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
   cube_enlarge   
 -----------------
  (-2, -1),(1, 6)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
     cube_enlarge     
 ---------------------
  (-0.5, 1),(-0.5, 4)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
@@ -1293,31 +1293,31 @@ SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
       cube_union      
 ----------------------
  (1, 2, 0),(8, 9, 10)
 (1 row)
 
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
         cube_union         
 ---------------------------
  (1, 2, 0, 0),(4, 2, 0, 0)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
   cube_union   
 ---------------
  (1, 2),(4, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
  cube_union 
 ------------
  (1, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
  cube_union 
 ------------
  (1, 2, 0)
@@ -1325,43 +1325,43 @@ SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
    cube_inter    
 -----------------
  (3, 4),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
   cube_inter   
 ---------------
  (3, 4),(6, 5)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
     cube_inter     
 -------------------
  (13, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
     cube_inter    
 ------------------
  (3, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
  cube_inter 
 ------------
  (10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
  cube_inter 
 ------------
  (1, 2, 3)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
      cube_inter      
 ---------------------
  (5, 6, 3),(1, 2, 3)
@@ -1369,13 +1369,13 @@ SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
  cube_size 
 -----------
         88
 (1 row)
 
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(42,137)'::"cube");
  cube_size 
 -----------
          0
@@ -1383,7 +1383,7 @@ SELECT cube_size('(42,137)'::cube);
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 \copy test_cube from 'data/test_cube.data'
 CREATE INDEX test_cube_ix ON test_cube USING gist (c);
 SELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' ORDER BY c;
diff --git a/contrib/cube/sql/cube.sql b/contrib/cube/sql/cube.sql
index d58974c..da80472 100644
--- a/contrib/cube/sql/cube.sql
+++ b/contrib/cube/sql/cube.sql
@@ -2,141 +2,141 @@
 --  Test cube datatype
 --
 
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 
 --
 -- testing the input and output functions
 --
 
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
-SELECT '-1'::cube AS cube;
-SELECT '1.'::cube AS cube;
-SELECT '-1.'::cube AS cube;
-SELECT '.1'::cube AS cube;
-SELECT '-.1'::cube AS cube;
-SELECT '1.0'::cube AS cube;
-SELECT '-1.0'::cube AS cube;
-SELECT '1e27'::cube AS cube;
-SELECT '-1e27'::cube AS cube;
-SELECT '1.0e27'::cube AS cube;
-SELECT '-1.0e27'::cube AS cube;
-SELECT '1e+27'::cube AS cube;
-SELECT '-1e+27'::cube AS cube;
-SELECT '1.0e+27'::cube AS cube;
-SELECT '-1.0e+27'::cube AS cube;
-SELECT '1e-7'::cube AS cube;
-SELECT '-1e-7'::cube AS cube;
-SELECT '1.0e-7'::cube AS cube;
-SELECT '-1.0e-7'::cube AS cube;
-SELECT '1e-700'::cube AS cube;
-SELECT '-1e-700'::cube AS cube;
-SELECT '1234567890123456'::cube AS cube;
-SELECT '+1234567890123456'::cube AS cube;
-SELECT '-1234567890123456'::cube AS cube;
-SELECT '.1234567890123456'::cube AS cube;
-SELECT '+.1234567890123456'::cube AS cube;
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
+SELECT '-1'::"cube" AS "cube";
+SELECT '1.'::"cube" AS "cube";
+SELECT '-1.'::"cube" AS "cube";
+SELECT '.1'::"cube" AS "cube";
+SELECT '-.1'::"cube" AS "cube";
+SELECT '1.0'::"cube" AS "cube";
+SELECT '-1.0'::"cube" AS "cube";
+SELECT '1e27'::"cube" AS "cube";
+SELECT '-1e27'::"cube" AS "cube";
+SELECT '1.0e27'::"cube" AS "cube";
+SELECT '-1.0e27'::"cube" AS "cube";
+SELECT '1e+27'::"cube" AS "cube";
+SELECT '-1e+27'::"cube" AS "cube";
+SELECT '1.0e+27'::"cube" AS "cube";
+SELECT '-1.0e+27'::"cube" AS "cube";
+SELECT '1e-7'::"cube" AS "cube";
+SELECT '-1e-7'::"cube" AS "cube";
+SELECT '1.0e-7'::"cube" AS "cube";
+SELECT '-1.0e-7'::"cube" AS "cube";
+SELECT '1e-700'::"cube" AS "cube";
+SELECT '-1e-700'::"cube" AS "cube";
+SELECT '1234567890123456'::"cube" AS "cube";
+SELECT '+1234567890123456'::"cube" AS "cube";
+SELECT '-1234567890123456'::"cube" AS "cube";
+SELECT '.1234567890123456'::"cube" AS "cube";
+SELECT '+.1234567890123456'::"cube" AS "cube";
+SELECT '-.1234567890123456'::"cube" AS "cube";
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
-SELECT '(1,2)'::cube AS cube;
-SELECT '1,2,3,4,5'::cube AS cube;
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
+SELECT '(1,2)'::"cube" AS "cube";
+SELECT '1,2,3,4,5'::"cube" AS "cube";
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
-SELECT '(0),(1)'::cube AS cube;
-SELECT '[(0),(0)]'::cube AS cube;
-SELECT '[(0),(1)]'::cube AS cube;
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
+SELECT '(0),(1)'::"cube" AS "cube";
+SELECT '[(0),(0)]'::"cube" AS "cube";
+SELECT '[(0),(1)]'::"cube" AS "cube";
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
-SELECT 'ABC'::cube AS cube;
-SELECT '()'::cube AS cube;
-SELECT '[]'::cube AS cube;
-SELECT '[()]'::cube AS cube;
-SELECT '[(1)]'::cube AS cube;
-SELECT '[(1),]'::cube AS cube;
-SELECT '[(1),2]'::cube AS cube;
-SELECT '[(1),(2),(3)]'::cube AS cube;
-SELECT '1,'::cube AS cube;
-SELECT '1,2,'::cube AS cube;
-SELECT '1,,2'::cube AS cube;
-SELECT '(1,)'::cube AS cube;
-SELECT '(1,2,)'::cube AS cube;
-SELECT '(1,,2)'::cube AS cube;
+SELECT ''::"cube" AS "cube";
+SELECT 'ABC'::"cube" AS "cube";
+SELECT '()'::"cube" AS "cube";
+SELECT '[]'::"cube" AS "cube";
+SELECT '[()]'::"cube" AS "cube";
+SELECT '[(1)]'::"cube" AS "cube";
+SELECT '[(1),]'::"cube" AS "cube";
+SELECT '[(1),2]'::"cube" AS "cube";
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
+SELECT '1,'::"cube" AS "cube";
+SELECT '1,2,'::"cube" AS "cube";
+SELECT '1,,2'::"cube" AS "cube";
+SELECT '(1,)'::"cube" AS "cube";
+SELECT '(1,2,)'::"cube" AS "cube";
+SELECT '(1,,2)'::"cube" AS "cube";
 
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
-SELECT '(1),(2),'::cube AS cube; -- 2
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
-SELECT '(1,2,3)a'::cube AS cube; -- 5
-SELECT '(1,2)('::cube AS cube; -- 5
-SELECT '1,2ab'::cube AS cube; -- 6
-SELECT '1 e7'::cube AS cube; -- 6
-SELECT '1,2a'::cube AS cube; -- 7
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
+SELECT '1,2a'::"cube" AS "cube"; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 
 --
 -- Testing building cubes from float8 values
 --
 
-SELECT cube(0::float8);
-SELECT cube(1::float8);
-SELECT cube(1,2);
-SELECT cube(cube(1,2),3);
-SELECT cube(cube(1,2),3,4);
-SELECT cube(cube(cube(1,2),3,4),5);
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"(0::float8);
+SELECT "cube"(1::float8);
+SELECT "cube"(1,2);
+SELECT "cube"("cube"(1,2),3);
+SELECT "cube"("cube"(1,2),3,4);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
 
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
-SELECT cube(NULL::float[], '{3}'::float[]);
-SELECT cube('{0,1,2}'::float[]);
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
-SELECT cube(1.37); -- cube_f8
-SELECT cube(1.37, 1.37); -- cube_f8_f8
-SELECT cube(cube(1,1), 42); -- cube_c_f8
-SELECT cube(cube(1,2), 42); -- cube_c_f8
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"(1.37); -- cube_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
 
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
 
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 
 --
 -- testing the  operators
@@ -144,190 +144,190 @@ select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
 
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
 
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
-SELECT '1'::cube   < '2'::cube AS bool;
-SELECT '1,1'::cube > '1,2'::cube AS bool;
-SELECT '1,1'::cube < '1,2'::cube AS bool;
-
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
+
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
 
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
 
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
-
-
--- "contains" (the left operand is the cube that entirely encloses the
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
+
+
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
-SELECT cube(NULL);
+SELECT "cube"('(1,1.2)'::text);
+SELECT "cube"(NULL);
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
-SELECT cube_dim('(0,0)'::cube);
-SELECT cube_dim('(0,0,0)'::cube);
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(0)'::"cube");
+SELECT cube_dim('(0,0)'::"cube");
+SELECT cube_dim('(0,0,0)'::"cube");
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
 
 -- Test of cube_ll_coord function (retrieves LL coodinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
-SELECT cube_ll_coord('(42,137)'::cube, 1);
-SELECT cube_ll_coord('(42,137)'::cube, 2);
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
 
 -- Test of cube_ur_coord function (retrieves UR coodinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
-SELECT cube_ur_coord('(42,137)'::cube, 1);
-SELECT cube_ur_coord('(42,137)'::cube, 2);
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
-SELECT cube_is_point('(0,1,2)'::cube);
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0)'::"cube");
+SELECT cube_is_point('(0,1,2)'::"cube");
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
-SELECT cube_enlarge('(0)'::cube, 0, 1);
-SELECT cube_enlarge('(0)'::cube, 0, 2);
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
-SELECT cube_enlarge('(0)'::cube, 1, 0);
-SELECT cube_enlarge('(0)'::cube, 1, 1);
-SELECT cube_enlarge('(0)'::cube, 1, 2);
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
-SELECT cube_enlarge('(0)'::cube, -1, 0);
-SELECT cube_enlarge('(0)'::cube, -1, 1);
-SELECT cube_enlarge('(0)'::cube, -1, 2);
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
+SELECT cube_size('(42,137)'::"cube");
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 
 \copy test_cube from 'data/test_cube.data'
 
diff --git a/contrib/earthdistance/earthdistance--1.0.sql b/contrib/earthdistance/earthdistance--1.0.sql
index 4af9062..ad22f65 100644
--- a/contrib/earthdistance/earthdistance--1.0.sql
+++ b/contrib/earthdistance/earthdistance--1.0.sql
@@ -27,10 +27,10 @@ AS 'SELECT ''6378168''::float8';
 -- and that the point must be very near the surface of the sphere
 -- centered about the origin with the radius of the earth.
 
-CREATE DOMAIN earth AS cube
+CREATE DOMAIN earth AS "cube"
   CONSTRAINT not_point check(cube_is_point(value))
   CONSTRAINT not_3d check(cube_dim(value) <= 3)
-  CONSTRAINT on_surface check(abs(cube_distance(value, '(0)'::cube) /
+  CONSTRAINT on_surface check(abs(cube_distance(value, '(0)'::"cube") /
   earth() - 1) < '10e-7'::float8);
 
 CREATE FUNCTION sec_to_gc(float8)
@@ -49,7 +49,7 @@ CREATE FUNCTION ll_to_earth(float8, float8)
 RETURNS earth
 LANGUAGE SQL
 IMMUTABLE STRICT
-AS 'SELECT cube(cube(cube(earth()*cos(radians($1))*cos(radians($2))),earth()*cos(radians($1))*sin(radians($2))),earth()*sin(radians($1)))::earth';
+AS 'SELECT "cube"("cube"("cube"(earth()*cos(radians($1))*cos(radians($2))),earth()*cos(radians($1))*sin(radians($2))),earth()*sin(radians($1)))::earth';
 
 CREATE FUNCTION latitude(earth)
 RETURNS float8
@@ -70,7 +70,7 @@ IMMUTABLE STRICT
 AS 'SELECT sec_to_gc(cube_distance($1, $2))';
 
 CREATE FUNCTION earth_box(earth, float8)
-RETURNS cube
+RETURNS "cube"
 LANGUAGE SQL
 IMMUTABLE STRICT
 AS 'SELECT cube_enlarge($1, gc_to_sec($2), 3)';
diff --git a/contrib/earthdistance/expected/earthdistance.out b/contrib/earthdistance/expected/earthdistance.out
index 9bd556f..f99276f 100644
--- a/contrib/earthdistance/expected/earthdistance.out
+++ b/contrib/earthdistance/expected/earthdistance.out
@@ -9,7 +9,7 @@
 --
 CREATE EXTENSION earthdistance;  -- fail, must install cube first
 ERROR:  required extension "cube" is not installed
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 CREATE EXTENSION earthdistance;
 --
 -- The radius of the Earth we are using.
@@ -892,7 +892,7 @@ SELECT cube_dim(ll_to_earth(0,0)) <= 3;
  t
 (1 row)
 
-SELECT abs(cube_distance(ll_to_earth(0,0), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(0,0), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
  ?column? 
 ----------
@@ -910,7 +910,7 @@ SELECT cube_dim(ll_to_earth(30,60)) <= 3;
  t
 (1 row)
 
-SELECT abs(cube_distance(ll_to_earth(30,60), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(30,60), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
  ?column? 
 ----------
@@ -928,7 +928,7 @@ SELECT cube_dim(ll_to_earth(60,90)) <= 3;
  t
 (1 row)
 
-SELECT abs(cube_distance(ll_to_earth(60,90), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(60,90), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
  ?column? 
 ----------
@@ -946,7 +946,7 @@ SELECT cube_dim(ll_to_earth(-30,-90)) <= 3;
  t
 (1 row)
 
-SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
  ?column? 
 ----------
@@ -959,35 +959,35 @@ SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::cube) / earth() - 1) <
 -- list what's installed
 \dT
                                               List of data types
- Schema | Name  |                                         Description                                         
---------+-------+---------------------------------------------------------------------------------------------
- public | cube  | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
- public | earth | 
+ Schema |  Name  |                                         Description                                         
+--------+--------+---------------------------------------------------------------------------------------------
+ public | "cube" | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
+ public | earth  | 
 (2 rows)
 
-drop extension cube;  -- fail, earthdistance requires it
+drop extension "cube";  -- fail, earthdistance requires it
 ERROR:  cannot drop extension cube because other objects depend on it
 DETAIL:  extension earthdistance depends on extension cube
 HINT:  Use DROP ... CASCADE to drop the dependent objects too.
 drop extension earthdistance;
-drop type cube;  -- fail, extension cube requires it
-ERROR:  cannot drop type cube because extension cube requires it
+drop type "cube";  -- fail, extension cube requires it
+ERROR:  cannot drop type "cube" because extension cube requires it
 HINT:  You can drop extension cube instead.
 -- list what's installed
 \dT
-                                             List of data types
- Schema | Name |                                         Description                                         
---------+------+---------------------------------------------------------------------------------------------
- public | cube | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
+                                              List of data types
+ Schema |  Name  |                                         Description                                         
+--------+--------+---------------------------------------------------------------------------------------------
+ public | "cube" | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
 (1 row)
 
-create table foo (f1 cube, f2 int);
-drop extension cube;  -- fail, foo.f1 requires it
+create table foo (f1 "cube", f2 int);
+drop extension "cube";  -- fail, foo.f1 requires it
 ERROR:  cannot drop extension cube because other objects depend on it
-DETAIL:  table foo column f1 depends on type cube
+DETAIL:  table foo column f1 depends on type "cube"
 HINT:  Use DROP ... CASCADE to drop the dependent objects too.
 drop table foo;
-drop extension cube;
+drop extension "cube";
 -- list what's installed
 \dT
      List of data types
@@ -1008,7 +1008,7 @@ drop extension cube;
 (0 rows)
 
 create schema c;
-create extension cube with schema c;
+create extension "cube" with schema c;
 -- list what's installed
 \dT public.*
      List of data types
@@ -1029,23 +1029,23 @@ create extension cube with schema c;
 (0 rows)
 
 \dT c.*
-                                              List of data types
- Schema |  Name  |                                         Description                                         
---------+--------+---------------------------------------------------------------------------------------------
- c      | c.cube | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
+                                               List of data types
+ Schema |   Name   |                                         Description                                         
+--------+----------+---------------------------------------------------------------------------------------------
+ c      | c."cube" | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
 (1 row)
 
-create table foo (f1 c.cube, f2 int);
-drop extension cube;  -- fail, foo.f1 requires it
+create table foo (f1 c."cube", f2 int);
+drop extension "cube";  -- fail, foo.f1 requires it
 ERROR:  cannot drop extension cube because other objects depend on it
-DETAIL:  table foo column f1 depends on type c.cube
+DETAIL:  table foo column f1 depends on type c."cube"
 HINT:  Use DROP ... CASCADE to drop the dependent objects too.
 drop schema c;  -- fail, cube requires it
 ERROR:  cannot drop schema c because other objects depend on it
 DETAIL:  extension cube depends on schema c
-table foo column f1 depends on type c.cube
+table foo column f1 depends on type c."cube"
 HINT:  Use DROP ... CASCADE to drop the dependent objects too.
-drop extension cube cascade;
+drop extension "cube" cascade;
 NOTICE:  drop cascades to table foo column f1
 \d foo
       Table "public.foo"
diff --git a/contrib/earthdistance/sql/earthdistance.sql b/contrib/earthdistance/sql/earthdistance.sql
index 8604502..35dd9b8 100644
--- a/contrib/earthdistance/sql/earthdistance.sql
+++ b/contrib/earthdistance/sql/earthdistance.sql
@@ -9,7 +9,7 @@
 --
 
 CREATE EXTENSION earthdistance;  -- fail, must install cube first
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 CREATE EXTENSION earthdistance;
 
 --
@@ -284,19 +284,19 @@ SELECT earth_box(ll_to_earth(90,180),
 
 SELECT is_point(ll_to_earth(0,0));
 SELECT cube_dim(ll_to_earth(0,0)) <= 3;
-SELECT abs(cube_distance(ll_to_earth(0,0), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(0,0), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
 SELECT is_point(ll_to_earth(30,60));
 SELECT cube_dim(ll_to_earth(30,60)) <= 3;
-SELECT abs(cube_distance(ll_to_earth(30,60), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(30,60), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
 SELECT is_point(ll_to_earth(60,90));
 SELECT cube_dim(ll_to_earth(60,90)) <= 3;
-SELECT abs(cube_distance(ll_to_earth(60,90), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(60,90), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
 SELECT is_point(ll_to_earth(-30,-90));
 SELECT cube_dim(ll_to_earth(-30,-90)) <= 3;
-SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
 
 --
@@ -306,22 +306,22 @@ SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::cube) / earth() - 1) <
 -- list what's installed
 \dT
 
-drop extension cube;  -- fail, earthdistance requires it
+drop extension "cube";  -- fail, earthdistance requires it
 
 drop extension earthdistance;
 
-drop type cube;  -- fail, extension cube requires it
+drop type "cube";  -- fail, extension cube requires it
 
 -- list what's installed
 \dT
 
-create table foo (f1 cube, f2 int);
+create table foo (f1 "cube", f2 int);
 
-drop extension cube;  -- fail, foo.f1 requires it
+drop extension "cube";  -- fail, foo.f1 requires it
 
 drop table foo;
 
-drop extension cube;
+drop extension "cube";
 
 -- list what's installed
 \dT
@@ -330,7 +330,7 @@ drop extension cube;
 
 create schema c;
 
-create extension cube with schema c;
+create extension "cube" with schema c;
 
 -- list what's installed
 \dT public.*
@@ -338,13 +338,13 @@ create extension cube with schema c;
 \do public.*
 \dT c.*
 
-create table foo (f1 c.cube, f2 int);
+create table foo (f1 c."cube", f2 int);
 
-drop extension cube;  -- fail, foo.f1 requires it
+drop extension "cube";  -- fail, foo.f1 requires it
 
 drop schema c;  -- fail, cube requires it
 
-drop extension cube cascade;
+drop extension "cube" cascade;
 
 \d foo
 
gsp-u.patch (text/x-patch)
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index b63f2e0..86c7247 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -662,6 +662,11 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
  * and for NULL so that it can follow b_expr in ColQualList without creating
  * postfix-operator problems.
  *
+ * To support CUBE and ROLLUP in GROUP BY without reserving them, we give them
+ * an explicit priority lower than '(', so that a rule with CUBE '(' will shift
+ * rather than reducing a conflicting rule that takes CUBE as a function name.
+ * Using the same precedence as IDENT seems right for the reasons given above.
+ *
  * The frame_bound productions UNBOUNDED PRECEDING and UNBOUNDED FOLLOWING
  * are even messier: since UNBOUNDED is an unreserved keyword (per spec!),
  * there is no principled way to distinguish these from the productions
@@ -672,7 +677,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
  * blame any funny behavior of UNBOUNDED on the SQL standard, though.
  */
 %nonassoc	UNBOUNDED		/* ideally should have same precedence as IDENT */
-%nonassoc	IDENT NULL_P PARTITION RANGE ROWS PRECEDING FOLLOWING
+%nonassoc	IDENT NULL_P PARTITION RANGE ROWS PRECEDING FOLLOWING CUBE ROLLUP
 %left		Op OPERATOR		/* multi-character ops and user-defined operators */
 %nonassoc	NOTNULL
 %nonassoc	ISNULL
@@ -9867,6 +9872,12 @@ empty_grouping_set:
 				}
 		;
 
+/*
+ * These hacks rely on setting precedence of CUBE and ROLLUP below that of '(',
+ * so that they shift in these rules rather than reducing the conflicting
+ * unreserved_keyword rule.
+ */
+
 rollup_clause:
 			ROLLUP '(' expr_list ')'
 				{
@@ -12988,6 +12999,7 @@ unreserved_keyword:
 			| COPY
 			| COST
 			| CSV
+			| CUBE
 			| CURRENT_P
 			| CURSOR
 			| CYCLE
@@ -13134,6 +13146,7 @@ unreserved_keyword:
 			| REVOKE
 			| ROLE
 			| ROLLBACK
+			| ROLLUP
 			| ROWS
 			| RULE
 			| SAVEPOINT
@@ -13225,7 +13238,6 @@ col_name_keyword:
 			| CHAR_P
 			| CHARACTER
 			| COALESCE
-			| CUBE
 			| DEC
 			| DECIMAL_P
 			| EXISTS
@@ -13248,7 +13260,6 @@ col_name_keyword:
 			| POSITION
 			| PRECISION
 			| REAL
-			| ROLLUP
 			| ROW
 			| SETOF
 			| SMALLINT
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 5344736..e170964 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -4888,12 +4888,13 @@ get_rule_sortgroupclause(Index ref, List *tlist, bool force_colno,
 	expr = (Node *) tle->expr;
 
 	/*
-	 * Use column-number form if requested by caller.  Otherwise, if
-	 * expression is a constant, force it to be dumped with an explicit cast
-	 * as decoration --- this is because a simple integer constant is
-	 * ambiguous (and will be misinterpreted by findTargetlistEntry()) if we
-	 * dump it without any decoration.  Otherwise, just dump the expression
-	 * normally.
+	 * Use column-number form if requested by caller.  Otherwise, if expression
+	 * is a constant, force it to be dumped with an explicit cast as decoration
+	 * --- this is because a simple integer constant is ambiguous (and will be
+	 * misinterpreted by findTargetlistEntry()) if we dump it without any
+	 * decoration.  If it's anything more complex than a simple Var, then force
+	 * extra parens around it, to ensure it can't be misinterpreted as a cube()
+	 * or rollup() construct.
 	 */
 	if (force_colno)
 	{
@@ -4902,8 +4903,27 @@ get_rule_sortgroupclause(Index ref, List *tlist, bool force_colno,
 	}
 	else if (expr && IsA(expr, Const))
 		get_const_expr((Const *) expr, context, 1);
+	else if (!expr || IsA(expr, Var))
+		get_rule_expr(expr, context, true);
 	else
+	{
+		/*
+		 * We must force parens for function-like expressions even if
+		 * PRETTY_PAREN is off, since those are the ones in danger of
+		 * misparsing. For other expressions we need to force them
+		 * only if PRETTY_PAREN is on, since otherwise the expression
+		 * will output them itself. (We can't skip the parens.)
+		 */
+		bool	need_paren = (PRETTY_PAREN(context)
+							  || IsA(expr, FuncExpr)
+							  || IsA(expr, Aggref)
+							  || IsA(expr, WindowFunc));
+		if (need_paren)
+			appendStringInfoString(context->buf, "(");
 		get_rule_expr(expr, context, true);
+		if (need_paren)
+			appendStringInfoString(context->buf, ")");
+	}
 
 	return expr;
 }
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index e38b6bc..5ea1067 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -98,7 +98,7 @@ PG_KEYWORD("cost", COST, UNRESERVED_KEYWORD)
 PG_KEYWORD("create", CREATE, RESERVED_KEYWORD)
 PG_KEYWORD("cross", CROSS, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("csv", CSV, UNRESERVED_KEYWORD)
-PG_KEYWORD("cube", CUBE, COL_NAME_KEYWORD)
+PG_KEYWORD("cube", CUBE, UNRESERVED_KEYWORD)
 PG_KEYWORD("current", CURRENT_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("current_catalog", CURRENT_CATALOG, RESERVED_KEYWORD)
 PG_KEYWORD("current_date", CURRENT_DATE, RESERVED_KEYWORD)
@@ -324,7 +324,7 @@ PG_KEYWORD("revoke", REVOKE, UNRESERVED_KEYWORD)
 PG_KEYWORD("right", RIGHT, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("role", ROLE, UNRESERVED_KEYWORD)
 PG_KEYWORD("rollback", ROLLBACK, UNRESERVED_KEYWORD)
-PG_KEYWORD("rollup", ROLLUP, COL_NAME_KEYWORD)
+PG_KEYWORD("rollup", ROLLUP, UNRESERVED_KEYWORD)
 PG_KEYWORD("row", ROW, COL_NAME_KEYWORD)
 PG_KEYWORD("rows", ROWS, UNRESERVED_KEYWORD)
 PG_KEYWORD("rule", RULE, UNRESERVED_KEYWORD)
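The paren-forcing rule in the ruleutils.c hunk above can be sketched roughly in Python (an illustration only; the function names are mine, and the node-kind strings stand in for the C node tags). The point is that any expression that deparses as function-call syntax, e.g. `cube(a, b)`, must be wrapped in parens so that re-parsing the dumped query cannot mistake it for a CUBE clause:

```python
# Rough mirror of the paren decision in get_rule_sortgroupclause():
# expressions that deparse as name(args) must always be parenthesized;
# other non-Var expressions only need forcing when PRETTY_PAREN would
# otherwise suppress the parens they would normally emit themselves.
def groupby_needs_parens(node_kind, pretty_paren=False):
    if node_kind in ("Const", "Var"):
        return False  # never looks like cube(...)/rollup(...)
    return pretty_paren or node_kind in ("FuncExpr", "Aggref", "WindowFunc")

def deparse_groupby_item(node_kind, text, pretty_paren=False):
    # text is the already-deparsed expression body
    return f"({text})" if groupby_needs_parens(node_kind, pretty_paren) else text
```

So a dumped view's `GROUP BY cube(a, b)` (the user-defined function) comes back as `GROUP BY (cube(a, b))`, which the grammar cannot read as a grouping-set construct.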
#34Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Robert Haas (#26)
Re: WIP Patch for GROUPING SETS phase 1

"Robert" == Robert Haas <robertmhaas@gmail.com> writes:

Robert> I can accept ugly code, but I feel strongly that we shouldn't
Robert> accept ugly semantics. Forcing cube to get out of the way
Robert> may not be pretty, but I think it will be much worse if we
Robert> violate the rule that quoting a keyword strips it of its
Robert> special meaning; or the rule that there are four kinds of
Robert> keywords and, if a keyword of a particular class is accepted
Robert> as an identifier in a given context, all other keywords in
Robert> that class will also be accepted as identifiers in that
Robert> context. Violating those rules could have not-fun-at-all
Robert> consequences like needing to pass additional context
Robert> information to ruleutils.c's quote_identifier() function, or
Robert> not being able to dump and restore a query from an older
Robert> version even with --quote-all-identifiers. Renaming the cube
Robert> type will suck for people who are using it; but it will only
Robert> have to be done once; weird stuff like the above will be with
Robert> us forever.

If you look at the latest patch post, there's a small patch in it that
does nothing but unreserve the keywords and fix ruleutils to make
deparse/parse work. The required fix to ruleutils is an example of
violating your "four kinds of keywords" principle, but quoting
keywords still works.

--
Andrew (irc:RhodiumToad)

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#35Pavel Stehule
pavel.stehule@gmail.com
In reply to: Andrew Gierth (#33)
Re: Final Patch for GROUPING SETS

Hi

I checked this patch, and it is working very well.

I found only two issues - and I am not sure they are really issues.

with data from https://wiki.postgresql.org/wiki/Grouping_Sets

postgres=# select name, place, sum(count), grouping(name), grouping(place)
from cars group by rollup(name, place);
name | place | sum | grouping | grouping
-------+------------+-------+----------+----------
bmw | czech rep. | 100 | 0 | 0
bmw | germany | 1000 | 0 | 0
bmw | | 1100 | 0 | 1
opel | czech rep. | 7000 | 0 | 0
opel | germany | 7000 | 0 | 0
opel | | 14000 | 0 | 1
skoda | czech rep. | 10000 | 0 | 0
skoda | germany | 5000 | 0 | 0
skoda | | 15000 | 0 | 1
| | 30100 | 1 | 1
(10 rows)

* redundant sets should be ignored

postgres=# select name, place, sum(count), grouping(name), grouping(place)
from cars group by rollup(name, place), name;
name | place | sum | grouping | grouping
-------+------------+-------+----------+----------
bmw | czech rep. | 100 | 0 | 0
bmw | germany | 1000 | 0 | 0
bmw | | 1100 | 0 | 1
bmw | | 1100 | 0 | 1
opel | czech rep. | 7000 | 0 | 0
opel | germany | 7000 | 0 | 0
opel | | 14000 | 0 | 1
opel | | 14000 | 0 | 1
skoda | czech rep. | 10000 | 0 | 0
skoda | germany | 5000 | 0 | 0
skoda | | 15000 | 0 | 1
skoda | | 15000 | 0 | 1
(12 rows)

It duplicates rows

postgres=# explain select name, place, sum(count), grouping(name),
grouping(place) from cars group by rollup(name, place), name;
QUERY PLAN
------------------------------------------------------------------------
GroupAggregate (cost=10000000001.14..10000000001.38 rows=18 width=68)
Grouping Sets: (name, place), (name), (name)
-> Sort (cost=10000000001.14..10000000001.15 rows=6 width=68)
Sort Key: name, place
-> Seq Scan on cars (cost=0.00..1.06 rows=6 width=68)
Planning time: 0.235 ms
(6 rows)

postgres=# select name, place, sum(count), grouping(name), grouping(place)
from cars group by grouping sets((name, place), (name), (name),(place), ());
name | place | sum | grouping | grouping
-------+------------+-------+----------+----------
bmw | czech rep. | 100 | 0 | 0
bmw | germany | 1000 | 0 | 0
bmw | | 1100 | 0 | 1
bmw | | 1100 | 0 | 1
opel | czech rep. | 7000 | 0 | 0
opel | germany | 7000 | 0 | 0
opel | | 14000 | 0 | 1
opel | | 14000 | 0 | 1
skoda | czech rep. | 10000 | 0 | 0
skoda | germany | 5000 | 0 | 0
skoda | | 15000 | 0 | 1
skoda | | 15000 | 0 | 1
| | 30100 | 1 | 1
| czech rep. | 17100 | 1 | 0
| germany | 13000 | 1 | 0
(15 rows)

Fantastic work

Regards

Pavel

2014-08-25 7:21 GMT+02:00 Andrew Gierth <andrew@tao11.riddles.org.uk>:


Here is the new version of our grouping sets patch. This version
supersedes the previous post.

We believe the functionality of this version to be substantially
complete, providing all the standard grouping set features except T434
(GROUP BY DISTINCT). (Additional tweaks, such as extra variants on
GROUPING(), could be added for compatibility with other databases.)

Since the debate regarding reserved keywords has not produced any
useful answer, the main patch here makes CUBE and ROLLUP into
col_name_reserved keywords, but a separate small patch is attached to
make them unreserved_keywords instead.

So there are now 5 files:

gsp1.patch - phase 1 code patch (full syntax, limited
functionality)
gsp2.patch - phase 2 code patch (adds full functionality using the
new chained aggregate mechanism)
gsp-doc.patch - docs
gsp-contrib.patch - quote "cube" in contrib/cube and
contrib/earthdistance,
intended primarily for testing pending a decision on
renaming contrib/cube or unreserving keywords
gsp-u.patch - proposed method to unreserve CUBE and ROLLUP

--
Andrew (irc:RhodiumToad)


#36Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Pavel Stehule (#35)
Re: Final Patch for GROUPING SETS

"Pavel" == Pavel Stehule <pavel.stehule@gmail.com> writes:

Pavel> Hi
Pavel> I checked this patch, and it working very well

Pavel> I found only two issue - I am not sure if it is issue

Pavel> It duplicate rows

Pavel> postgres=# explain select name, place, sum(count), grouping(name),
Pavel> grouping(place) from cars group by rollup(name, place), name;
Pavel> QUERY PLAN
Pavel> ------------------------------------------------------------------------
Pavel> GroupAggregate (cost=10000000001.14..10000000001.38 rows=18 width=68)
Pavel> Grouping Sets: (name, place), (name), (name)

I think I can safely claim from the spec that our version is correct.
Following the syntactic transformations given in 7.9 <group by clause>
of sql2008, we have:

GROUP BY rollup(name,place), name;

parses as GROUP BY <rollup list>, <ordinary grouping set>

Syntax rule 13 replaces the <rollup list> giving:

GROUP BY GROUPING SETS ((name,place), (name), ()), name;

Syntax rule 16b gives:

GROUP BY GROUPING SETS ((name,place), (name), ()), GROUPING SETS (name);

Syntax rule 16c takes the cartesian product of the two sets:

GROUP BY GROUPING SETS ((name,place,name), (name,name), (name));

Syntax rule 17 gives:

SELECT ... GROUP BY name,place,name
UNION ALL
SELECT ... GROUP BY name,name
UNION ALL
SELECT ... GROUP BY name

Obviously at this point the extra "name" columns become redundant so
we eliminate them (this doesn't correspond to a spec rule, but doesn't
change the semantics). So we're left with:

SELECT ... GROUP BY name,place
UNION ALL
SELECT ... GROUP BY name
UNION ALL
SELECT ... GROUP BY name

Running a quick test on sqlfiddle with Oracle 11 suggests that Oracle's
behavior agrees with my interpretation.

Nothing in the spec that I can find licenses the elimination of
duplicate grouping sets except indirectly via feature T434 (GROUP BY
DISTINCT ...), which we did not attempt to implement.
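The spec transformation steps above can be sketched mechanically (a hedged illustration; the helper names are mine, not spec terminology). ROLLUP expands to a list of grouping sets, multiple GROUP BY items combine via cartesian product with concatenation (rule 16c), and only within-set redundancy can be dropped, so duplicate sets survive:

```python
from itertools import product

def rollup(cols):
    # ROLLUP(c1, ..., cn) -> grouping sets (c1..cn), (c1..cn-1), ..., ()
    return [tuple(cols[:i]) for i in range(len(cols), -1, -1)]

def concat_group_by(*set_lists):
    # Syntax rule 16c: cartesian product of the grouping-set lists,
    # concatenating the column lists of each combination
    return [sum(combo, ()) for combo in product(*set_lists)]

sets = concat_group_by(rollup(["name", "place"]), [("name",)])
# [('name', 'place', 'name'), ('name', 'name'), ('name',)]

# Redundant columns *within* each set can be dropped without changing the
# semantics; duplicate sets *between* entries are kept (that would be T434):
reduced = [tuple(dict.fromkeys(s)) for s in sets]
# [('name', 'place'), ('name',), ('name',)]
```

The final list contains `('name',)` twice, matching the duplicated rows Pavel observed.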

--
Andrew (irc:RhodiumToad)


#37Pavel Stehule
pavel.stehule@gmail.com
In reply to: Andrew Gierth (#36)
Re: Final Patch for GROUPING SETS

2014-08-26 2:45 GMT+02:00 Andrew Gierth <andrew@tao11.riddles.org.uk>:


ok, I'll try to search my memory for some indication that redundant
columns should be reduced,

Regards

Pavel


--
Andrew (irc:RhodiumToad)

#38Erik Rijkers
er@xs4all.nl
In reply to: Andrew Gierth (#33)
Re: Final Patch for GROUPING SETS - unrecognized node type: 347

On Mon, August 25, 2014 07:21, Andrew Gierth wrote:

Here is the new version of our grouping sets patch. This version
supersedes the previous post.

The patches did not apply anymore so I applied at 73eba19aebe0. There they applied OK, and make && make check was OK.

drop table if exists items_sold;
create table items_sold as
select * from (
values
('Foo', 'L', 10)
, ('Foo', 'M', 20)
, ('Bar', 'M', 15)
, ('Bar', 'L', 5)
) as f(brand, size, sales) ;

select brand, size, grouping(brand, size), sum(sales) from items_sold group by rollup(brand, size);
--> WARNING: unrecognized node type: 347

I suppose that's not correct.

thanks,

Erik Rijkers


#39Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Erik Rijkers (#38)
Re: Final Patch for GROUPING SETS - unrecognized node type: 347

"Erik" == Erik Rijkers <er@xs4all.nl> writes:

Erik> The patches did not apply anymore so I applied at 73eba19aebe0.
Erik> There they applied OK, and make && make check was OK.

I'll look and rebase if need be.

--> WARNING: unrecognized node type: 347

Can't reproduce this - are you sure it's not a mis-build?

--
Andrew (irc:RhodiumToad)


#40Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Andrew Gierth (#39)
Re: Final Patch for GROUPING SETS - unrecognized node type: 347

"Andrew" == Andrew Gierth <andrew@tao11.riddles.org.uk> writes:

"Erik" == Erik Rijkers <er@xs4all.nl> writes:

Erik> The patches did not apply anymore so I applied at 73eba19aebe0.
Erik> There they applied OK, and make && make check was OK.

Andrew> I'll look and rebase if need be.

They apply cleanly for me at 2bde297 whether with git apply or patch,
except for the contrib one (which you don't need unless you want to
run the contrib regression tests without applying the gsp-u patch).

--
Andrew (irc:RhodiumToad)


#41Erik Rijkers
er@xs4all.nl
In reply to: Andrew Gierth (#40)
Re: Final Patch for GROUPING SETS - unrecognized node type: 347

On Tue, August 26, 2014 11:13, Andrew Gierth wrote:


They apply cleanly for me at 2bde297 whether with git apply or patch,
except for the contrib one (which you don't need unless you want to
run the contrib regression tests without applying the gsp-u patch).

Ah, I had not realised that. Excluding that contrib-patch and only applying these three:

gsp1.patch
gsp2.patch
gsp-doc.patch

does indeed work (applies, compiles).

Thank you,

Erik Rijkers


#42Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Andrew Gierth (#33)
1 attachment(s)
Re: Final Patch for GROUPING SETS

"Andrew" == Andrew Gierth <andrew@tao11.riddles.org.uk> writes:

Andrew> gsp-contrib.patch - quote "cube" in contrib/cube and
Andrew> contrib/earthdistance, intended primarily for testing pending
Andrew> a decision on renaming contrib/cube or unreserving keywords

Here's a rebase of this one patch. Note that you only need this if
you're NOT applying the gsp-u patch to unreserve keywords, and you
also don't need it if you're not planning to test the cube extension
compatibility with grouping sets.

--
Andrew (irc:RhodiumToad)

Attachments:

gsp-contrib.patch (text/x-patch)
diff --git a/contrib/cube/cube--1.0.sql b/contrib/cube/cube--1.0.sql
index 0307811..1b563cc 100644
--- a/contrib/cube/cube--1.0.sql
+++ b/contrib/cube/cube--1.0.sql
@@ -1,36 +1,36 @@
 /* contrib/cube/cube--1.0.sql */
 
 -- complain if script is sourced in psql, rather than via CREATE EXTENSION
-\echo Use "CREATE EXTENSION cube" to load this file. \quit
+\echo Use "CREATE EXTENSION "cube"" to load this file. \quit
 
 -- Create the user-defined type for N-dimensional boxes
 
 CREATE FUNCTION cube_in(cstring)
-RETURNS cube
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(float8[], float8[]) RETURNS cube
+CREATE FUNCTION "cube"(float8[], float8[]) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_a_f8_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(float8[]) RETURNS cube
+CREATE FUNCTION "cube"(float8[]) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_a_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_out(cube)
+CREATE FUNCTION cube_out("cube")
 RETURNS cstring
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE TYPE cube (
+CREATE TYPE "cube" (
 	INTERNALLENGTH = variable,
 	INPUT = cube_in,
 	OUTPUT = cube_out,
 	ALIGNMENT = double
 );
 
-COMMENT ON TYPE cube IS 'multi-dimensional cube ''(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)''';
+COMMENT ON TYPE "cube" IS 'multi-dimensional cube ''(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)''';
 
 --
 -- External C-functions for R-tree methods
@@ -38,89 +38,89 @@ COMMENT ON TYPE cube IS 'multi-dimensional cube ''(FLOAT-1, FLOAT-2, ..., FLOAT-
 
 -- Comparison methods
 
-CREATE FUNCTION cube_eq(cube, cube)
+CREATE FUNCTION cube_eq("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_eq(cube, cube) IS 'same as';
+COMMENT ON FUNCTION cube_eq("cube", "cube") IS 'same as';
 
-CREATE FUNCTION cube_ne(cube, cube)
+CREATE FUNCTION cube_ne("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_ne(cube, cube) IS 'different';
+COMMENT ON FUNCTION cube_ne("cube", "cube") IS 'different';
 
-CREATE FUNCTION cube_lt(cube, cube)
+CREATE FUNCTION cube_lt("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_lt(cube, cube) IS 'lower than';
+COMMENT ON FUNCTION cube_lt("cube", "cube") IS 'lower than';
 
-CREATE FUNCTION cube_gt(cube, cube)
+CREATE FUNCTION cube_gt("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_gt(cube, cube) IS 'greater than';
+COMMENT ON FUNCTION cube_gt("cube", "cube") IS 'greater than';
 
-CREATE FUNCTION cube_le(cube, cube)
+CREATE FUNCTION cube_le("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_le(cube, cube) IS 'lower than or equal to';
+COMMENT ON FUNCTION cube_le("cube", "cube") IS 'lower than or equal to';
 
-CREATE FUNCTION cube_ge(cube, cube)
+CREATE FUNCTION cube_ge("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_ge(cube, cube) IS 'greater than or equal to';
+COMMENT ON FUNCTION cube_ge("cube", "cube") IS 'greater than or equal to';
 
-CREATE FUNCTION cube_cmp(cube, cube)
+CREATE FUNCTION cube_cmp("cube", "cube")
 RETURNS int4
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_cmp(cube, cube) IS 'btree comparison function';
+COMMENT ON FUNCTION cube_cmp("cube", "cube") IS 'btree comparison function';
 
-CREATE FUNCTION cube_contains(cube, cube)
+CREATE FUNCTION cube_contains("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_contains(cube, cube) IS 'contains';
+COMMENT ON FUNCTION cube_contains("cube", "cube") IS 'contains';
 
-CREATE FUNCTION cube_contained(cube, cube)
+CREATE FUNCTION cube_contained("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_contained(cube, cube) IS 'contained in';
+COMMENT ON FUNCTION cube_contained("cube", "cube") IS 'contained in';
 
-CREATE FUNCTION cube_overlap(cube, cube)
+CREATE FUNCTION cube_overlap("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_overlap(cube, cube) IS 'overlaps';
+COMMENT ON FUNCTION cube_overlap("cube", "cube") IS 'overlaps';
 
 -- support routines for indexing
 
-CREATE FUNCTION cube_union(cube, cube)
-RETURNS cube
+CREATE FUNCTION cube_union("cube", "cube")
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_inter(cube, cube)
-RETURNS cube
+CREATE FUNCTION cube_inter("cube", "cube")
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_size(cube)
+CREATE FUNCTION cube_size("cube")
 RETURNS float8
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
@@ -128,62 +128,62 @@ LANGUAGE C IMMUTABLE STRICT;
 
 -- Misc N-dimensional functions
 
-CREATE FUNCTION cube_subset(cube, int4[])
-RETURNS cube
+CREATE FUNCTION cube_subset("cube", int4[])
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
 -- proximity routines
 
-CREATE FUNCTION cube_distance(cube, cube)
+CREATE FUNCTION cube_distance("cube", "cube")
 RETURNS float8
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
 -- Extracting elements functions
 
-CREATE FUNCTION cube_dim(cube)
+CREATE FUNCTION cube_dim("cube")
 RETURNS int4
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_ll_coord(cube, int4)
+CREATE FUNCTION cube_ll_coord("cube", int4)
 RETURNS float8
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_ur_coord(cube, int4)
+CREATE FUNCTION cube_ur_coord("cube", int4)
 RETURNS float8
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(float8) RETURNS cube
+CREATE FUNCTION "cube"(float8) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(float8, float8) RETURNS cube
+CREATE FUNCTION "cube"(float8, float8) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_f8_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(cube, float8) RETURNS cube
+CREATE FUNCTION "cube"("cube", float8) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_c_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(cube, float8, float8) RETURNS cube
+CREATE FUNCTION "cube"("cube", float8, float8) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_c_f8_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
--- Test if cube is also a point
+-- Test if "cube" is also a point
 
-CREATE FUNCTION cube_is_point(cube)
+CREATE FUNCTION cube_is_point("cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
--- Increasing the size of a cube by a radius in at least n dimensions
+-- Increasing the size of a "cube" by a radius in at least n dimensions
 
-CREATE FUNCTION cube_enlarge(cube, float8, int4)
-RETURNS cube
+CREATE FUNCTION cube_enlarge("cube", float8, int4)
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
@@ -192,76 +192,76 @@ LANGUAGE C IMMUTABLE STRICT;
 --
 
 CREATE OPERATOR < (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_lt,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_lt,
 	COMMUTATOR = '>', NEGATOR = '>=',
 	RESTRICT = scalarltsel, JOIN = scalarltjoinsel
 );
 
 CREATE OPERATOR > (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_gt,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_gt,
 	COMMUTATOR = '<', NEGATOR = '<=',
 	RESTRICT = scalargtsel, JOIN = scalargtjoinsel
 );
 
 CREATE OPERATOR <= (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_le,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_le,
 	COMMUTATOR = '>=', NEGATOR = '>',
 	RESTRICT = scalarltsel, JOIN = scalarltjoinsel
 );
 
 CREATE OPERATOR >= (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_ge,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_ge,
 	COMMUTATOR = '<=', NEGATOR = '<',
 	RESTRICT = scalargtsel, JOIN = scalargtjoinsel
 );
 
 CREATE OPERATOR && (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_overlap,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_overlap,
 	COMMUTATOR = '&&',
 	RESTRICT = areasel, JOIN = areajoinsel
 );
 
 CREATE OPERATOR = (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_eq,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_eq,
 	COMMUTATOR = '=', NEGATOR = '<>',
 	RESTRICT = eqsel, JOIN = eqjoinsel,
 	MERGES
 );
 
 CREATE OPERATOR <> (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_ne,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_ne,
 	COMMUTATOR = '<>', NEGATOR = '=',
 	RESTRICT = neqsel, JOIN = neqjoinsel
 );
 
 CREATE OPERATOR @> (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_contains,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_contains,
 	COMMUTATOR = '<@',
 	RESTRICT = contsel, JOIN = contjoinsel
 );
 
 CREATE OPERATOR <@ (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_contained,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_contained,
 	COMMUTATOR = '@>',
 	RESTRICT = contsel, JOIN = contjoinsel
 );
 
 -- these are obsolete/deprecated:
 CREATE OPERATOR @ (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_contains,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_contains,
 	COMMUTATOR = '~',
 	RESTRICT = contsel, JOIN = contjoinsel
 );
 
 CREATE OPERATOR ~ (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_contained,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_contained,
 	COMMUTATOR = '@',
 	RESTRICT = contsel, JOIN = contjoinsel
 );
 
 
 -- define the GiST support methods
-CREATE FUNCTION g_cube_consistent(internal,cube,int,oid,internal)
+CREATE FUNCTION g_cube_consistent(internal,"cube",int,oid,internal)
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
@@ -287,11 +287,11 @@ AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
 CREATE FUNCTION g_cube_union(internal, internal)
-RETURNS cube
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION g_cube_same(cube, cube, internal)
+CREATE FUNCTION g_cube_same("cube", "cube", internal)
 RETURNS internal
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
@@ -300,26 +300,26 @@ LANGUAGE C IMMUTABLE STRICT;
 -- Create the operator classes for indexing
 
 CREATE OPERATOR CLASS cube_ops
-    DEFAULT FOR TYPE cube USING btree AS
+    DEFAULT FOR TYPE "cube" USING btree AS
         OPERATOR        1       < ,
         OPERATOR        2       <= ,
         OPERATOR        3       = ,
         OPERATOR        4       >= ,
         OPERATOR        5       > ,
-        FUNCTION        1       cube_cmp(cube, cube);
+        FUNCTION        1       cube_cmp("cube", "cube");
 
 CREATE OPERATOR CLASS gist_cube_ops
-    DEFAULT FOR TYPE cube USING gist AS
+    DEFAULT FOR TYPE "cube" USING gist AS
 	OPERATOR	3	&& ,
 	OPERATOR	6	= ,
 	OPERATOR	7	@> ,
 	OPERATOR	8	<@ ,
 	OPERATOR	13	@ ,
 	OPERATOR	14	~ ,
-	FUNCTION	1	g_cube_consistent (internal, cube, int, oid, internal),
+	FUNCTION	1	g_cube_consistent (internal, "cube", int, oid, internal),
 	FUNCTION	2	g_cube_union (internal, internal),
 	FUNCTION	3	g_cube_compress (internal),
 	FUNCTION	4	g_cube_decompress (internal),
 	FUNCTION	5	g_cube_penalty (internal, internal, internal),
 	FUNCTION	6	g_cube_picksplit (internal, internal),
-	FUNCTION	7	g_cube_same (cube, cube, internal);
+	FUNCTION	7	g_cube_same ("cube", "cube", internal);
diff --git a/contrib/cube/cube--unpackaged--1.0.sql b/contrib/cube/cube--unpackaged--1.0.sql
index 1065512..acacb61 100644
--- a/contrib/cube/cube--unpackaged--1.0.sql
+++ b/contrib/cube/cube--unpackaged--1.0.sql
@@ -1,56 +1,56 @@
-/* contrib/cube/cube--unpackaged--1.0.sql */
+/* contrib/"cube"/"cube"--unpackaged--1.0.sql */
 
 -- complain if script is sourced in psql, rather than via CREATE EXTENSION
-\echo Use "CREATE EXTENSION cube FROM unpackaged" to load this file. \quit
+\echo Use "CREATE EXTENSION "cube" FROM unpackaged" to load this file. \quit
 
-ALTER EXTENSION cube ADD type cube;
-ALTER EXTENSION cube ADD function cube_in(cstring);
-ALTER EXTENSION cube ADD function cube(double precision[],double precision[]);
-ALTER EXTENSION cube ADD function cube(double precision[]);
-ALTER EXTENSION cube ADD function cube_out(cube);
-ALTER EXTENSION cube ADD function cube_eq(cube,cube);
-ALTER EXTENSION cube ADD function cube_ne(cube,cube);
-ALTER EXTENSION cube ADD function cube_lt(cube,cube);
-ALTER EXTENSION cube ADD function cube_gt(cube,cube);
-ALTER EXTENSION cube ADD function cube_le(cube,cube);
-ALTER EXTENSION cube ADD function cube_ge(cube,cube);
-ALTER EXTENSION cube ADD function cube_cmp(cube,cube);
-ALTER EXTENSION cube ADD function cube_contains(cube,cube);
-ALTER EXTENSION cube ADD function cube_contained(cube,cube);
-ALTER EXTENSION cube ADD function cube_overlap(cube,cube);
-ALTER EXTENSION cube ADD function cube_union(cube,cube);
-ALTER EXTENSION cube ADD function cube_inter(cube,cube);
-ALTER EXTENSION cube ADD function cube_size(cube);
-ALTER EXTENSION cube ADD function cube_subset(cube,integer[]);
-ALTER EXTENSION cube ADD function cube_distance(cube,cube);
-ALTER EXTENSION cube ADD function cube_dim(cube);
-ALTER EXTENSION cube ADD function cube_ll_coord(cube,integer);
-ALTER EXTENSION cube ADD function cube_ur_coord(cube,integer);
-ALTER EXTENSION cube ADD function cube(double precision);
-ALTER EXTENSION cube ADD function cube(double precision,double precision);
-ALTER EXTENSION cube ADD function cube(cube,double precision);
-ALTER EXTENSION cube ADD function cube(cube,double precision,double precision);
-ALTER EXTENSION cube ADD function cube_is_point(cube);
-ALTER EXTENSION cube ADD function cube_enlarge(cube,double precision,integer);
-ALTER EXTENSION cube ADD operator >(cube,cube);
-ALTER EXTENSION cube ADD operator >=(cube,cube);
-ALTER EXTENSION cube ADD operator <(cube,cube);
-ALTER EXTENSION cube ADD operator <=(cube,cube);
-ALTER EXTENSION cube ADD operator &&(cube,cube);
-ALTER EXTENSION cube ADD operator <>(cube,cube);
-ALTER EXTENSION cube ADD operator =(cube,cube);
-ALTER EXTENSION cube ADD operator <@(cube,cube);
-ALTER EXTENSION cube ADD operator @>(cube,cube);
-ALTER EXTENSION cube ADD operator ~(cube,cube);
-ALTER EXTENSION cube ADD operator @(cube,cube);
-ALTER EXTENSION cube ADD function g_cube_consistent(internal,cube,integer,oid,internal);
-ALTER EXTENSION cube ADD function g_cube_compress(internal);
-ALTER EXTENSION cube ADD function g_cube_decompress(internal);
-ALTER EXTENSION cube ADD function g_cube_penalty(internal,internal,internal);
-ALTER EXTENSION cube ADD function g_cube_picksplit(internal,internal);
-ALTER EXTENSION cube ADD function g_cube_union(internal,internal);
-ALTER EXTENSION cube ADD function g_cube_same(cube,cube,internal);
-ALTER EXTENSION cube ADD operator family cube_ops using btree;
-ALTER EXTENSION cube ADD operator class cube_ops using btree;
-ALTER EXTENSION cube ADD operator family gist_cube_ops using gist;
-ALTER EXTENSION cube ADD operator class gist_cube_ops using gist;
+ALTER EXTENSION "cube" ADD type "cube";
+ALTER EXTENSION "cube" ADD function cube_in(cstring);
+ALTER EXTENSION "cube" ADD function "cube"(double precision[],double precision[]);
+ALTER EXTENSION "cube" ADD function "cube"(double precision[]);
+ALTER EXTENSION "cube" ADD function cube_out("cube");
+ALTER EXTENSION "cube" ADD function cube_eq("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_ne("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_lt("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_gt("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_le("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_ge("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_cmp("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_contains("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_contained("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_overlap("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_union("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_inter("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_size("cube");
+ALTER EXTENSION "cube" ADD function cube_subset("cube",integer[]);
+ALTER EXTENSION "cube" ADD function cube_distance("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_dim("cube");
+ALTER EXTENSION "cube" ADD function cube_ll_coord("cube",integer);
+ALTER EXTENSION "cube" ADD function cube_ur_coord("cube",integer);
+ALTER EXTENSION "cube" ADD function "cube"(double precision);
+ALTER EXTENSION "cube" ADD function "cube"(double precision,double precision);
+ALTER EXTENSION "cube" ADD function "cube"("cube",double precision);
+ALTER EXTENSION "cube" ADD function "cube"("cube",double precision,double precision);
+ALTER EXTENSION "cube" ADD function cube_is_point("cube");
+ALTER EXTENSION "cube" ADD function cube_enlarge("cube",double precision,integer);
+ALTER EXTENSION "cube" ADD operator >("cube","cube");
+ALTER EXTENSION "cube" ADD operator >=("cube","cube");
+ALTER EXTENSION "cube" ADD operator <("cube","cube");
+ALTER EXTENSION "cube" ADD operator <=("cube","cube");
+ALTER EXTENSION "cube" ADD operator &&("cube","cube");
+ALTER EXTENSION "cube" ADD operator <>("cube","cube");
+ALTER EXTENSION "cube" ADD operator =("cube","cube");
+ALTER EXTENSION "cube" ADD operator <@("cube","cube");
+ALTER EXTENSION "cube" ADD operator @>("cube","cube");
+ALTER EXTENSION "cube" ADD operator ~("cube","cube");
+ALTER EXTENSION "cube" ADD operator @("cube","cube");
+ALTER EXTENSION "cube" ADD function g_cube_consistent(internal,"cube",integer,oid,internal);
+ALTER EXTENSION "cube" ADD function g_cube_compress(internal);
+ALTER EXTENSION "cube" ADD function g_cube_decompress(internal);
+ALTER EXTENSION "cube" ADD function g_cube_penalty(internal,internal,internal);
+ALTER EXTENSION "cube" ADD function g_cube_picksplit(internal,internal);
+ALTER EXTENSION "cube" ADD function g_cube_union(internal,internal);
+ALTER EXTENSION "cube" ADD function g_cube_same("cube","cube",internal);
+ALTER EXTENSION "cube" ADD operator family cube_ops using btree;
+ALTER EXTENSION "cube" ADD operator class cube_ops using btree;
+ALTER EXTENSION "cube" ADD operator family gist_cube_ops using gist;
+ALTER EXTENSION "cube" ADD operator class gist_cube_ops using gist;
diff --git a/contrib/cube/expected/cube.out b/contrib/cube/expected/cube.out
index ca9555e..9422218 100644
--- a/contrib/cube/expected/cube.out
+++ b/contrib/cube/expected/cube.out
@@ -1,552 +1,552 @@
 --
 --  Test cube datatype
 --
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 --
 -- testing the input and output functions
 --
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1'::cube AS cube;
+SELECT '-1'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1.'::cube AS cube;
+SELECT '1.'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.'::cube AS cube;
+SELECT '-1.'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '.1'::cube AS cube;
+SELECT '.1'::"cube" AS "cube";
  cube  
 -------
  (0.1)
 (1 row)
 
-SELECT '-.1'::cube AS cube;
+SELECT '-.1'::"cube" AS "cube";
   cube  
 --------
  (-0.1)
 (1 row)
 
-SELECT '1.0'::cube AS cube;
+SELECT '1.0'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.0'::cube AS cube;
+SELECT '-1.0'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1e27'::cube AS cube;
+SELECT '1e27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1e27'::cube AS cube;
+SELECT '-1e27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1.0e27'::cube AS cube;
+SELECT '1.0e27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1.0e27'::cube AS cube;
+SELECT '-1.0e27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1e+27'::cube AS cube;
+SELECT '1e+27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1e+27'::cube AS cube;
+SELECT '-1e+27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1.0e+27'::cube AS cube;
+SELECT '1.0e+27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1.0e+27'::cube AS cube;
+SELECT '-1.0e+27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1e-7'::cube AS cube;
+SELECT '1e-7'::"cube" AS "cube";
   cube   
 ---------
  (1e-07)
 (1 row)
 
-SELECT '-1e-7'::cube AS cube;
+SELECT '-1e-7'::"cube" AS "cube";
    cube   
 ----------
  (-1e-07)
 (1 row)
 
-SELECT '1.0e-7'::cube AS cube;
+SELECT '1.0e-7'::"cube" AS "cube";
   cube   
 ---------
  (1e-07)
 (1 row)
 
-SELECT '-1.0e-7'::cube AS cube;
+SELECT '-1.0e-7'::"cube" AS "cube";
    cube   
 ----------
  (-1e-07)
 (1 row)
 
-SELECT '1e-700'::cube AS cube;
+SELECT '1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '-1e-700'::cube AS cube;
+SELECT '-1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '1234567890123456'::cube AS cube;
+SELECT '1234567890123456'::"cube" AS "cube";
           cube          
 ------------------------
  (1.23456789012346e+15)
 (1 row)
 
-SELECT '+1234567890123456'::cube AS cube;
+SELECT '+1234567890123456'::"cube" AS "cube";
           cube          
 ------------------------
  (1.23456789012346e+15)
 (1 row)
 
-SELECT '-1234567890123456'::cube AS cube;
+SELECT '-1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (-1.23456789012346e+15)
 (1 row)
 
-SELECT '.1234567890123456'::cube AS cube;
+SELECT '.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '+.1234567890123456'::cube AS cube;
+SELECT '+.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '-.1234567890123456'::"cube" AS "cube";
          cube         
 ----------------------
  (-0.123456789012346)
 (1 row)
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '(1,2)'::cube AS cube;
+SELECT '(1,2)'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '1,2,3,4,5'::cube AS cube;
+SELECT '1,2,3,4,5'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '(0),(1)'::cube AS cube;
+SELECT '(0),(1)'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '[(0),(0)]'::cube AS cube;
+SELECT '[(0),(0)]'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '[(0),(1)]'::cube AS cube;
+SELECT '[(0),(1)]'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
+SELECT ''::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT ''::cube AS cube;
+LINE 1: SELECT ''::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT 'ABC'::cube AS cube;
+SELECT 'ABC'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT 'ABC'::cube AS cube;
+LINE 1: SELECT 'ABC'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "A"
-SELECT '()'::cube AS cube;
+SELECT '()'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '()'::cube AS cube;
+LINE 1: SELECT '()'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[]'::cube AS cube;
+SELECT '[]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[]'::cube AS cube;
+LINE 1: SELECT '[]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[()]'::cube AS cube;
+SELECT '[()]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[()]'::cube AS cube;
+LINE 1: SELECT '[()]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[(1)]'::cube AS cube;
+SELECT '[(1)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1)]'::cube AS cube;
+LINE 1: SELECT '[(1)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),]'::cube AS cube;
+SELECT '[(1),]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),]'::cube AS cube;
+LINE 1: SELECT '[(1),]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),2]'::cube AS cube;
+SELECT '[(1),2]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),2]'::cube AS cube;
+LINE 1: SELECT '[(1),2]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "2"
-SELECT '[(1),(2),(3)]'::cube AS cube;
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2),(3)]'::cube AS cube;
+LINE 1: SELECT '[(1),(2),(3)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '1,'::cube AS cube;
+SELECT '1,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,'::cube AS cube;
+LINE 1: SELECT '1,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,2,'::cube AS cube;
+SELECT '1,2,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2,'::cube AS cube;
+LINE 1: SELECT '1,2,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,,2'::cube AS cube;
+SELECT '1,,2'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,,2'::cube AS cube;
+LINE 1: SELECT '1,,2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,)'::cube AS cube;
+SELECT '(1,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,)'::cube AS cube;
+LINE 1: SELECT '(1,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,2,)'::cube AS cube;
+SELECT '(1,2,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,)'::cube AS cube;
+LINE 1: SELECT '(1,2,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,,2)'::cube AS cube;
+SELECT '(1,,2)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,,2)'::cube AS cube;
+LINE 1: SELECT '(1,,2)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2)],'::cube AS cube;
+LINE 1: SELECT '[(1),(2)],'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2,3),(2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2),(1,2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1),(2),'::cube AS cube; -- 2
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
 ERROR:  bad cube representation
-LINE 1: SELECT '(1),(2),'::cube AS cube;
+LINE 1: SELECT '(1),(2),'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3),(2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2,3),(2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2),(1,2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2),(1,2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)ab'::cube AS cube;
+LINE 1: SELECT '(1,2,3)ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2,3)a'::cube AS cube; -- 5
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)a'::cube AS cube;
+LINE 1: SELECT '(1,2,3)a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2)('::cube AS cube; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2)('::cube AS cube;
+LINE 1: SELECT '(1,2)('::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "("
-SELECT '1,2ab'::cube AS cube; -- 6
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2ab'::cube AS cube;
+LINE 1: SELECT '1,2ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1 e7'::cube AS cube; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1 e7'::cube AS cube;
+LINE 1: SELECT '1 e7'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "e"
-SELECT '1,2a'::cube AS cube; -- 7
+SELECT '1,2a'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2a'::cube AS cube;
+LINE 1: SELECT '1,2a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1..2'::cube AS cube;
+LINE 1: SELECT '1..2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ".2"
 --
 -- Testing building cubes from float8 values
 --
-SELECT cube(0::float8);
+SELECT "cube"(0::float8);
  cube 
 ------
  (0)
 (1 row)
 
-SELECT cube(1::float8);
+SELECT "cube"(1::float8);
  cube 
 ------
  (1)
 (1 row)
 
-SELECT cube(1,2);
+SELECT "cube"(1,2);
   cube   
 ---------
  (1),(2)
 (1 row)
 
-SELECT cube(cube(1,2),3);
+SELECT "cube"("cube"(1,2),3);
      cube      
 ---------------
  (1, 3),(2, 3)
 (1 row)
 
-SELECT cube(cube(1,2),3,4);
+SELECT "cube"("cube"(1,2),3,4);
      cube      
 ---------------
  (1, 3),(2, 4)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 5)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 6)
 (1 row)
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
  cube 
 ------
  (0)
 (1 row)
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
         cube         
 ---------------------
  (0, 1, 2),(3, 4, 5)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
 ERROR:  UR and LL arrays must be of same length
-SELECT cube(NULL::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
  cube 
 ------
  
 (1 row)
 
-SELECT cube('{0,1,2}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
         cube_subset        
 ---------------------------
  (5, 3, 1, 1),(8, 7, 6, 6)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
  cube_subset  
 --------------
  (5, 3, 1, 1)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
      cube     
 --------------
  (5, 6, 7, 8)
 (1 row)
 
-SELECT cube(1.37); -- cube_f8
+SELECT "cube"(1.37); -- cube_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(cube(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(1, 24)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 24)
@@ -555,12 +555,12 @@ SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
 DETAIL:  A cube cannot have more than 100 dimensions.
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
@@ -570,37 +570,37 @@ DETAIL:  A cube cannot have more than 100 dimensions.
 --
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -609,97 +609,97 @@ SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1'::cube   < '2'::cube AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1,1'::cube > '1,2'::cube AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,1'::cube < '1,2'::cube AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -707,235 +707,235 @@ SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contains" (the left operand is the cube that entirely encloses the
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
@@ -943,77 +943,77 @@ SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
  cube_distance 
 ---------------
              4
 (1 row)
 
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
  cube_distance 
 ---------------
            0.5
 (1 row)
 
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
  cube_distance 
 ---------------
              0
 (1 row)
 
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
  cube_distance 
 ---------------
            190
 (1 row)
 
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
   cube_distance   
 ------------------
  140.762210837994
 (1 row)
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
+SELECT "cube"('(1,1.2)'::text);
    cube   
 ----------
  (1, 1.2)
 (1 row)
 
-SELECT cube(NULL);
+SELECT "cube"(NULL);
  cube 
 ------
  
 (1 row)
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
+SELECT cube_dim('(0)'::"cube");
  cube_dim 
 ----------
         1
 (1 row)
 
-SELECT cube_dim('(0,0)'::cube);
+SELECT cube_dim('(0,0)'::"cube");
  cube_dim 
 ----------
         2
 (1 row)
 
-SELECT cube_dim('(0,0,0)'::cube);
+SELECT cube_dim('(0,0,0)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
  cube_dim 
 ----------
         5
@@ -1021,55 +1021,55 @@ SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
 
 -- Test of cube_ll_coord function (retrieves LL coodinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ll_coord 
 ---------------
             -1
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ll_coord 
 ---------------
             -2
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
  cube_ll_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
  cube_ll_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
  cube_ll_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
  cube_ll_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
@@ -1077,55 +1077,55 @@ SELECT cube_ll_coord('(42,137)'::cube, 3);
 
 -- Test of cube_ur_coord function (retrieves UR coodinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
  cube_ur_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
  cube_ur_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
@@ -1133,37 +1133,37 @@ SELECT cube_ur_coord('(42,137)'::cube, 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
+SELECT cube_is_point('(0)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
  cube_is_point 
 ---------------
  f
@@ -1171,121 +1171,121 @@ SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 2);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
  cube_enlarge 
 --------------
  (-2),(2)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 2);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-1, -1),(1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
          cube_enlarge          
 -------------------------------
  (-3, -1, -1, -1),(3, 1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 2);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-4, -3),(3, 8)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
    cube_enlarge   
 ------------------
  (-6, -5),(5, 10)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
   cube_enlarge   
 -----------------
  (-2, -1),(1, 6)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
     cube_enlarge     
 ---------------------
  (-0.5, 1),(-0.5, 4)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
@@ -1293,31 +1293,31 @@ SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
       cube_union      
 ----------------------
  (1, 2, 0),(8, 9, 10)
 (1 row)
 
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
         cube_union         
 ---------------------------
  (1, 2, 0, 0),(4, 2, 0, 0)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
   cube_union   
 ---------------
  (1, 2),(4, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
  cube_union 
 ------------
  (1, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
  cube_union 
 ------------
  (1, 2, 0)
@@ -1325,43 +1325,43 @@ SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
    cube_inter    
 -----------------
  (3, 4),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
   cube_inter   
 ---------------
  (3, 4),(6, 5)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
     cube_inter     
 -------------------
  (13, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
     cube_inter    
 ------------------
  (3, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
  cube_inter 
 ------------
  (10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
  cube_inter 
 ------------
  (1, 2, 3)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
      cube_inter      
 ---------------------
  (5, 6, 3),(1, 2, 3)
@@ -1369,13 +1369,13 @@ SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
  cube_size 
 -----------
         88
 (1 row)
 
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(42,137)'::"cube");
  cube_size 
 -----------
          0
@@ -1383,7 +1383,7 @@ SELECT cube_size('(42,137)'::cube);
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 \copy test_cube from 'data/test_cube.data'
 CREATE INDEX test_cube_ix ON test_cube USING gist (c);
 SELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' ORDER BY c;
diff --git a/contrib/cube/expected/cube_1.out b/contrib/cube/expected/cube_1.out
index c07d61d..4f47c54 100644
--- a/contrib/cube/expected/cube_1.out
+++ b/contrib/cube/expected/cube_1.out
@@ -1,552 +1,552 @@
 --
 --  Test cube datatype
 --
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 --
 -- testing the input and output functions
 --
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1'::cube AS cube;
+SELECT '-1'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1.'::cube AS cube;
+SELECT '1.'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.'::cube AS cube;
+SELECT '-1.'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '.1'::cube AS cube;
+SELECT '.1'::"cube" AS "cube";
  cube  
 -------
  (0.1)
 (1 row)
 
-SELECT '-.1'::cube AS cube;
+SELECT '-.1'::"cube" AS "cube";
   cube  
 --------
  (-0.1)
 (1 row)
 
-SELECT '1.0'::cube AS cube;
+SELECT '1.0'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.0'::cube AS cube;
+SELECT '-1.0'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1e27'::cube AS cube;
+SELECT '1e27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1e27'::cube AS cube;
+SELECT '-1e27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1.0e27'::cube AS cube;
+SELECT '1.0e27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1.0e27'::cube AS cube;
+SELECT '-1.0e27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1e+27'::cube AS cube;
+SELECT '1e+27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1e+27'::cube AS cube;
+SELECT '-1e+27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1.0e+27'::cube AS cube;
+SELECT '1.0e+27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1.0e+27'::cube AS cube;
+SELECT '-1.0e+27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1e-7'::cube AS cube;
+SELECT '1e-7'::"cube" AS "cube";
   cube   
 ---------
  (1e-07)
 (1 row)
 
-SELECT '-1e-7'::cube AS cube;
+SELECT '-1e-7'::"cube" AS "cube";
    cube   
 ----------
  (-1e-07)
 (1 row)
 
-SELECT '1.0e-7'::cube AS cube;
+SELECT '1.0e-7'::"cube" AS "cube";
   cube   
 ---------
  (1e-07)
 (1 row)
 
-SELECT '-1.0e-7'::cube AS cube;
+SELECT '-1.0e-7'::"cube" AS "cube";
    cube   
 ----------
  (-1e-07)
 (1 row)
 
-SELECT '1e-700'::cube AS cube;
+SELECT '1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '-1e-700'::cube AS cube;
+SELECT '-1e-700'::"cube" AS "cube";
  cube 
 ------
  (-0)
 (1 row)
 
-SELECT '1234567890123456'::cube AS cube;
+SELECT '1234567890123456'::"cube" AS "cube";
           cube          
 ------------------------
  (1.23456789012346e+15)
 (1 row)
 
-SELECT '+1234567890123456'::cube AS cube;
+SELECT '+1234567890123456'::"cube" AS "cube";
           cube          
 ------------------------
  (1.23456789012346e+15)
 (1 row)
 
-SELECT '-1234567890123456'::cube AS cube;
+SELECT '-1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (-1.23456789012346e+15)
 (1 row)
 
-SELECT '.1234567890123456'::cube AS cube;
+SELECT '.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '+.1234567890123456'::cube AS cube;
+SELECT '+.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '-.1234567890123456'::"cube" AS "cube";
          cube         
 ----------------------
  (-0.123456789012346)
 (1 row)
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '(1,2)'::cube AS cube;
+SELECT '(1,2)'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '1,2,3,4,5'::cube AS cube;
+SELECT '1,2,3,4,5'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '(0),(1)'::cube AS cube;
+SELECT '(0),(1)'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '[(0),(0)]'::cube AS cube;
+SELECT '[(0),(0)]'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '[(0),(1)]'::cube AS cube;
+SELECT '[(0),(1)]'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
+SELECT ''::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT ''::cube AS cube;
+LINE 1: SELECT ''::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT 'ABC'::cube AS cube;
+SELECT 'ABC'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT 'ABC'::cube AS cube;
+LINE 1: SELECT 'ABC'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "A"
-SELECT '()'::cube AS cube;
+SELECT '()'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '()'::cube AS cube;
+LINE 1: SELECT '()'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[]'::cube AS cube;
+SELECT '[]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[]'::cube AS cube;
+LINE 1: SELECT '[]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[()]'::cube AS cube;
+SELECT '[()]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[()]'::cube AS cube;
+LINE 1: SELECT '[()]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[(1)]'::cube AS cube;
+SELECT '[(1)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1)]'::cube AS cube;
+LINE 1: SELECT '[(1)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),]'::cube AS cube;
+SELECT '[(1),]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),]'::cube AS cube;
+LINE 1: SELECT '[(1),]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),2]'::cube AS cube;
+SELECT '[(1),2]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),2]'::cube AS cube;
+LINE 1: SELECT '[(1),2]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "2"
-SELECT '[(1),(2),(3)]'::cube AS cube;
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2),(3)]'::cube AS cube;
+LINE 1: SELECT '[(1),(2),(3)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '1,'::cube AS cube;
+SELECT '1,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,'::cube AS cube;
+LINE 1: SELECT '1,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,2,'::cube AS cube;
+SELECT '1,2,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2,'::cube AS cube;
+LINE 1: SELECT '1,2,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,,2'::cube AS cube;
+SELECT '1,,2'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,,2'::cube AS cube;
+LINE 1: SELECT '1,,2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,)'::cube AS cube;
+SELECT '(1,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,)'::cube AS cube;
+LINE 1: SELECT '(1,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,2,)'::cube AS cube;
+SELECT '(1,2,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,)'::cube AS cube;
+LINE 1: SELECT '(1,2,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,,2)'::cube AS cube;
+SELECT '(1,,2)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,,2)'::cube AS cube;
+LINE 1: SELECT '(1,,2)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2)],'::cube AS cube;
+LINE 1: SELECT '[(1),(2)],'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2,3),(2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2),(1,2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1),(2),'::cube AS cube; -- 2
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
 ERROR:  bad cube representation
-LINE 1: SELECT '(1),(2),'::cube AS cube;
+LINE 1: SELECT '(1),(2),'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3),(2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2,3),(2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2),(1,2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2),(1,2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)ab'::cube AS cube;
+LINE 1: SELECT '(1,2,3)ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2,3)a'::cube AS cube; -- 5
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)a'::cube AS cube;
+LINE 1: SELECT '(1,2,3)a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2)('::cube AS cube; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2)('::cube AS cube;
+LINE 1: SELECT '(1,2)('::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "("
-SELECT '1,2ab'::cube AS cube; -- 6
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2ab'::cube AS cube;
+LINE 1: SELECT '1,2ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1 e7'::cube AS cube; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1 e7'::cube AS cube;
+LINE 1: SELECT '1 e7'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "e"
-SELECT '1,2a'::cube AS cube; -- 7
+SELECT '1,2a'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2a'::cube AS cube;
+LINE 1: SELECT '1,2a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1..2'::cube AS cube;
+LINE 1: SELECT '1..2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ".2"
 --
 -- Testing building cubes from float8 values
 --
-SELECT cube(0::float8);
+SELECT "cube"(0::float8);
  cube 
 ------
  (0)
 (1 row)
 
-SELECT cube(1::float8);
+SELECT "cube"(1::float8);
  cube 
 ------
  (1)
 (1 row)
 
-SELECT cube(1,2);
+SELECT "cube"(1,2);
   cube   
 ---------
  (1),(2)
 (1 row)
 
-SELECT cube(cube(1,2),3);
+SELECT "cube"("cube"(1,2),3);
      cube      
 ---------------
  (1, 3),(2, 3)
 (1 row)
 
-SELECT cube(cube(1,2),3,4);
+SELECT "cube"("cube"(1,2),3,4);
      cube      
 ---------------
  (1, 3),(2, 4)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 5)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 6)
 (1 row)
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
  cube 
 ------
  (0)
 (1 row)
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
         cube         
 ---------------------
  (0, 1, 2),(3, 4, 5)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
 ERROR:  UR and LL arrays must be of same length
-SELECT cube(NULL::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
  cube 
 ------
  
 (1 row)
 
-SELECT cube('{0,1,2}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
         cube_subset        
 ---------------------------
  (5, 3, 1, 1),(8, 7, 6, 6)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
  cube_subset  
 --------------
  (5, 3, 1, 1)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
      cube     
 --------------
  (5, 6, 7, 8)
 (1 row)
 
-SELECT cube(1.37); -- cube_f8
+SELECT "cube"(1.37); -- cube_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(cube(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(1, 24)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 24)
@@ -555,12 +555,12 @@ SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
 DETAIL:  A cube cannot have more than 100 dimensions.
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
@@ -570,37 +570,37 @@ DETAIL:  A cube cannot have more than 100 dimensions.
 --
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -609,97 +609,97 @@ SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1'::cube   < '2'::cube AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1,1'::cube > '1,2'::cube AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,1'::cube < '1,2'::cube AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -707,235 +707,235 @@ SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contains" (the left operand is the cube that entirely encloses the
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
@@ -943,77 +943,77 @@ SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
  cube_distance 
 ---------------
              4
 (1 row)
 
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
  cube_distance 
 ---------------
            0.5
 (1 row)
 
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
  cube_distance 
 ---------------
              0
 (1 row)
 
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
  cube_distance 
 ---------------
            190
 (1 row)
 
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
   cube_distance   
 ------------------
  140.762210837994
 (1 row)
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
+SELECT "cube"('(1,1.2)'::text);
    cube   
 ----------
  (1, 1.2)
 (1 row)
 
-SELECT cube(NULL);
+SELECT "cube"(NULL);
  cube 
 ------
  
 (1 row)
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
+SELECT cube_dim('(0)'::"cube");
  cube_dim 
 ----------
         1
 (1 row)
 
-SELECT cube_dim('(0,0)'::cube);
+SELECT cube_dim('(0,0)'::"cube");
  cube_dim 
 ----------
         2
 (1 row)
 
-SELECT cube_dim('(0,0,0)'::cube);
+SELECT cube_dim('(0,0,0)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
  cube_dim 
 ----------
         5
@@ -1021,55 +1021,55 @@ SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
 
 -- Test of cube_ll_coord function (retrieves LL coodinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ll_coord 
 ---------------
             -1
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ll_coord 
 ---------------
             -2
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
  cube_ll_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
  cube_ll_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
  cube_ll_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
  cube_ll_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
@@ -1077,55 +1077,55 @@ SELECT cube_ll_coord('(42,137)'::cube, 3);
 
 -- Test of cube_ur_coord function (retrieves UR coodinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
  cube_ur_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
  cube_ur_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
@@ -1133,37 +1133,37 @@ SELECT cube_ur_coord('(42,137)'::cube, 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
+SELECT cube_is_point('(0)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
  cube_is_point 
 ---------------
  f
@@ -1171,121 +1171,121 @@ SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 2);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
  cube_enlarge 
 --------------
  (-2),(2)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 2);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-1, -1),(1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
          cube_enlarge          
 -------------------------------
  (-3, -1, -1, -1),(3, 1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 2);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-4, -3),(3, 8)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
    cube_enlarge   
 ------------------
  (-6, -5),(5, 10)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
   cube_enlarge   
 -----------------
  (-2, -1),(1, 6)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
     cube_enlarge     
 ---------------------
  (-0.5, 1),(-0.5, 4)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
@@ -1293,31 +1293,31 @@ SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
       cube_union      
 ----------------------
  (1, 2, 0),(8, 9, 10)
 (1 row)
 
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
         cube_union         
 ---------------------------
  (1, 2, 0, 0),(4, 2, 0, 0)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
   cube_union   
 ---------------
  (1, 2),(4, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
  cube_union 
 ------------
  (1, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
  cube_union 
 ------------
  (1, 2, 0)
@@ -1325,43 +1325,43 @@ SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
    cube_inter    
 -----------------
  (3, 4),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
   cube_inter   
 ---------------
  (3, 4),(6, 5)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
     cube_inter     
 -------------------
  (13, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
     cube_inter    
 ------------------
  (3, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
  cube_inter 
 ------------
  (10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
  cube_inter 
 ------------
  (1, 2, 3)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
      cube_inter      
 ---------------------
  (5, 6, 3),(1, 2, 3)
@@ -1369,13 +1369,13 @@ SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
  cube_size 
 -----------
         88
 (1 row)
 
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(42,137)'::"cube");
  cube_size 
 -----------
          0
@@ -1383,7 +1383,7 @@ SELECT cube_size('(42,137)'::cube);
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 \copy test_cube from 'data/test_cube.data'
 CREATE INDEX test_cube_ix ON test_cube USING gist (c);
 SELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' ORDER BY c;
diff --git a/contrib/cube/expected/cube_2.out b/contrib/cube/expected/cube_2.out
index 3767d0e..747e9ba 100644
--- a/contrib/cube/expected/cube_2.out
+++ b/contrib/cube/expected/cube_2.out
@@ -1,552 +1,552 @@
 --
 --  Test cube datatype
 --
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 --
 -- testing the input and output functions
 --
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1'::cube AS cube;
+SELECT '-1'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1.'::cube AS cube;
+SELECT '1.'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.'::cube AS cube;
+SELECT '-1.'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '.1'::cube AS cube;
+SELECT '.1'::"cube" AS "cube";
  cube  
 -------
  (0.1)
 (1 row)
 
-SELECT '-.1'::cube AS cube;
+SELECT '-.1'::"cube" AS "cube";
   cube  
 --------
  (-0.1)
 (1 row)
 
-SELECT '1.0'::cube AS cube;
+SELECT '1.0'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.0'::cube AS cube;
+SELECT '-1.0'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1e27'::cube AS cube;
+SELECT '1e27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1e27'::cube AS cube;
+SELECT '-1e27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1.0e27'::cube AS cube;
+SELECT '1.0e27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1.0e27'::cube AS cube;
+SELECT '-1.0e27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1e+27'::cube AS cube;
+SELECT '1e+27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1e+27'::cube AS cube;
+SELECT '-1e+27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1.0e+27'::cube AS cube;
+SELECT '1.0e+27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1.0e+27'::cube AS cube;
+SELECT '-1.0e+27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1e-7'::cube AS cube;
+SELECT '1e-7'::"cube" AS "cube";
    cube   
 ----------
  (1e-007)
 (1 row)
 
-SELECT '-1e-7'::cube AS cube;
+SELECT '-1e-7'::"cube" AS "cube";
    cube    
 -----------
  (-1e-007)
 (1 row)
 
-SELECT '1.0e-7'::cube AS cube;
+SELECT '1.0e-7'::"cube" AS "cube";
    cube   
 ----------
  (1e-007)
 (1 row)
 
-SELECT '-1.0e-7'::cube AS cube;
+SELECT '-1.0e-7'::"cube" AS "cube";
    cube    
 -----------
  (-1e-007)
 (1 row)
 
-SELECT '1e-700'::cube AS cube;
+SELECT '1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '-1e-700'::cube AS cube;
+SELECT '-1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '1234567890123456'::cube AS cube;
+SELECT '1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (1.23456789012346e+015)
 (1 row)
 
-SELECT '+1234567890123456'::cube AS cube;
+SELECT '+1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (1.23456789012346e+015)
 (1 row)
 
-SELECT '-1234567890123456'::cube AS cube;
+SELECT '-1234567890123456'::"cube" AS "cube";
            cube           
 --------------------------
  (-1.23456789012346e+015)
 (1 row)
 
-SELECT '.1234567890123456'::cube AS cube;
+SELECT '.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '+.1234567890123456'::cube AS cube;
+SELECT '+.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '-.1234567890123456'::"cube" AS "cube";
          cube         
 ----------------------
  (-0.123456789012346)
 (1 row)
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '(1,2)'::cube AS cube;
+SELECT '(1,2)'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '1,2,3,4,5'::cube AS cube;
+SELECT '1,2,3,4,5'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '(0),(1)'::cube AS cube;
+SELECT '(0),(1)'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '[(0),(0)]'::cube AS cube;
+SELECT '[(0),(0)]'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '[(0),(1)]'::cube AS cube;
+SELECT '[(0),(1)]'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
+SELECT ''::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT ''::cube AS cube;
+LINE 1: SELECT ''::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT 'ABC'::cube AS cube;
+SELECT 'ABC'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT 'ABC'::cube AS cube;
+LINE 1: SELECT 'ABC'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "A"
-SELECT '()'::cube AS cube;
+SELECT '()'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '()'::cube AS cube;
+LINE 1: SELECT '()'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[]'::cube AS cube;
+SELECT '[]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[]'::cube AS cube;
+LINE 1: SELECT '[]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[()]'::cube AS cube;
+SELECT '[()]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[()]'::cube AS cube;
+LINE 1: SELECT '[()]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[(1)]'::cube AS cube;
+SELECT '[(1)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1)]'::cube AS cube;
+LINE 1: SELECT '[(1)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),]'::cube AS cube;
+SELECT '[(1),]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),]'::cube AS cube;
+LINE 1: SELECT '[(1),]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),2]'::cube AS cube;
+SELECT '[(1),2]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),2]'::cube AS cube;
+LINE 1: SELECT '[(1),2]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "2"
-SELECT '[(1),(2),(3)]'::cube AS cube;
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2),(3)]'::cube AS cube;
+LINE 1: SELECT '[(1),(2),(3)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '1,'::cube AS cube;
+SELECT '1,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,'::cube AS cube;
+LINE 1: SELECT '1,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,2,'::cube AS cube;
+SELECT '1,2,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2,'::cube AS cube;
+LINE 1: SELECT '1,2,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,,2'::cube AS cube;
+SELECT '1,,2'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,,2'::cube AS cube;
+LINE 1: SELECT '1,,2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,)'::cube AS cube;
+SELECT '(1,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,)'::cube AS cube;
+LINE 1: SELECT '(1,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,2,)'::cube AS cube;
+SELECT '(1,2,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,)'::cube AS cube;
+LINE 1: SELECT '(1,2,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,,2)'::cube AS cube;
+SELECT '(1,,2)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,,2)'::cube AS cube;
+LINE 1: SELECT '(1,,2)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2)],'::cube AS cube;
+LINE 1: SELECT '[(1),(2)],'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2,3),(2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2),(1,2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1),(2),'::cube AS cube; -- 2
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
 ERROR:  bad cube representation
-LINE 1: SELECT '(1),(2),'::cube AS cube;
+LINE 1: SELECT '(1),(2),'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3),(2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2,3),(2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2),(1,2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2),(1,2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)ab'::cube AS cube;
+LINE 1: SELECT '(1,2,3)ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2,3)a'::cube AS cube; -- 5
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)a'::cube AS cube;
+LINE 1: SELECT '(1,2,3)a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2)('::cube AS cube; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2)('::cube AS cube;
+LINE 1: SELECT '(1,2)('::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "("
-SELECT '1,2ab'::cube AS cube; -- 6
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2ab'::cube AS cube;
+LINE 1: SELECT '1,2ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1 e7'::cube AS cube; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1 e7'::cube AS cube;
+LINE 1: SELECT '1 e7'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "e"
-SELECT '1,2a'::cube AS cube; -- 7
+SELECT '1,2a'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2a'::cube AS cube;
+LINE 1: SELECT '1,2a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1..2'::cube AS cube;
+LINE 1: SELECT '1..2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ".2"
 --
 -- Testing building cubes from float8 values
 --
-SELECT cube(0::float8);
+SELECT "cube"(0::float8);
  cube 
 ------
  (0)
 (1 row)
 
-SELECT cube(1::float8);
+SELECT "cube"(1::float8);
  cube 
 ------
  (1)
 (1 row)
 
-SELECT cube(1,2);
+SELECT "cube"(1,2);
   cube   
 ---------
  (1),(2)
 (1 row)
 
-SELECT cube(cube(1,2),3);
+SELECT "cube"("cube"(1,2),3);
      cube      
 ---------------
  (1, 3),(2, 3)
 (1 row)
 
-SELECT cube(cube(1,2),3,4);
+SELECT "cube"("cube"(1,2),3,4);
      cube      
 ---------------
  (1, 3),(2, 4)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 5)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 6)
 (1 row)
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
  cube 
 ------
  (0)
 (1 row)
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
         cube         
 ---------------------
  (0, 1, 2),(3, 4, 5)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
 ERROR:  UR and LL arrays must be of same length
-SELECT cube(NULL::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
  cube 
 ------
  
 (1 row)
 
-SELECT cube('{0,1,2}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
         cube_subset        
 ---------------------------
  (5, 3, 1, 1),(8, 7, 6, 6)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
  cube_subset  
 --------------
  (5, 3, 1, 1)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
      cube     
 --------------
  (5, 6, 7, 8)
 (1 row)
 
-SELECT cube(1.37); -- cube_f8
+SELECT "cube"(1.37); -- cube_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(cube(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(1, 24)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 24)
@@ -555,12 +555,12 @@ SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
 DETAIL:  A cube cannot have more than 100 dimensions.
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
@@ -570,37 +570,37 @@ DETAIL:  A cube cannot have more than 100 dimensions.
 --
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -609,97 +609,97 @@ SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1'::cube   < '2'::cube AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1,1'::cube > '1,2'::cube AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,1'::cube < '1,2'::cube AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -707,235 +707,235 @@ SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contains" (the left operand is the cube that entirely encloses the
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
@@ -943,77 +943,77 @@ SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
  cube_distance 
 ---------------
              4
 (1 row)
 
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
  cube_distance 
 ---------------
            0.5
 (1 row)
 
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
  cube_distance 
 ---------------
              0
 (1 row)
 
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
  cube_distance 
 ---------------
            190
 (1 row)
 
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
   cube_distance   
 ------------------
  140.762210837994
 (1 row)
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
+SELECT "cube"('(1,1.2)'::text);
    cube   
 ----------
  (1, 1.2)
 (1 row)
 
-SELECT cube(NULL);
+SELECT "cube"(NULL);
  cube 
 ------
  
 (1 row)
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
+SELECT cube_dim('(0)'::"cube");
  cube_dim 
 ----------
         1
 (1 row)
 
-SELECT cube_dim('(0,0)'::cube);
+SELECT cube_dim('(0,0)'::"cube");
  cube_dim 
 ----------
         2
 (1 row)
 
-SELECT cube_dim('(0,0,0)'::cube);
+SELECT cube_dim('(0,0,0)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
  cube_dim 
 ----------
         5
@@ -1021,55 +1021,55 @@ SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
 
 -- Test of cube_ll_coord function (retrieves LL coodinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ll_coord 
 ---------------
             -1
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ll_coord 
 ---------------
             -2
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
  cube_ll_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
  cube_ll_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
  cube_ll_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
  cube_ll_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
@@ -1077,55 +1077,55 @@ SELECT cube_ll_coord('(42,137)'::cube, 3);
 
 -- Test of cube_ur_coord function (retrieves UR coodinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
  cube_ur_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
  cube_ur_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
@@ -1133,37 +1133,37 @@ SELECT cube_ur_coord('(42,137)'::cube, 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
+SELECT cube_is_point('(0)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
  cube_is_point 
 ---------------
  f
@@ -1171,121 +1171,121 @@ SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 2);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
  cube_enlarge 
 --------------
  (-2),(2)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 2);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-1, -1),(1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
          cube_enlarge          
 -------------------------------
  (-3, -1, -1, -1),(3, 1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 2);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-4, -3),(3, 8)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
    cube_enlarge   
 ------------------
  (-6, -5),(5, 10)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
   cube_enlarge   
 -----------------
  (-2, -1),(1, 6)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
     cube_enlarge     
 ---------------------
  (-0.5, 1),(-0.5, 4)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
@@ -1293,31 +1293,31 @@ SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
       cube_union      
 ----------------------
  (1, 2, 0),(8, 9, 10)
 (1 row)
 
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
         cube_union         
 ---------------------------
  (1, 2, 0, 0),(4, 2, 0, 0)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
   cube_union   
 ---------------
  (1, 2),(4, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
  cube_union 
 ------------
  (1, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
  cube_union 
 ------------
  (1, 2, 0)
@@ -1325,43 +1325,43 @@ SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
    cube_inter    
 -----------------
  (3, 4),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
   cube_inter   
 ---------------
  (3, 4),(6, 5)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
     cube_inter     
 -------------------
  (13, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
     cube_inter    
 ------------------
  (3, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
  cube_inter 
 ------------
  (10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
  cube_inter 
 ------------
  (1, 2, 3)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
      cube_inter      
 ---------------------
  (5, 6, 3),(1, 2, 3)
@@ -1369,13 +1369,13 @@ SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
  cube_size 
 -----------
         88
 (1 row)
 
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(42,137)'::"cube");
  cube_size 
 -----------
          0
@@ -1383,7 +1383,7 @@ SELECT cube_size('(42,137)'::cube);
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 \copy test_cube from 'data/test_cube.data'
 CREATE INDEX test_cube_ix ON test_cube USING gist (c);
 SELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' ORDER BY c;
diff --git a/contrib/cube/expected/cube_3.out b/contrib/cube/expected/cube_3.out
index 2aa42be..33baec1 100644
--- a/contrib/cube/expected/cube_3.out
+++ b/contrib/cube/expected/cube_3.out
@@ -1,552 +1,552 @@
 --
 --  Test cube datatype
 --
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 --
 -- testing the input and output functions
 --
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1'::cube AS cube;
+SELECT '-1'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1.'::cube AS cube;
+SELECT '1.'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.'::cube AS cube;
+SELECT '-1.'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '.1'::cube AS cube;
+SELECT '.1'::"cube" AS "cube";
  cube  
 -------
  (0.1)
 (1 row)
 
-SELECT '-.1'::cube AS cube;
+SELECT '-.1'::"cube" AS "cube";
   cube  
 --------
  (-0.1)
 (1 row)
 
-SELECT '1.0'::cube AS cube;
+SELECT '1.0'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.0'::cube AS cube;
+SELECT '-1.0'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1e27'::cube AS cube;
+SELECT '1e27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1e27'::cube AS cube;
+SELECT '-1e27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1.0e27'::cube AS cube;
+SELECT '1.0e27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1.0e27'::cube AS cube;
+SELECT '-1.0e27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1e+27'::cube AS cube;
+SELECT '1e+27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1e+27'::cube AS cube;
+SELECT '-1e+27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1.0e+27'::cube AS cube;
+SELECT '1.0e+27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1.0e+27'::cube AS cube;
+SELECT '-1.0e+27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1e-7'::cube AS cube;
+SELECT '1e-7'::"cube" AS "cube";
    cube   
 ----------
  (1e-007)
 (1 row)
 
-SELECT '-1e-7'::cube AS cube;
+SELECT '-1e-7'::"cube" AS "cube";
    cube    
 -----------
  (-1e-007)
 (1 row)
 
-SELECT '1.0e-7'::cube AS cube;
+SELECT '1.0e-7'::"cube" AS "cube";
    cube   
 ----------
  (1e-007)
 (1 row)
 
-SELECT '-1.0e-7'::cube AS cube;
+SELECT '-1.0e-7'::"cube" AS "cube";
    cube    
 -----------
  (-1e-007)
 (1 row)
 
-SELECT '1e-700'::cube AS cube;
+SELECT '1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '-1e-700'::cube AS cube;
+SELECT '-1e-700'::"cube" AS "cube";
  cube 
 ------
  (-0)
 (1 row)
 
-SELECT '1234567890123456'::cube AS cube;
+SELECT '1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (1.23456789012346e+015)
 (1 row)
 
-SELECT '+1234567890123456'::cube AS cube;
+SELECT '+1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (1.23456789012346e+015)
 (1 row)
 
-SELECT '-1234567890123456'::cube AS cube;
+SELECT '-1234567890123456'::"cube" AS "cube";
            cube           
 --------------------------
  (-1.23456789012346e+015)
 (1 row)
 
-SELECT '.1234567890123456'::cube AS cube;
+SELECT '.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '+.1234567890123456'::cube AS cube;
+SELECT '+.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '-.1234567890123456'::"cube" AS "cube";
          cube         
 ----------------------
  (-0.123456789012346)
 (1 row)
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '(1,2)'::cube AS cube;
+SELECT '(1,2)'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '1,2,3,4,5'::cube AS cube;
+SELECT '1,2,3,4,5'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '(0),(1)'::cube AS cube;
+SELECT '(0),(1)'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '[(0),(0)]'::cube AS cube;
+SELECT '[(0),(0)]'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '[(0),(1)]'::cube AS cube;
+SELECT '[(0),(1)]'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
+SELECT ''::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT ''::cube AS cube;
+LINE 1: SELECT ''::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT 'ABC'::cube AS cube;
+SELECT 'ABC'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT 'ABC'::cube AS cube;
+LINE 1: SELECT 'ABC'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "A"
-SELECT '()'::cube AS cube;
+SELECT '()'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '()'::cube AS cube;
+LINE 1: SELECT '()'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[]'::cube AS cube;
+SELECT '[]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[]'::cube AS cube;
+LINE 1: SELECT '[]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[()]'::cube AS cube;
+SELECT '[()]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[()]'::cube AS cube;
+LINE 1: SELECT '[()]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[(1)]'::cube AS cube;
+SELECT '[(1)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1)]'::cube AS cube;
+LINE 1: SELECT '[(1)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),]'::cube AS cube;
+SELECT '[(1),]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),]'::cube AS cube;
+LINE 1: SELECT '[(1),]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),2]'::cube AS cube;
+SELECT '[(1),2]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),2]'::cube AS cube;
+LINE 1: SELECT '[(1),2]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "2"
-SELECT '[(1),(2),(3)]'::cube AS cube;
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2),(3)]'::cube AS cube;
+LINE 1: SELECT '[(1),(2),(3)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '1,'::cube AS cube;
+SELECT '1,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,'::cube AS cube;
+LINE 1: SELECT '1,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,2,'::cube AS cube;
+SELECT '1,2,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2,'::cube AS cube;
+LINE 1: SELECT '1,2,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,,2'::cube AS cube;
+SELECT '1,,2'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,,2'::cube AS cube;
+LINE 1: SELECT '1,,2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,)'::cube AS cube;
+SELECT '(1,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,)'::cube AS cube;
+LINE 1: SELECT '(1,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,2,)'::cube AS cube;
+SELECT '(1,2,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,)'::cube AS cube;
+LINE 1: SELECT '(1,2,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,,2)'::cube AS cube;
+SELECT '(1,,2)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,,2)'::cube AS cube;
+LINE 1: SELECT '(1,,2)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2)],'::cube AS cube;
+LINE 1: SELECT '[(1),(2)],'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2,3),(2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2),(1,2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1),(2),'::cube AS cube; -- 2
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
 ERROR:  bad cube representation
-LINE 1: SELECT '(1),(2),'::cube AS cube;
+LINE 1: SELECT '(1),(2),'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3),(2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2,3),(2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2),(1,2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2),(1,2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)ab'::cube AS cube;
+LINE 1: SELECT '(1,2,3)ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2,3)a'::cube AS cube; -- 5
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)a'::cube AS cube;
+LINE 1: SELECT '(1,2,3)a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2)('::cube AS cube; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2)('::cube AS cube;
+LINE 1: SELECT '(1,2)('::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "("
-SELECT '1,2ab'::cube AS cube; -- 6
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2ab'::cube AS cube;
+LINE 1: SELECT '1,2ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1 e7'::cube AS cube; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1 e7'::cube AS cube;
+LINE 1: SELECT '1 e7'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "e"
-SELECT '1,2a'::cube AS cube; -- 7
+SELECT '1,2a'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2a'::cube AS cube;
+LINE 1: SELECT '1,2a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1..2'::cube AS cube;
+LINE 1: SELECT '1..2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ".2"
 --
 -- Testing building cubes from float8 values
 --
-SELECT cube(0::float8);
+SELECT "cube"(0::float8);
  cube 
 ------
  (0)
 (1 row)
 
-SELECT cube(1::float8);
+SELECT "cube"(1::float8);
  cube 
 ------
  (1)
 (1 row)
 
-SELECT cube(1,2);
+SELECT "cube"(1,2);
   cube   
 ---------
  (1),(2)
 (1 row)
 
-SELECT cube(cube(1,2),3);
+SELECT "cube"("cube"(1,2),3);
      cube      
 ---------------
  (1, 3),(2, 3)
 (1 row)
 
-SELECT cube(cube(1,2),3,4);
+SELECT "cube"("cube"(1,2),3,4);
      cube      
 ---------------
  (1, 3),(2, 4)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 5)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 6)
 (1 row)
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
  cube 
 ------
  (0)
 (1 row)
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
         cube         
 ---------------------
  (0, 1, 2),(3, 4, 5)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
 ERROR:  UR and LL arrays must be of same length
-SELECT cube(NULL::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
  cube 
 ------
  
 (1 row)
 
-SELECT cube('{0,1,2}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
         cube_subset        
 ---------------------------
  (5, 3, 1, 1),(8, 7, 6, 6)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
  cube_subset  
 --------------
  (5, 3, 1, 1)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
      cube     
 --------------
  (5, 6, 7, 8)
 (1 row)
 
-SELECT cube(1.37); -- cube_f8
+SELECT "cube"(1.37); -- cube_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(cube(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(1, 24)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 24)
@@ -555,12 +555,12 @@ SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
 DETAIL:  A cube cannot have more than 100 dimensions.
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
@@ -570,37 +570,37 @@ DETAIL:  A cube cannot have more than 100 dimensions.
 --
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -609,97 +609,97 @@ SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1'::cube   < '2'::cube AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1,1'::cube > '1,2'::cube AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,1'::cube < '1,2'::cube AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -707,235 +707,235 @@ SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contains" (the left operand is the cube that entirely encloses the
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
@@ -943,77 +943,77 @@ SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
  cube_distance 
 ---------------
              4
 (1 row)
 
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
  cube_distance 
 ---------------
            0.5
 (1 row)
 
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
  cube_distance 
 ---------------
              0
 (1 row)
 
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
  cube_distance 
 ---------------
            190
 (1 row)
 
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
   cube_distance   
 ------------------
  140.762210837994
 (1 row)
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
+SELECT "cube"('(1,1.2)'::text);
    cube   
 ----------
  (1, 1.2)
 (1 row)
 
-SELECT cube(NULL);
+SELECT "cube"(NULL);
  cube 
 ------
  
 (1 row)
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
+SELECT cube_dim('(0)'::"cube");
  cube_dim 
 ----------
         1
 (1 row)
 
-SELECT cube_dim('(0,0)'::cube);
+SELECT cube_dim('(0,0)'::"cube");
  cube_dim 
 ----------
         2
 (1 row)
 
-SELECT cube_dim('(0,0,0)'::cube);
+SELECT cube_dim('(0,0,0)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
  cube_dim 
 ----------
         5
@@ -1021,55 +1021,55 @@ SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
 
 -- Test of cube_ll_coord function (retrieves LL coodinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ll_coord 
 ---------------
             -1
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ll_coord 
 ---------------
             -2
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
  cube_ll_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
  cube_ll_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
  cube_ll_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
  cube_ll_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
@@ -1077,55 +1077,55 @@ SELECT cube_ll_coord('(42,137)'::cube, 3);
 
 -- Test of cube_ur_coord function (retrieves UR coodinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
  cube_ur_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
  cube_ur_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
@@ -1133,37 +1133,37 @@ SELECT cube_ur_coord('(42,137)'::cube, 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
+SELECT cube_is_point('(0)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
  cube_is_point 
 ---------------
  f
@@ -1171,121 +1171,121 @@ SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 2);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
  cube_enlarge 
 --------------
  (-2),(2)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 2);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-1, -1),(1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
          cube_enlarge          
 -------------------------------
  (-3, -1, -1, -1),(3, 1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 2);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-4, -3),(3, 8)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
    cube_enlarge   
 ------------------
  (-6, -5),(5, 10)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
   cube_enlarge   
 -----------------
  (-2, -1),(1, 6)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
     cube_enlarge     
 ---------------------
  (-0.5, 1),(-0.5, 4)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
@@ -1293,31 +1293,31 @@ SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
       cube_union      
 ----------------------
  (1, 2, 0),(8, 9, 10)
 (1 row)
 
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
         cube_union         
 ---------------------------
  (1, 2, 0, 0),(4, 2, 0, 0)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
   cube_union   
 ---------------
  (1, 2),(4, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
  cube_union 
 ------------
  (1, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
  cube_union 
 ------------
  (1, 2, 0)
@@ -1325,43 +1325,43 @@ SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
    cube_inter    
 -----------------
  (3, 4),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
   cube_inter   
 ---------------
  (3, 4),(6, 5)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
     cube_inter     
 -------------------
  (13, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
     cube_inter    
 ------------------
  (3, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
  cube_inter 
 ------------
  (10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
  cube_inter 
 ------------
  (1, 2, 3)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
      cube_inter      
 ---------------------
  (5, 6, 3),(1, 2, 3)
@@ -1369,13 +1369,13 @@ SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
  cube_size 
 -----------
         88
 (1 row)
 
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(42,137)'::"cube");
  cube_size 
 -----------
          0
@@ -1383,7 +1383,7 @@ SELECT cube_size('(42,137)'::cube);
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 \copy test_cube from 'data/test_cube.data'
 CREATE INDEX test_cube_ix ON test_cube USING gist (c);
 SELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' ORDER BY c;
diff --git a/contrib/cube/sql/cube.sql b/contrib/cube/sql/cube.sql
index d58974c..da80472 100644
--- a/contrib/cube/sql/cube.sql
+++ b/contrib/cube/sql/cube.sql
@@ -2,141 +2,141 @@
 --  Test cube datatype
 --
 
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 
 --
 -- testing the input and output functions
 --
 
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
-SELECT '-1'::cube AS cube;
-SELECT '1.'::cube AS cube;
-SELECT '-1.'::cube AS cube;
-SELECT '.1'::cube AS cube;
-SELECT '-.1'::cube AS cube;
-SELECT '1.0'::cube AS cube;
-SELECT '-1.0'::cube AS cube;
-SELECT '1e27'::cube AS cube;
-SELECT '-1e27'::cube AS cube;
-SELECT '1.0e27'::cube AS cube;
-SELECT '-1.0e27'::cube AS cube;
-SELECT '1e+27'::cube AS cube;
-SELECT '-1e+27'::cube AS cube;
-SELECT '1.0e+27'::cube AS cube;
-SELECT '-1.0e+27'::cube AS cube;
-SELECT '1e-7'::cube AS cube;
-SELECT '-1e-7'::cube AS cube;
-SELECT '1.0e-7'::cube AS cube;
-SELECT '-1.0e-7'::cube AS cube;
-SELECT '1e-700'::cube AS cube;
-SELECT '-1e-700'::cube AS cube;
-SELECT '1234567890123456'::cube AS cube;
-SELECT '+1234567890123456'::cube AS cube;
-SELECT '-1234567890123456'::cube AS cube;
-SELECT '.1234567890123456'::cube AS cube;
-SELECT '+.1234567890123456'::cube AS cube;
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
+SELECT '-1'::"cube" AS "cube";
+SELECT '1.'::"cube" AS "cube";
+SELECT '-1.'::"cube" AS "cube";
+SELECT '.1'::"cube" AS "cube";
+SELECT '-.1'::"cube" AS "cube";
+SELECT '1.0'::"cube" AS "cube";
+SELECT '-1.0'::"cube" AS "cube";
+SELECT '1e27'::"cube" AS "cube";
+SELECT '-1e27'::"cube" AS "cube";
+SELECT '1.0e27'::"cube" AS "cube";
+SELECT '-1.0e27'::"cube" AS "cube";
+SELECT '1e+27'::"cube" AS "cube";
+SELECT '-1e+27'::"cube" AS "cube";
+SELECT '1.0e+27'::"cube" AS "cube";
+SELECT '-1.0e+27'::"cube" AS "cube";
+SELECT '1e-7'::"cube" AS "cube";
+SELECT '-1e-7'::"cube" AS "cube";
+SELECT '1.0e-7'::"cube" AS "cube";
+SELECT '-1.0e-7'::"cube" AS "cube";
+SELECT '1e-700'::"cube" AS "cube";
+SELECT '-1e-700'::"cube" AS "cube";
+SELECT '1234567890123456'::"cube" AS "cube";
+SELECT '+1234567890123456'::"cube" AS "cube";
+SELECT '-1234567890123456'::"cube" AS "cube";
+SELECT '.1234567890123456'::"cube" AS "cube";
+SELECT '+.1234567890123456'::"cube" AS "cube";
+SELECT '-.1234567890123456'::"cube" AS "cube";
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
-SELECT '(1,2)'::cube AS cube;
-SELECT '1,2,3,4,5'::cube AS cube;
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
+SELECT '(1,2)'::"cube" AS "cube";
+SELECT '1,2,3,4,5'::"cube" AS "cube";
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
-SELECT '(0),(1)'::cube AS cube;
-SELECT '[(0),(0)]'::cube AS cube;
-SELECT '[(0),(1)]'::cube AS cube;
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
+SELECT '(0),(1)'::"cube" AS "cube";
+SELECT '[(0),(0)]'::"cube" AS "cube";
+SELECT '[(0),(1)]'::"cube" AS "cube";
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
-SELECT 'ABC'::cube AS cube;
-SELECT '()'::cube AS cube;
-SELECT '[]'::cube AS cube;
-SELECT '[()]'::cube AS cube;
-SELECT '[(1)]'::cube AS cube;
-SELECT '[(1),]'::cube AS cube;
-SELECT '[(1),2]'::cube AS cube;
-SELECT '[(1),(2),(3)]'::cube AS cube;
-SELECT '1,'::cube AS cube;
-SELECT '1,2,'::cube AS cube;
-SELECT '1,,2'::cube AS cube;
-SELECT '(1,)'::cube AS cube;
-SELECT '(1,2,)'::cube AS cube;
-SELECT '(1,,2)'::cube AS cube;
+SELECT ''::"cube" AS "cube";
+SELECT 'ABC'::"cube" AS "cube";
+SELECT '()'::"cube" AS "cube";
+SELECT '[]'::"cube" AS "cube";
+SELECT '[()]'::"cube" AS "cube";
+SELECT '[(1)]'::"cube" AS "cube";
+SELECT '[(1),]'::"cube" AS "cube";
+SELECT '[(1),2]'::"cube" AS "cube";
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
+SELECT '1,'::"cube" AS "cube";
+SELECT '1,2,'::"cube" AS "cube";
+SELECT '1,,2'::"cube" AS "cube";
+SELECT '(1,)'::"cube" AS "cube";
+SELECT '(1,2,)'::"cube" AS "cube";
+SELECT '(1,,2)'::"cube" AS "cube";
 
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
-SELECT '(1),(2),'::cube AS cube; -- 2
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
-SELECT '(1,2,3)a'::cube AS cube; -- 5
-SELECT '(1,2)('::cube AS cube; -- 5
-SELECT '1,2ab'::cube AS cube; -- 6
-SELECT '1 e7'::cube AS cube; -- 6
-SELECT '1,2a'::cube AS cube; -- 7
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
+SELECT '1,2a'::"cube" AS "cube"; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 
 --
 -- Testing building cubes from float8 values
 --
 
-SELECT cube(0::float8);
-SELECT cube(1::float8);
-SELECT cube(1,2);
-SELECT cube(cube(1,2),3);
-SELECT cube(cube(1,2),3,4);
-SELECT cube(cube(cube(1,2),3,4),5);
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"(0::float8);
+SELECT "cube"(1::float8);
+SELECT "cube"(1,2);
+SELECT "cube"("cube"(1,2),3);
+SELECT "cube"("cube"(1,2),3,4);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
 
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
-SELECT cube(NULL::float[], '{3}'::float[]);
-SELECT cube('{0,1,2}'::float[]);
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
-SELECT cube(1.37); -- cube_f8
-SELECT cube(1.37, 1.37); -- cube_f8_f8
-SELECT cube(cube(1,1), 42); -- cube_c_f8
-SELECT cube(cube(1,2), 42); -- cube_c_f8
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"(1.37); -- cube_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
 
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
 
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 
 --
 -- testing the  operators
@@ -144,190 +144,190 @@ select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
 
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
 
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
-SELECT '1'::cube   < '2'::cube AS bool;
-SELECT '1,1'::cube > '1,2'::cube AS bool;
-SELECT '1,1'::cube < '1,2'::cube AS bool;
-
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
+
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
 
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
 
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
-
-
--- "contains" (the left operand is the cube that entirely encloses the
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
+
+
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
-SELECT cube(NULL);
+SELECT "cube"('(1,1.2)'::text);
+SELECT "cube"(NULL);
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
-SELECT cube_dim('(0,0)'::cube);
-SELECT cube_dim('(0,0,0)'::cube);
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(0)'::"cube");
+SELECT cube_dim('(0,0)'::"cube");
+SELECT cube_dim('(0,0,0)'::"cube");
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
 
 -- Test of cube_ll_coord function (retrieves LL coodinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
-SELECT cube_ll_coord('(42,137)'::cube, 1);
-SELECT cube_ll_coord('(42,137)'::cube, 2);
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
 
 -- Test of cube_ur_coord function (retrieves UR coodinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
-SELECT cube_ur_coord('(42,137)'::cube, 1);
-SELECT cube_ur_coord('(42,137)'::cube, 2);
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
-SELECT cube_is_point('(0,1,2)'::cube);
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0)'::"cube");
+SELECT cube_is_point('(0,1,2)'::"cube");
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
-SELECT cube_enlarge('(0)'::cube, 0, 1);
-SELECT cube_enlarge('(0)'::cube, 0, 2);
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
-SELECT cube_enlarge('(0)'::cube, 1, 0);
-SELECT cube_enlarge('(0)'::cube, 1, 1);
-SELECT cube_enlarge('(0)'::cube, 1, 2);
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
-SELECT cube_enlarge('(0)'::cube, -1, 0);
-SELECT cube_enlarge('(0)'::cube, -1, 1);
-SELECT cube_enlarge('(0)'::cube, -1, 2);
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
+SELECT cube_size('(42,137)'::"cube");
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 
 \copy test_cube from 'data/test_cube.data'
 
diff --git a/contrib/earthdistance/earthdistance--1.0.sql b/contrib/earthdistance/earthdistance--1.0.sql
index 4af9062..ad22f65 100644
--- a/contrib/earthdistance/earthdistance--1.0.sql
+++ b/contrib/earthdistance/earthdistance--1.0.sql
@@ -27,10 +27,10 @@ AS 'SELECT ''6378168''::float8';
 -- and that the point must be very near the surface of the sphere
 -- centered about the origin with the radius of the earth.
 
-CREATE DOMAIN earth AS cube
+CREATE DOMAIN earth AS "cube"
   CONSTRAINT not_point check(cube_is_point(value))
   CONSTRAINT not_3d check(cube_dim(value) <= 3)
-  CONSTRAINT on_surface check(abs(cube_distance(value, '(0)'::cube) /
+  CONSTRAINT on_surface check(abs(cube_distance(value, '(0)'::"cube") /
   earth() - 1) < '10e-7'::float8);
 
 CREATE FUNCTION sec_to_gc(float8)
@@ -49,7 +49,7 @@ CREATE FUNCTION ll_to_earth(float8, float8)
 RETURNS earth
 LANGUAGE SQL
 IMMUTABLE STRICT
-AS 'SELECT cube(cube(cube(earth()*cos(radians($1))*cos(radians($2))),earth()*cos(radians($1))*sin(radians($2))),earth()*sin(radians($1)))::earth';
+AS 'SELECT "cube"("cube"("cube"(earth()*cos(radians($1))*cos(radians($2))),earth()*cos(radians($1))*sin(radians($2))),earth()*sin(radians($1)))::earth';
 
 CREATE FUNCTION latitude(earth)
 RETURNS float8
@@ -70,7 +70,7 @@ IMMUTABLE STRICT
 AS 'SELECT sec_to_gc(cube_distance($1, $2))';
 
 CREATE FUNCTION earth_box(earth, float8)
-RETURNS cube
+RETURNS "cube"
 LANGUAGE SQL
 IMMUTABLE STRICT
 AS 'SELECT cube_enlarge($1, gc_to_sec($2), 3)';
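(For reviewers: the `ll_to_earth` body in the hunk above encodes the standard latitude/longitude to 3D Cartesian conversion, x = R·cos(lat)·cos(lon), y = R·cos(lat)·sin(lon), z = R·sin(lat), built up via nested `"cube"()` calls. A minimal Python sketch of that arithmetic, purely illustrative and not part of the patch; the constant mirrors the `earth()` radius defined earlier in this file.)

```python
import math

EARTH_RADIUS = 6378168.0  # same value as the earth() SQL function above


def ll_to_xyz(lat_deg, lon_deg, r=EARTH_RADIUS):
    """Mirror the arithmetic inside ll_to_earth(): convert latitude and
    longitude in degrees to a 3D point on a sphere of radius r."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    return (r * math.cos(lat) * math.cos(lon),
            r * math.cos(lat) * math.sin(lon),
            r * math.sin(lat))


# The on_surface domain constraint above asserts exactly this property:
# every converted point lies (almost) exactly at distance earth() from
# the origin, regardless of the input coordinates.
x, y, z = ll_to_xyz(30, 60)
```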
diff --git a/contrib/earthdistance/expected/earthdistance.out b/contrib/earthdistance/expected/earthdistance.out
index 9bd556f..f99276f 100644
--- a/contrib/earthdistance/expected/earthdistance.out
+++ b/contrib/earthdistance/expected/earthdistance.out
@@ -9,7 +9,7 @@
 --
 CREATE EXTENSION earthdistance;  -- fail, must install cube first
 ERROR:  required extension "cube" is not installed
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 CREATE EXTENSION earthdistance;
 --
 -- The radius of the Earth we are using.
@@ -892,7 +892,7 @@ SELECT cube_dim(ll_to_earth(0,0)) <= 3;
  t
 (1 row)
 
-SELECT abs(cube_distance(ll_to_earth(0,0), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(0,0), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
  ?column? 
 ----------
@@ -910,7 +910,7 @@ SELECT cube_dim(ll_to_earth(30,60)) <= 3;
  t
 (1 row)
 
-SELECT abs(cube_distance(ll_to_earth(30,60), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(30,60), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
  ?column? 
 ----------
@@ -928,7 +928,7 @@ SELECT cube_dim(ll_to_earth(60,90)) <= 3;
  t
 (1 row)
 
-SELECT abs(cube_distance(ll_to_earth(60,90), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(60,90), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
  ?column? 
 ----------
@@ -946,7 +946,7 @@ SELECT cube_dim(ll_to_earth(-30,-90)) <= 3;
  t
 (1 row)
 
-SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
  ?column? 
 ----------
@@ -959,35 +959,35 @@ SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::cube) / earth() - 1) <
 -- list what's installed
 \dT
                                               List of data types
- Schema | Name  |                                         Description                                         
---------+-------+---------------------------------------------------------------------------------------------
- public | cube  | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
- public | earth | 
+ Schema |  Name  |                                         Description                                         
+--------+--------+---------------------------------------------------------------------------------------------
+ public | "cube" | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
+ public | earth  | 
 (2 rows)
 
-drop extension cube;  -- fail, earthdistance requires it
+drop extension "cube";  -- fail, earthdistance requires it
 ERROR:  cannot drop extension cube because other objects depend on it
 DETAIL:  extension earthdistance depends on extension cube
 HINT:  Use DROP ... CASCADE to drop the dependent objects too.
 drop extension earthdistance;
-drop type cube;  -- fail, extension cube requires it
-ERROR:  cannot drop type cube because extension cube requires it
+drop type "cube";  -- fail, extension cube requires it
+ERROR:  cannot drop type "cube" because extension cube requires it
 HINT:  You can drop extension cube instead.
 -- list what's installed
 \dT
-                                             List of data types
- Schema | Name |                                         Description                                         
---------+------+---------------------------------------------------------------------------------------------
- public | cube | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
+                                              List of data types
+ Schema |  Name  |                                         Description                                         
+--------+--------+---------------------------------------------------------------------------------------------
+ public | "cube" | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
 (1 row)
 
-create table foo (f1 cube, f2 int);
-drop extension cube;  -- fail, foo.f1 requires it
+create table foo (f1 "cube", f2 int);
+drop extension "cube";  -- fail, foo.f1 requires it
 ERROR:  cannot drop extension cube because other objects depend on it
-DETAIL:  table foo column f1 depends on type cube
+DETAIL:  table foo column f1 depends on type "cube"
 HINT:  Use DROP ... CASCADE to drop the dependent objects too.
 drop table foo;
-drop extension cube;
+drop extension "cube";
 -- list what's installed
 \dT
      List of data types
@@ -1008,7 +1008,7 @@ drop extension cube;
 (0 rows)
 
 create schema c;
-create extension cube with schema c;
+create extension "cube" with schema c;
 -- list what's installed
 \dT public.*
      List of data types
@@ -1029,23 +1029,23 @@ create extension cube with schema c;
 (0 rows)
 
 \dT c.*
-                                              List of data types
- Schema |  Name  |                                         Description                                         
---------+--------+---------------------------------------------------------------------------------------------
- c      | c.cube | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
+                                               List of data types
+ Schema |   Name   |                                         Description                                         
+--------+----------+---------------------------------------------------------------------------------------------
+ c      | c."cube" | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
 (1 row)
 
-create table foo (f1 c.cube, f2 int);
-drop extension cube;  -- fail, foo.f1 requires it
+create table foo (f1 c."cube", f2 int);
+drop extension "cube";  -- fail, foo.f1 requires it
 ERROR:  cannot drop extension cube because other objects depend on it
-DETAIL:  table foo column f1 depends on type c.cube
+DETAIL:  table foo column f1 depends on type c."cube"
 HINT:  Use DROP ... CASCADE to drop the dependent objects too.
 drop schema c;  -- fail, cube requires it
 ERROR:  cannot drop schema c because other objects depend on it
 DETAIL:  extension cube depends on schema c
-table foo column f1 depends on type c.cube
+table foo column f1 depends on type c."cube"
 HINT:  Use DROP ... CASCADE to drop the dependent objects too.
-drop extension cube cascade;
+drop extension "cube" cascade;
 NOTICE:  drop cascades to table foo column f1
 \d foo
       Table "public.foo"
diff --git a/contrib/earthdistance/sql/earthdistance.sql b/contrib/earthdistance/sql/earthdistance.sql
index 8604502..35dd9b8 100644
--- a/contrib/earthdistance/sql/earthdistance.sql
+++ b/contrib/earthdistance/sql/earthdistance.sql
@@ -9,7 +9,7 @@
 --
 
 CREATE EXTENSION earthdistance;  -- fail, must install cube first
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 CREATE EXTENSION earthdistance;
 
 --
@@ -284,19 +284,19 @@ SELECT earth_box(ll_to_earth(90,180),
 
 SELECT is_point(ll_to_earth(0,0));
 SELECT cube_dim(ll_to_earth(0,0)) <= 3;
-SELECT abs(cube_distance(ll_to_earth(0,0), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(0,0), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
 SELECT is_point(ll_to_earth(30,60));
 SELECT cube_dim(ll_to_earth(30,60)) <= 3;
-SELECT abs(cube_distance(ll_to_earth(30,60), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(30,60), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
 SELECT is_point(ll_to_earth(60,90));
 SELECT cube_dim(ll_to_earth(60,90)) <= 3;
-SELECT abs(cube_distance(ll_to_earth(60,90), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(60,90), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
 SELECT is_point(ll_to_earth(-30,-90));
 SELECT cube_dim(ll_to_earth(-30,-90)) <= 3;
-SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
 
 --
@@ -306,22 +306,22 @@ SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::cube) / earth() - 1) <
 -- list what's installed
 \dT
 
-drop extension cube;  -- fail, earthdistance requires it
+drop extension "cube";  -- fail, earthdistance requires it
 
 drop extension earthdistance;
 
-drop type cube;  -- fail, extension cube requires it
+drop type "cube";  -- fail, extension cube requires it
 
 -- list what's installed
 \dT
 
-create table foo (f1 cube, f2 int);
+create table foo (f1 "cube", f2 int);
 
-drop extension cube;  -- fail, foo.f1 requires it
+drop extension "cube";  -- fail, foo.f1 requires it
 
 drop table foo;
 
-drop extension cube;
+drop extension "cube";
 
 -- list what's installed
 \dT
@@ -330,7 +330,7 @@ drop extension cube;
 
 create schema c;
 
-create extension cube with schema c;
+create extension "cube" with schema c;
 
 -- list what's installed
 \dT public.*
@@ -338,13 +338,13 @@ create extension cube with schema c;
 \do public.*
 \dT c.*
 
-create table foo (f1 c.cube, f2 int);
+create table foo (f1 c."cube", f2 int);
 
-drop extension cube;  -- fail, foo.f1 requires it
+drop extension "cube";  -- fail, foo.f1 requires it
 
 drop schema c;  -- fail, cube requires it
 
-drop extension cube cascade;
+drop extension "cube" cascade;
 
 \d foo
 
#43Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Erik Rijkers (#41)
Re: Final Patch for GROUPING SETS - unrecognized node type: 347

"Erik" == Erik Rijkers <er@xs4all.nl> writes:

They apply cleanly for me at 2bde297 whether with git apply or
patch, except for the contrib one (which you don't need unless you
want to run the contrib regression tests without applying the
gsp-u patch).

Erik> Ah, I had not realised that. Excluding that contrib-patch and
Erik> only applying these three:

Erik> gsp1.patch
Erik> gsp2.patch
Erik> gsp-doc.patch

Erik> does indeed work (applies, compiles).

I put up a rebased contrib patch anyway (linked off the CF).

Did the "unrecognized node type" error go away, or do we still need to
look into that?

--
Andrew (irc:RhodiumToad)

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#44Erik Rijkers
er@xs4all.nl
In reply to: Andrew Gierth (#43)
Re: Final Patch for GROUPING SETS - unrecognized node type: 347

On Tue, August 26, 2014 14:24, Andrew Gierth wrote:

"Erik" == Erik Rijkers <er@xs4all.nl> writes:

They apply cleanly for me at 2bde297 whether with git apply or
patch, except for the contrib one (which you don't need unless you
want to run the contrib regression tests without applying the
gsp-u patch).

Erik> Ah, I had not realised that. Excluding that contrib-patch and
Erik> only applying these three:

Erik> gsp1.patch
Erik> gsp2.patch
Erik> gsp-doc.patch

Erik> does indeed work (applies, compiles).

I put up a rebased contrib patch anyway (linked off the CF).

Did the "unrecognized node type" error go away, or do we still need to
look into that?

Yes, it did go away; looks fine now:

select brand , size , grouping(brand, size) , sum(sales) from items_sold group by rollup(brand, size) ;
 brand | size | grouping | sum
-------+------+----------+-----
 Bar   | L    |        0 |   5
 Bar   | M    |        0 |  15
 Bar   |      |        1 |  20
 Foo   | L    |        0 |  10
 Foo   | M    |        0 |  20
 Foo   |      |        1 |  30
       |      |        3 |  50
(7 rows)

I'm a bit unclear why the bottom-row 'grouping' value is 3. Shouldn't that be 2?

But I'm still reading the documentation so it's perhaps too early to ask...
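[For reference, the 3 is in fact the expected value: per the spec, GROUPING() returns a bitmask with one bit per argument (leftmost argument = most significant bit), where a bit is set when that column is *not* part of the current grouping set. In the grand-total row neither brand nor size is grouped, so the mask is binary 11 = 3. A minimal, illustrative Python sketch of that logic (names are hypothetical; the real computation lives in ExecEvalGroupingExpr in the phase-1 patch):

```python
def grouping_value(args, grouped_cols):
    """Bitmask semantics of GROUPING(): args is the list of columns passed
    to GROUPING(), grouped_cols the set of columns actually grouped in the
    current grouping set.  Leftmost argument ends up as the high bit."""
    result = 0
    for col in args:
        result <<= 1
        if col not in grouped_cols:
            result |= 1
    return result

# ROLLUP(brand, size) produces the grouping sets (brand, size), (brand), ().
print(grouping_value(["brand", "size"], {"brand", "size"}))  # detail rows: 0
print(grouping_value(["brand", "size"], {"brand"}))          # brand subtotal: 1
print(grouping_value(["brand", "size"], set()))              # grand total: 3
```
]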

Thanks,

Erik Rijkers


#45Robert Haas
robertmhaas@gmail.com
In reply to: Andrew Gierth (#34)
Re: WIP Patch for GROUPING SETS phase 1

On Mon, Aug 25, 2014 at 1:35 AM, Andrew Gierth
<andrew@tao11.riddles.org.uk> wrote:

If you look at the latest patch post, there's a small patch in it that
does nothing but unreserve the keywords and fix ruleutils to make
deparse/parse work. The required fix to ruleutils is an example of
violating your "four kinds of keywords" principle, but quoting
keywords still works.

I think it would be intolerable to lose the ability to quote keywords.
That could easily create situations where there's no reasonable way to
dump an older database in such a fashion that it can be reloaded into
a newer database. So it's good that you avoided that.

The "four kinds of keywords" principle is obviously much less
absolute. We've talked before about introducing additional categories
of keywords, and that might be a good thing to do for one reason or
another. But I think it's not good to do it in a highly idiosyncratic
way: I previously proposed reserving concurrently only when it follows
CREATE INDEX, and not in any other context, but Tom argued that it had
to become a type_func_name_keyword since users would be confused to
find that concurrently (but not any other keyword) needed quoting
there. In retrospect, I tend to think he probably had it right.
There is a good amount of third-party software out there that tries to
be smart about quoting PostgreSQL keywords - for example, pgAdmin has
code for that, or did last I looked - so by making things more
complicated, we run the risk not only of bugs in our own software but
also bugs in other people's software, as well as user confusion. So I
still think the right solution is probably to reserve CUBE across the
board, and not just in the narrowest context that we can get away
with.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#46Erik Rijkers
er@xs4all.nl
In reply to: Andrew Gierth (#43)
Re: Final Patch for GROUPING SETS - unrecognized node type: 347

On Tue, August 26, 2014 14:24, Andrew Gierth wrote:

"Erik" == Erik Rijkers <er@xs4all.nl> writes:

They apply cleanly for me at 2bde297 whether with git apply or
patch, except for the contrib one (which you don't need unless you
want to run the contrib regression tests without applying the
gsp-u patch).

Erik> Ah, I had not realised that. Excluding that contrib-patch and
Erik> only applying these three:

Erik> gsp1.patch
Erik> gsp2.patch
Erik> gsp-doc.patch

Erik> does indeed work (applies, compiles).

I put up a rebased contrib patch anyway (linked off the CF).

Did the "unrecognized node type" error go away, or do we still need to
look into that?

I have found that the "unrecognized node type" error is caused by:

shared_preload_libraries = pg_stat_statements

in postgresql.conf (as my default compile script was doing).

If I disable that line the error goes away.

I don't know exactly what that means for the grouping sets patches, but I thought I'd mention it here.

Otherwise I've not run into any problems with GROUPING SETS.

Erik Rijkers


#47Atri Sharma
atri.jiit@gmail.com
In reply to: Erik Rijkers (#46)
Re: Final Patch for GROUPING SETS - unrecognized node type: 347

On Sun, Aug 31, 2014 at 9:07 PM, Erik Rijkers <er@xs4all.nl> wrote:

On Tue, August 26, 2014 14:24, Andrew Gierth wrote:

"Erik" == Erik Rijkers <er@xs4all.nl> writes:

They apply cleanly for me at 2bde297 whether with git apply or
patch, except for the contrib one (which you don't need unless you
want to run the contrib regression tests without applying the
gsp-u patch).

Erik> Ah, I had not realised that. Excluding that contrib-patch and
Erik> only applying these three:

Erik> gsp1.patch
Erik> gsp2.patch
Erik> gsp-doc.patch

Erik> does indeed work (applies, compiles).

I put up a rebased contrib patch anyway (linked off the CF).

Did the "unrecognized node type" error go away, or do we still need to
look into that?

I have found that the "unrecognized node type" error is caused by:

shared_preload_libraries = pg_stat_statements

in postgresql.conf (as my default compile script was doing).

If I disable that line the error goes away.

I think that's more of a library linking problem than a problem with
the patch. I couldn't reproduce it, though.

Regards,

Atri

--
Regards,

Atri
*l'apprenant*

#48Andres Freund
andres@2ndquadrant.com
In reply to: Atri Sharma (#47)
Re: Final Patch for GROUPING SETS - unrecognized node type: 347

On 2014-08-31 21:09:59 +0530, Atri Sharma wrote:

On Sun, Aug 31, 2014 at 9:07 PM, Erik Rijkers <er@xs4all.nl> wrote:

I have found that the "unrecognized node type" error is caused by:

It's a warning, not an error, right?

shared_preload_libraries = pg_stat_statements

in postgresql.conf (as my default compile script was doing).

If I disable that line the error goes away.

I think thats more of a library linking problem rather than a problem with
the patch. I couldnt reproduce it,though.

I think it's vastly more likely that the patch simply didn't add the new
expression types to pg_stat_statements.c:JumbleExpr().

Greetings,

Andres Freund

--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


#49Atri Sharma
atri.jiit@gmail.com
In reply to: Andres Freund (#48)
Re: Final Patch for GROUPING SETS - unrecognized node type: 347

On Sunday, August 31, 2014, Andres Freund <andres@2ndquadrant.com> wrote:

On 2014-08-31 21:09:59 +0530, Atri Sharma wrote:

On Sun, Aug 31, 2014 at 9:07 PM, Erik Rijkers <er@xs4all.nl

<javascript:;>> wrote:

I have found that the "unrecognized node type" error is caused by:

It's a warning, not an error, right?

shared_preload_libraries = pg_stat_statements

in postgresql.conf (as my default compile script was doing).

If I disable that line the error goes away.

I think that's more of a library linking problem than a problem with
the patch. I couldn't reproduce it, though.

I think it's vastly more likely that the patch simply didn't add the new
expression types to pg_stat_statements.c:JumbleExpr().

I must have run the above diagnosis incorrectly, then; I will
check. Thanks for the heads up!

Regards,

Atri

--
Regards,

Atri
*l'apprenant*

#50Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Atri Sharma (#49)
5 attachment(s)
Re: Final Patch for GROUPING SETS - unrecognized node type: 347

Recut patches:

gsp1.patch - phase 1 code patch (full syntax, limited functionality)
gsp2.patch - phase 2 code patch (adds full functionality using the
new chained aggregate mechanism)
gsp-doc.patch - docs
gsp-contrib.patch - quote "cube" in contrib/cube and contrib/earthdistance,
intended primarily for testing pending a decision on
renaming contrib/cube or unreserving keywords
gsp-u.patch - proposed method to unreserve CUBE and ROLLUP

(the contrib patch is not necessary if the -u patch is used; the
contrib/pg_stat_statements fixes are in the phase1 patch)

--
Andrew (irc:RhodiumToad)

Attachments:

gsp1.patchtext/x-patchDownload
diff --git a/contrib/pg_stat_statements/pg_stat_statements.c b/contrib/pg_stat_statements/pg_stat_statements.c
index 799242b..9419656 100644
--- a/contrib/pg_stat_statements/pg_stat_statements.c
+++ b/contrib/pg_stat_statements/pg_stat_statements.c
@@ -2200,6 +2200,7 @@ JumbleQuery(pgssJumbleState *jstate, Query *query)
 	JumbleExpr(jstate, (Node *) query->targetList);
 	JumbleExpr(jstate, (Node *) query->returningList);
 	JumbleExpr(jstate, (Node *) query->groupClause);
+	JumbleExpr(jstate, (Node *) query->groupingSets);
 	JumbleExpr(jstate, query->havingQual);
 	JumbleExpr(jstate, (Node *) query->windowClause);
 	JumbleExpr(jstate, (Node *) query->distinctClause);
@@ -2655,6 +2656,28 @@ JumbleExpr(pgssJumbleState *jstate, Node *node)
 				JumbleExpr(jstate, rtfunc->funcexpr);
 			}
 			break;
+		case T_GroupingSet:
+			{
+				GroupingSet *gsnode = (GroupingSet *) node;
+
+				JumbleExpr(jstate, (Node *) gsnode->content);
+			}
+			break;
+		case T_Grouping:
+			{
+				Grouping *grpnode = (Grouping *) node;
+
+				JumbleExpr(jstate, (Node *) grpnode->refs);
+			}
+			break;
+		case T_IntList:
+			{
+				foreach(temp, (List *) node)
+				{
+					APP_JUMB(lfirst_int(temp));
+				}
+			}
+			break;
 		default:
 			/* Only a warning, since we can stumble along anyway */
 			elog(WARNING, "unrecognized node type: %d",
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 781a736..479ae7e 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -78,6 +78,9 @@ static void show_merge_append_keys(MergeAppendState *mstate, List *ancestors,
 					   ExplainState *es);
 static void show_agg_keys(AggState *astate, List *ancestors,
 			  ExplainState *es);
+static void show_grouping_set_keys(PlanState *planstate, const char *qlabel,
+				int nkeys, AttrNumber *keycols, List *gsets,
+				List *ancestors, ExplainState *es);
 static void show_group_keys(GroupState *gstate, List *ancestors,
 				ExplainState *es);
 static void show_sort_group_keys(PlanState *planstate, const char *qlabel,
@@ -1778,17 +1781,80 @@ show_agg_keys(AggState *astate, List *ancestors,
 {
 	Agg		   *plan = (Agg *) astate->ss.ps.plan;
 
-	if (plan->numCols > 0)
+	if (plan->numCols > 0 || plan->groupingSets)
 	{
 		/* The key columns refer to the tlist of the child plan */
 		ancestors = lcons(astate, ancestors);
-		show_sort_group_keys(outerPlanState(astate), "Group Key",
-							 plan->numCols, plan->grpColIdx,
-							 ancestors, es);
+		if (plan->groupingSets)
+			show_grouping_set_keys(outerPlanState(astate), "Grouping Sets",
+								   plan->numCols, plan->grpColIdx,
+								   plan->groupingSets,
+								   ancestors, es);
+		else
+			show_sort_group_keys(outerPlanState(astate), "Group Key",
+								 plan->numCols, plan->grpColIdx,
+								 ancestors, es);
 		ancestors = list_delete_first(ancestors);
 	}
 }
 
+static void
+show_grouping_set_keys(PlanState *planstate, const char *qlabel,
+					   int nkeys, AttrNumber *keycols, List *gsets,
+					   List *ancestors, ExplainState *es)
+{
+	Plan	   *plan = planstate->plan;
+	List	   *context;
+	List	   *result = NIL;
+	bool		useprefix;
+	char	   *exprstr;
+	StringInfoData buf;
+	ListCell   *lc;
+	ListCell   *lc2;
+
+	if (gsets == NIL)
+		return;
+
+	/* Set up deparsing context */
+	context = deparse_context_for_planstate((Node *) planstate,
+											ancestors,
+											es->rtable,
+											es->rtable_names);
+	useprefix = (list_length(es->rtable) > 1 || es->verbose);
+
+	foreach(lc, gsets)
+	{
+		char *sep = "";
+
+		initStringInfo(&buf);
+		appendStringInfoString(&buf, "(");
+
+		foreach(lc2, (List *) lfirst(lc))
+		{
+			Index		i = lfirst_int(lc2);
+			AttrNumber	keyresno = keycols[i];
+			TargetEntry *target = get_tle_by_resno(plan->targetlist,
+												   keyresno);
+
+			if (!target)
+				elog(ERROR, "no tlist entry for key %d", keyresno);
+			/* Deparse the expression, showing any top-level cast */
+			exprstr = deparse_expression((Node *) target->expr, context,
+										 useprefix, true);
+
+			appendStringInfoString(&buf, sep);
+			appendStringInfoString(&buf, exprstr);
+			sep = ", ";
+		}
+
+		appendStringInfoString(&buf, ")");
+
+		result = lappend(result, buf.data);
+	}
+
+	ExplainPropertyList(qlabel, result, es);
+}
+
 /*
  * Show the grouping keys for a Group node.
  */
diff --git a/src/backend/executor/execQual.c b/src/backend/executor/execQual.c
index 7cfa63f..5fb61b0 100644
--- a/src/backend/executor/execQual.c
+++ b/src/backend/executor/execQual.c
@@ -74,6 +74,8 @@ static Datum ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
 				  bool *isNull, ExprDoneCond *isDone);
 static Datum ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
 					  bool *isNull, ExprDoneCond *isDone);
+static Datum ExecEvalScalarGroupedVarFast(ExprState *exprstate, ExprContext *econtext,
+					  bool *isNull, ExprDoneCond *isDone);
 static Datum ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate,
 					ExprContext *econtext,
 					bool *isNull, ExprDoneCond *isDone);
@@ -181,6 +183,8 @@ static Datum ExecEvalArrayCoerceExpr(ArrayCoerceExprState *astate,
 						bool *isNull, ExprDoneCond *isDone);
 static Datum ExecEvalCurrentOfExpr(ExprState *exprstate, ExprContext *econtext,
 					  bool *isNull, ExprDoneCond *isDone);
+static Datum ExecEvalGroupingExpr(GroupingState *gstate, ExprContext *econtext,
+								  bool *isNull, ExprDoneCond *isDone);
 
 
 /* ----------------------------------------------------------------
@@ -568,6 +572,8 @@ ExecEvalWindowFunc(WindowFuncExprState *wfunc, ExprContext *econtext,
  * Note: ExecEvalScalarVar is executed only the first time through in a given
  * plan; it changes the ExprState's function pointer to pass control directly
  * to ExecEvalScalarVarFast after making one-time checks.
+ *
+ * We share this code with GroupedVar for simplicity.
  * ----------------------------------------------------------------
  */
 static Datum
@@ -645,8 +651,24 @@ ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
 		}
 	}
 
-	/* Skip the checking on future executions of node */
-	exprstate->evalfunc = ExecEvalScalarVarFast;
+	if (IsA(variable, GroupedVar))
+	{
+		Assert(variable->varno == OUTER_VAR);
+
+		/* Skip the checking on future executions of node */
+		exprstate->evalfunc = ExecEvalScalarGroupedVarFast;
+
+		if (!bms_is_member(attnum, econtext->grouped_cols))
+		{
+			*isNull = true;
+			return (Datum) 0;
+		}
+	}
+	else
+	{
+		/* Skip the checking on future executions of node */
+		exprstate->evalfunc = ExecEvalScalarVarFast;
+	}
 
 	/* Fetch the value from the slot */
 	return slot_getattr(slot, attnum, isNull);
@@ -694,6 +716,31 @@ ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
 	return slot_getattr(slot, attnum, isNull);
 }
 
+static Datum
+ExecEvalScalarGroupedVarFast(ExprState *exprstate, ExprContext *econtext,
+							 bool *isNull, ExprDoneCond *isDone)
+{
+	GroupedVar *variable = (GroupedVar *) exprstate->expr;
+	TupleTableSlot *slot;
+	AttrNumber	attnum;
+
+	if (isDone)
+		*isDone = ExprSingleResult;
+
+	slot = econtext->ecxt_outertuple;
+
+	attnum = variable->varattno;
+
+	if (!bms_is_member(attnum, econtext->grouped_cols))
+	{
+		*isNull = true;
+		return (Datum) 0;
+	}
+
+	/* Fetch the value from the slot */
+	return slot_getattr(slot, attnum, isNull);
+}
+
 /* ----------------------------------------------------------------
  *		ExecEvalWholeRowVar
  *
@@ -2987,6 +3034,40 @@ ExecEvalCaseTestExpr(ExprState *exprstate,
 	return econtext->caseValue_datum;
 }
 
+/*
+ * ExecEvalGroupingExpr
+ * Return a bitmask with a bit for each column.
+ * A bit is set if the column is not a part of grouping.
+ */
+
+static Datum
+ExecEvalGroupingExpr(GroupingState *gstate,
+					 ExprContext *econtext,
+					 bool *isNull,
+					 ExprDoneCond *isDone)
+{
+	int result = 0;
+	int current_val= 0;
+	ListCell *lc;
+
+	if (isDone)
+		*isDone = ExprSingleResult;
+
+	*isNull = false;
+
+	foreach(lc, (gstate->clauses))
+	{
+		current_val = lfirst_int(lc);
+
+		result = result << 1;
+
+		if (!bms_is_member(current_val, econtext->grouped_cols))
+			result = result | 1;
+	}
+
+	return (Datum) result;
+}
+
 /* ----------------------------------------------------------------
  *		ExecEvalArray - ARRAY[] expressions
  * ----------------------------------------------------------------
@@ -4385,6 +4466,32 @@ ExecInitExpr(Expr *node, PlanState *parent)
 				state->evalfunc = ExecEvalScalarVar;
 			}
 			break;
+		case T_GroupedVar:
+			Assert(((Var *) node)->varattno != InvalidAttrNumber);
+			state = (ExprState *) makeNode(ExprState);
+			state->evalfunc = ExecEvalScalarVar;
+			break;
+		case T_Grouping:
+			{
+				Grouping	   *grp_node = (Grouping *) node;
+				GroupingState  *grp_state = makeNode(GroupingState);
+				Agg			   *agg = NULL;
+
+				if (!parent
+					|| !IsA(parent->plan, Agg))
+					elog(ERROR, "Parent of GROUPING is not Agg node");
+
+				agg = (Agg *) (parent->plan);
+
+				if (agg->groupingSets)
+					grp_state->clauses = grp_node->cols;
+				else
+					grp_state->clauses = NIL;
+
+				state = (ExprState *) grp_state;
+				state->evalfunc = (ExprStateEvalFunc) ExecEvalGroupingExpr;
+			}
+			break;
 		case T_Const:
 			state = (ExprState *) makeNode(ExprState);
 			state->evalfunc = ExecEvalConst;
diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c
index d5e1273..ad8a3d0 100644
--- a/src/backend/executor/execUtils.c
+++ b/src/backend/executor/execUtils.c
@@ -653,7 +653,7 @@ get_last_attnums(Node *node, ProjectionInfo *projInfo)
 	 * because those do not represent expressions to be evaluated within the
 	 * overall targetlist's econtext.
 	 */
-	if (IsA(node, Aggref))
+	if (IsA(node, Aggref) || IsA(node, Grouping))
 		return false;
 	if (IsA(node, WindowFunc))
 		return false;
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index 510d1c5..beecd36 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -243,7 +243,7 @@ typedef struct AggStatePerAggData
 	 * rest.
 	 */
 
-	Tuplesortstate *sortstate;	/* sort object, if DISTINCT or ORDER BY */
+	Tuplesortstate **sortstate;	/* sort object, if DISTINCT or ORDER BY */
 
 	/*
 	 * This field is a pre-initialized FunctionCallInfo struct used for
@@ -304,7 +304,8 @@ typedef struct AggHashEntryData
 
 static void initialize_aggregates(AggState *aggstate,
 					  AggStatePerAgg peragg,
-					  AggStatePerGroup pergroup);
+					  AggStatePerGroup pergroup,
+					  int numReinitialize);
 static void advance_transition_function(AggState *aggstate,
 							AggStatePerAgg peraggstate,
 							AggStatePerGroup pergroupstate);
@@ -338,81 +339,101 @@ static Datum GetAggInitVal(Datum textInitVal, Oid transtype);
 static void
 initialize_aggregates(AggState *aggstate,
 					  AggStatePerAgg peragg,
-					  AggStatePerGroup pergroup)
+					  AggStatePerGroup pergroup,
+					  int numReinitialize)
 {
 	int			aggno;
+	int         numGroupingSets = Max(aggstate->numsets, 1);
+	int         i = 0;
+
+	if (numReinitialize < 1)
+		numReinitialize = numGroupingSets;
 
 	for (aggno = 0; aggno < aggstate->numaggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &peragg[aggno];
-		AggStatePerGroup pergroupstate = &pergroup[aggno];
 
 		/*
 		 * Start a fresh sort operation for each DISTINCT/ORDER BY aggregate.
 		 */
 		if (peraggstate->numSortCols > 0)
 		{
-			/*
-			 * In case of rescan, maybe there could be an uncompleted sort
-			 * operation?  Clean it up if so.
-			 */
-			if (peraggstate->sortstate)
-				tuplesort_end(peraggstate->sortstate);
+			for (i = 0; i < numReinitialize; i++)
+			{
+				/*
+				 * In case of rescan, maybe there could be an uncompleted sort
+				 * operation?  Clean it up if so.
+				 */
+				if (peraggstate->sortstate[i])
+					tuplesort_end(peraggstate->sortstate[i]);
 
-			/*
-			 * We use a plain Datum sorter when there's a single input column;
-			 * otherwise sort the full tuple.  (See comments for
-			 * process_ordered_aggregate_single.)
-			 */
-			peraggstate->sortstate =
-				(peraggstate->numInputs == 1) ?
-				tuplesort_begin_datum(peraggstate->evaldesc->attrs[0]->atttypid,
-									  peraggstate->sortOperators[0],
-									  peraggstate->sortCollations[0],
-									  peraggstate->sortNullsFirst[0],
-									  work_mem, false) :
-				tuplesort_begin_heap(peraggstate->evaldesc,
-									 peraggstate->numSortCols,
-									 peraggstate->sortColIdx,
-									 peraggstate->sortOperators,
-									 peraggstate->sortCollations,
-									 peraggstate->sortNullsFirst,
-									 work_mem, false);
+				/*
+				 * We use a plain Datum sorter when there's a single input column;
+				 * otherwise sort the full tuple.  (See comments for
+				 * process_ordered_aggregate_single.)
+				 */
+				peraggstate->sortstate[i] =
+					(peraggstate->numInputs == 1) ?
+					tuplesort_begin_datum(peraggstate->evaldesc->attrs[0]->atttypid,
+										  peraggstate->sortOperators[0],
+										  peraggstate->sortCollations[0],
+										  peraggstate->sortNullsFirst[0],
+										  work_mem, false) :
+					tuplesort_begin_heap(peraggstate->evaldesc,
+										 peraggstate->numSortCols,
+										 peraggstate->sortColIdx,
+										 peraggstate->sortOperators,
+										 peraggstate->sortCollations,
+										 peraggstate->sortNullsFirst,
+										 work_mem, false);
+			}
 		}
 
-		/*
-		 * (Re)set transValue to the initial value.
-		 *
-		 * Note that when the initial value is pass-by-ref, we must copy it
-		 * (into the aggcontext) since we will pfree the transValue later.
+		/*
+		 * If ROLLUP is present, we need to iterate over all the grouping
+		 * sets tracked by the current aggstate.  If ROLLUP is not present,
+		 * there is only one group state associated with the current
+		 * aggstate.
 		 */
-		if (peraggstate->initValueIsNull)
-			pergroupstate->transValue = peraggstate->initValue;
-		else
+
+		for (i = 0; i < numReinitialize; i++)
 		{
-			MemoryContext oldContext;
+			AggStatePerGroup pergroupstate = &pergroup[aggno + (i * (aggstate->numaggs))];
 
-			oldContext = MemoryContextSwitchTo(aggstate->aggcontext);
-			pergroupstate->transValue = datumCopy(peraggstate->initValue,
-												  peraggstate->transtypeByVal,
-												  peraggstate->transtypeLen);
-			MemoryContextSwitchTo(oldContext);
-		}
-		pergroupstate->transValueIsNull = peraggstate->initValueIsNull;
+			/*
+			 * (Re)set transValue to the initial value.
+			 *
+			 * Note that when the initial value is pass-by-ref, we must copy it
+			 * (into the aggcontext) since we will pfree the transValue later.
+			 */
+			if (peraggstate->initValueIsNull)
+				pergroupstate->transValue = peraggstate->initValue;
+			else
+			{
+				MemoryContext oldContext;
 
-		/*
-		 * If the initial value for the transition state doesn't exist in the
-		 * pg_aggregate table then we will let the first non-NULL value
-		 * returned from the outer procNode become the initial value. (This is
-		 * useful for aggregates like max() and min().) The noTransValue flag
-		 * signals that we still need to do this.
-		 */
-		pergroupstate->noTransValue = peraggstate->initValueIsNull;
+				oldContext = MemoryContextSwitchTo(aggstate->aggcontext[i]->ecxt_per_tuple_memory);
+				pergroupstate->transValue = datumCopy(peraggstate->initValue,
+													  peraggstate->transtypeByVal,
+													  peraggstate->transtypeLen);
+				MemoryContextSwitchTo(oldContext);
+			}
+			pergroupstate->transValueIsNull = peraggstate->initValueIsNull;
+
+			/*
+			 * If the initial value for the transition state doesn't exist in the
+			 * pg_aggregate table then we will let the first non-NULL value
+			 * returned from the outer procNode become the initial value. (This is
+			 * useful for aggregates like max() and min().) The noTransValue flag
+			 * signals that we still need to do this.
+			 */
+			pergroupstate->noTransValue = peraggstate->initValueIsNull;
+		}
 	}
 }
 
 /*
- * Given new input value(s), advance the transition function of an aggregate.
+ * Given new input value(s), advance the transition function of one aggregate
+ * within one grouping set only (already set in aggstate->current_set).
  *
  * The new values (and null flags) have been preloaded into argument positions
  * 1 and up in peraggstate->transfn_fcinfo, so that we needn't copy them again
@@ -455,7 +476,7 @@ advance_transition_function(AggState *aggstate,
 			 * We must copy the datum into aggcontext if it is pass-by-ref. We
 			 * do not need to pfree the old transValue, since it's NULL.
 			 */
-			oldContext = MemoryContextSwitchTo(aggstate->aggcontext);
+			oldContext = MemoryContextSwitchTo(aggstate->aggcontext[aggstate->current_set]->ecxt_per_tuple_memory);
 			pergroupstate->transValue = datumCopy(fcinfo->arg[1],
 												  peraggstate->transtypeByVal,
 												  peraggstate->transtypeLen);
@@ -503,7 +524,7 @@ advance_transition_function(AggState *aggstate,
 	{
 		if (!fcinfo->isnull)
 		{
-			MemoryContextSwitchTo(aggstate->aggcontext);
+			MemoryContextSwitchTo(aggstate->aggcontext[aggstate->current_set]->ecxt_per_tuple_memory);
 			newVal = datumCopy(newVal,
 							   peraggstate->transtypeByVal,
 							   peraggstate->transtypeLen);
@@ -530,11 +551,13 @@ static void
 advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 {
 	int			aggno;
+	int         groupno = 0;
+	int         numGroupingSets = Max(aggstate->numsets, 1);
+	int         numAggs = aggstate->numaggs;
 
-	for (aggno = 0; aggno < aggstate->numaggs; aggno++)
+	for (aggno = 0; aggno < numAggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &aggstate->peragg[aggno];
-		AggStatePerGroup pergroupstate = &pergroup[aggno];
 		ExprState  *filter = peraggstate->aggrefstate->aggfilter;
 		int			numTransInputs = peraggstate->numTransInputs;
 		int			i;
@@ -578,13 +601,16 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 					continue;
 			}
 
-			/* OK, put the tuple into the tuplesort object */
-			if (peraggstate->numInputs == 1)
-				tuplesort_putdatum(peraggstate->sortstate,
-								   slot->tts_values[0],
-								   slot->tts_isnull[0]);
-			else
-				tuplesort_puttupleslot(peraggstate->sortstate, slot);
+			for (groupno = 0; groupno < numGroupingSets; groupno++)
+			{
+				/* OK, put the tuple into the tuplesort object */
+				if (peraggstate->numInputs == 1)
+					tuplesort_putdatum(peraggstate->sortstate[groupno],
+									   slot->tts_values[0],
+									   slot->tts_isnull[0]);
+				else
+					tuplesort_puttupleslot(peraggstate->sortstate[groupno], slot);
+			}
 		}
 		else
 		{
@@ -600,7 +626,14 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 				fcinfo->argnull[i + 1] = slot->tts_isnull[i];
 			}
 
-			advance_transition_function(aggstate, peraggstate, pergroupstate);
+			for (groupno = 0; groupno < numGroupingSets; groupno++)
+			{
+				AggStatePerGroup pergroupstate = &pergroup[aggno + (groupno * numAggs)];
+
+				aggstate->current_set = groupno;
+
+				advance_transition_function(aggstate, peraggstate, pergroupstate);
+			}
 		}
 	}
 }
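
[Reviewer note, not part of the patch: advance_aggregates() above addresses the per-group transition states for all grouping sets through one flat array. A minimal sketch of that indexing, with invented names:]

```c
#include <assert.h>

/*
 * Toy model of the pergroup layout used above: the state for aggregate
 * "aggno" within grouping set "groupno" lives at
 * pergroup[aggno + groupno * numAggs], so the states belonging to one
 * grouping set are contiguous.
 */
static int
pergroup_slot(int aggno, int groupno, int numAggs)
{
	return aggno + groupno * numAggs;
}
```

[With three aggregates and a four-set ROLLUP, slot 5 is the third aggregate of the second grouping set, and no two (aggno, groupno) pairs collide.]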
@@ -623,6 +656,9 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
  * is around 300% faster.  (The speedup for by-reference types is less
  * but still noticeable.)
  *
+ * This function handles only one grouping set (already set in
+ * aggstate->current_set).
+ *
  * When called, CurrentMemoryContext should be the per-query context.
  */
 static void
@@ -642,7 +678,7 @@ process_ordered_aggregate_single(AggState *aggstate,
 
 	Assert(peraggstate->numDistinctCols < 2);
 
-	tuplesort_performsort(peraggstate->sortstate);
+	tuplesort_performsort(peraggstate->sortstate[aggstate->current_set]);
 
 	/* Load the column into argument 1 (arg 0 will be transition value) */
 	newVal = fcinfo->arg + 1;
@@ -654,7 +690,7 @@ process_ordered_aggregate_single(AggState *aggstate,
 	 * pfree them when they are no longer needed.
 	 */
 
-	while (tuplesort_getdatum(peraggstate->sortstate, true,
+	while (tuplesort_getdatum(peraggstate->sortstate[aggstate->current_set], true,
 							  newVal, isNull))
 	{
 		/*
@@ -698,8 +734,8 @@ process_ordered_aggregate_single(AggState *aggstate,
 	if (!oldIsNull && !peraggstate->inputtypeByVal)
 		pfree(DatumGetPointer(oldVal));
 
-	tuplesort_end(peraggstate->sortstate);
-	peraggstate->sortstate = NULL;
+	tuplesort_end(peraggstate->sortstate[aggstate->current_set]);
+	peraggstate->sortstate[aggstate->current_set] = NULL;
 }
 
 /*
@@ -709,6 +745,9 @@ process_ordered_aggregate_single(AggState *aggstate,
  * sort, read out the values in sorted order, and run the transition
  * function on each value (applying DISTINCT if appropriate).
  *
+ * This function handles only one grouping set (already set in
+ * aggstate->current_set).
+ *
  * When called, CurrentMemoryContext should be the per-query context.
  */
 static void
@@ -725,13 +764,13 @@ process_ordered_aggregate_multi(AggState *aggstate,
 	bool		haveOldValue = false;
 	int			i;
 
-	tuplesort_performsort(peraggstate->sortstate);
+	tuplesort_performsort(peraggstate->sortstate[aggstate->current_set]);
 
 	ExecClearTuple(slot1);
 	if (slot2)
 		ExecClearTuple(slot2);
 
-	while (tuplesort_gettupleslot(peraggstate->sortstate, true, slot1))
+	while (tuplesort_gettupleslot(peraggstate->sortstate[aggstate->current_set], true, slot1))
 	{
 		/*
 		 * Extract the first numTransInputs columns as datums to pass to the
@@ -779,8 +818,8 @@ process_ordered_aggregate_multi(AggState *aggstate,
 	if (slot2)
 		ExecClearTuple(slot2);
 
-	tuplesort_end(peraggstate->sortstate);
-	peraggstate->sortstate = NULL;
+	tuplesort_end(peraggstate->sortstate[aggstate->current_set]);
+	peraggstate->sortstate[aggstate->current_set] = NULL;
 }
 
 /*
@@ -832,7 +871,7 @@ finalize_aggregate(AggState *aggstate,
 		/* set up aggstate->curperagg for AggGetAggref() */
 		aggstate->curperagg = peraggstate;
 
-		InitFunctionCallInfoData(fcinfo, &(peraggstate->finalfn),
+		InitFunctionCallInfoData(fcinfo, &peraggstate->finalfn,
 								 numFinalArgs,
 								 peraggstate->aggCollation,
 								 (void *) aggstate, NULL);
@@ -916,7 +955,8 @@ find_unaggregated_cols_walker(Node *node, Bitmapset **colnos)
 		*colnos = bms_add_member(*colnos, var->varattno);
 		return false;
 	}
-	if (IsA(node, Aggref))		/* do not descend into aggregate exprs */
+	/* do not descend into aggregate or grouping exprs */
+	if (IsA(node, Aggref) || IsA(node, Grouping))
 		return false;
 	return expression_tree_walker(node, find_unaggregated_cols_walker,
 								  (void *) colnos);
@@ -946,7 +986,7 @@ build_hash_table(AggState *aggstate)
 											  aggstate->hashfunctions,
 											  node->numGroups,
 											  entrysize,
-											  aggstate->aggcontext,
+											  aggstate->aggcontext[0]->ecxt_per_tuple_memory,
 											  tmpmem);
 }
 
@@ -1057,7 +1097,7 @@ lookup_hash_entry(AggState *aggstate, TupleTableSlot *inputslot)
 	if (isnew)
 	{
 		/* initialize aggregates for new tuple group */
-		initialize_aggregates(aggstate, aggstate->peragg, entry->pergroup);
+		initialize_aggregates(aggstate, aggstate->peragg, entry->pergroup, 0);
 	}
 
 	return entry;
@@ -1131,7 +1171,13 @@ agg_retrieve_direct(AggState *aggstate)
 	AggStatePerGroup pergroup;
 	TupleTableSlot *outerslot;
 	TupleTableSlot *firstSlot;
-	int			aggno;
+	int			   aggno;
+	bool           hasRollup = aggstate->numsets > 0;
+	int            numGroupingSets = Max(aggstate->numsets, 1);
+	int            currentGroup = 0;
+	int            currentSize = 0;
+	int            numReset = 1;
+	int            i;
 
 	/*
 	 * get state info from node
@@ -1150,131 +1196,233 @@ agg_retrieve_direct(AggState *aggstate)
 	/*
 	 * We loop retrieving groups until we find one matching
 	 * aggstate->ss.ps.qual
+	 *
+	 * For grouping sets, we have the invariant that aggstate->projected_set is
+	 * either -1 (initial call) or the index (starting from 0) in gset_lengths
+	 * for the group we just completed (either by projecting a row or by
+	 * discarding it in the qual).
 	 */
 	while (!aggstate->agg_done)
 	{
 		/*
-		 * If we don't already have the first tuple of the new group, fetch it
-		 * from the outer plan.
-		 */
-		if (aggstate->grp_firstTuple == NULL)
-		{
-			outerslot = ExecProcNode(outerPlan);
-			if (!TupIsNull(outerslot))
-			{
-				/*
-				 * Make a copy of the first input tuple; we will use this for
-				 * comparisons (in group mode) and for projection.
-				 */
-				aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
-			}
-			else
-			{
-				/* outer plan produced no tuples at all */
-				aggstate->agg_done = true;
-				/* If we are grouping, we should produce no tuples too */
-				if (node->aggstrategy != AGG_PLAIN)
-					return NULL;
-			}
-		}
-
-		/*
 		 * Clear the per-output-tuple context for each group, as well as
 		 * aggcontext (which contains any pass-by-ref transvalues of the old
 		 * group).  We also clear any child contexts of the aggcontext; some
 		 * aggregate functions store working state in such contexts.
 		 *
 		 * We use ReScanExprContext not just ResetExprContext because we want
 		 * any registered shutdown callbacks to be called.  That allows
 		 * aggregate functions to ensure they've cleaned up any non-memory
 		 * resources.
 		 */
 		ReScanExprContext(econtext);
 
-		MemoryContextResetAndDeleteChildren(aggstate->aggcontext);
+		if (aggstate->projected_set >= 0 && aggstate->projected_set < numGroupingSets)
+			numReset = aggstate->projected_set + 1;
+		else
+			numReset = numGroupingSets;
+
+		for (i = 0; i < numReset; i++)
+		{
+			ReScanExprContext(aggstate->aggcontext[i]);
+			MemoryContextDeleteChildren(aggstate->aggcontext[i]->ecxt_per_tuple_memory);
+		}
 
-		/*
-		 * Initialize working state for a new input tuple group
+		/* Check if input is complete and there are no more groups to project. */
+		if (aggstate->input_done &&
+			aggstate->projected_set >= (numGroupingSets - 1))
+		{
+			aggstate->agg_done = true;
+			break;
+		}
+
+		if (aggstate->projected_set >= 0 && aggstate->projected_set < (numGroupingSets - 1))
+			currentSize = aggstate->gset_lengths[aggstate->projected_set + 1];
+		else
+			currentSize = 0;
+
+		/*-
+		 * If a subgroup for the current grouping set is present, project it.
+		 *
+		 * We have a new group if:
+		 *  - we're out of input but haven't projected all grouping sets
+		 *    (checked above)
+		 * OR
+		 *    - we already projected a row that wasn't from the last grouping
+		 *      set
+		 *    AND
+		 *    - the next grouping set has at least one grouping column (since
+		 *      empty grouping sets project only once input is exhausted)
+		 *    AND
+		 *    - the previous and pending rows differ on the grouping columns
+		 *      of the next grouping set
 		 */
-		initialize_aggregates(aggstate, peragg, pergroup);
+		if (aggstate->input_done
+			|| (node->aggstrategy == AGG_SORTED
+				&& aggstate->projected_set != -1
+				&& aggstate->projected_set < (numGroupingSets - 1)
+				&& currentSize > 0
+				&& !execTuplesMatch(econtext->ecxt_outertuple,
+									tmpcontext->ecxt_outertuple,
+									currentSize,
+									node->grpColIdx,
+									aggstate->eqfunctions,
+									tmpcontext->ecxt_per_tuple_memory)))
+		{
+			++aggstate->projected_set;
 
-		if (aggstate->grp_firstTuple != NULL)
+			Assert(aggstate->projected_set < numGroupingSets);
+			Assert(currentSize > 0 || aggstate->input_done);
+		}
+		else
 		{
 			/*
-			 * Store the copied first input tuple in the tuple table slot
-			 * reserved for it.  The tuple will be deleted when it is cleared
-			 * from the slot.
+			 * We no longer care which group we just projected; the next
+			 * projection will always be the first (or only) grouping set
+			 * (unless the input proves to be empty).
 			 */
-			ExecStoreTuple(aggstate->grp_firstTuple,
-						   firstSlot,
-						   InvalidBuffer,
-						   true);
-			aggstate->grp_firstTuple = NULL;	/* don't keep two pointers */
-
-			/* set up for first advance_aggregates call */
-			tmpcontext->ecxt_outertuple = firstSlot;
+			aggstate->projected_set = 0;
 
 			/*
-			 * Process each outer-plan tuple, and then fetch the next one,
-			 * until we exhaust the outer plan or cross a group boundary.
+			 * If we don't already have the first tuple of the new group, fetch it
+			 * from the outer plan.
 			 */
-			for (;;)
+			if (aggstate->grp_firstTuple == NULL)
 			{
-				advance_aggregates(aggstate, pergroup);
-
-				/* Reset per-input-tuple context after each tuple */
-				ResetExprContext(tmpcontext);
-
 				outerslot = ExecProcNode(outerPlan);
-				if (TupIsNull(outerslot))
+				if (!TupIsNull(outerslot))
 				{
-					/* no more outer-plan tuples available */
-					aggstate->agg_done = true;
-					break;
+					/*
+					 * Make a copy of the first input tuple; we will use this for
+					 * comparisons (in group mode) and for projection.
+					 */
+					aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
 				}
-				/* set up for next advance_aggregates call */
-				tmpcontext->ecxt_outertuple = outerslot;
+				else
+				{
+					/* outer plan produced no tuples at all */
+					if (hasRollup)
+					{
+						/*
+						 * If there was no input at all, we need to project
+						 * rows only if there are grouping sets of size 0.
+						 * Note that this implies that there can't be any
+						 * references to ungrouped Vars, which would otherwise
+						 * cause issues with the empty output slot.
+						 */
+						aggstate->input_done = true;
+
+						while (aggstate->gset_lengths[aggstate->projected_set] > 0)
+						{
+							aggstate->projected_set += 1;
+							if (aggstate->projected_set >= numGroupingSets)
+							{
+								aggstate->agg_done = true;
+								return NULL;
+							}
+						}
+					}
+					else
+					{
+						aggstate->agg_done = true;
+						/* If we are grouping, we should produce no tuples too */
+						if (node->aggstrategy != AGG_PLAIN)
+							return NULL;
+					}
+				}
+			}
+
+			/*
+			 * Initialize working state for a new input tuple group
+			 */
+			initialize_aggregates(aggstate, peragg, pergroup, numReset);
+
+			if (aggstate->grp_firstTuple != NULL)
+			{
+				/*
+				 * Store the copied first input tuple in the tuple table slot
+				 * reserved for it.  The tuple will be deleted when it is cleared
+				 * from the slot.
+				 */
+				ExecStoreTuple(aggstate->grp_firstTuple,
+							   firstSlot,
+							   InvalidBuffer,
+							   true);
+				aggstate->grp_firstTuple = NULL;	/* don't keep two pointers */
+
+				/* set up for first advance_aggregates call */
+				tmpcontext->ecxt_outertuple = firstSlot;
 
 				/*
-				 * If we are grouping, check whether we've crossed a group
-				 * boundary.
+				 * Process each outer-plan tuple, and then fetch the next one,
+				 * until we exhaust the outer plan or cross a group boundary.
 				 */
-				if (node->aggstrategy == AGG_SORTED)
+				for (;;)
 				{
-					if (!execTuplesMatch(firstSlot,
-										 outerslot,
-										 node->numCols, node->grpColIdx,
-										 aggstate->eqfunctions,
-										 tmpcontext->ecxt_per_tuple_memory))
+					advance_aggregates(aggstate, pergroup);
+
+					/* Reset per-input-tuple context after each tuple */
+					ResetExprContext(tmpcontext);
+
+					outerslot = ExecProcNode(outerPlan);
+					if (TupIsNull(outerslot))
 					{
-						/*
-						 * Save the first input tuple of the next group.
-						 */
-						aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
-						break;
+						/* no more outer-plan tuples available */
+						if (hasRollup)
+						{
+							aggstate->input_done = true;
+							break;
+						}
+						else
+						{
+							aggstate->agg_done = true;
+							break;
+						}
+					}
+					/* set up for next advance_aggregates call */
+					tmpcontext->ecxt_outertuple = outerslot;
+
+					/*
+					 * If we are grouping, check whether we've crossed a group
+					 * boundary.
+					 */
+					if (node->aggstrategy == AGG_SORTED)
+					{
+						if (!execTuplesMatch(firstSlot,
+											 outerslot,
+											 node->numCols,
+											 node->grpColIdx,
+											 aggstate->eqfunctions,
+											 tmpcontext->ecxt_per_tuple_memory))
+						{
+							aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
+							break;
+						}
 					}
 				}
 			}
+
+			/*
+			 * Use the representative input tuple for any references to
+			 * non-aggregated input columns in aggregate direct args, the node
+			 * qual, and the tlist.  (If we are not grouping, and there are no
+			 * input rows at all, we will come here with an empty firstSlot ...
+			 * but if not grouping, there can't be any references to
+			 * non-aggregated input columns, so no problem.)
+			 */
+			econtext->ecxt_outertuple = firstSlot;
 		}
 
-		/*
-		 * Use the representative input tuple for any references to
-		 * non-aggregated input columns in aggregate direct args, the node
-		 * qual, and the tlist.  (If we are not grouping, and there are no
-		 * input rows at all, we will come here with an empty firstSlot ...
-		 * but if not grouping, there can't be any references to
-		 * non-aggregated input columns, so no problem.)
-		 */
-		econtext->ecxt_outertuple = firstSlot;
+		Assert(aggstate->projected_set >= 0);
+
+		aggstate->current_set = currentGroup = aggstate->projected_set;
 
-		/*
-		 * Done scanning input tuple group. Finalize each aggregate
-		 * calculation, and stash results in the per-output-tuple context.
-		 */
 		for (aggno = 0; aggno < aggstate->numaggs; aggno++)
 		{
 			AggStatePerAgg peraggstate = &peragg[aggno];
-			AggStatePerGroup pergroupstate = &pergroup[aggno];
+			AggStatePerGroup pergroupstate;
+
+			pergroupstate = &pergroup[aggno + (currentGroup * (aggstate->numaggs))];
 
 			if (peraggstate->numSortCols > 0)
 			{
@@ -1292,6 +1440,9 @@ agg_retrieve_direct(AggState *aggstate)
 							   &aggvalues[aggno], &aggnulls[aggno]);
 		}
 
+		if (hasRollup)
+			econtext->grouped_cols = aggstate->grouped_cols[currentGroup];
+
 		/*
 		 * Check the qual (HAVING clause); if the group does not match, ignore
 		 * it and loop back to try to process another group.
@@ -1495,6 +1646,10 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	int			numaggs,
 				aggno;
 	ListCell   *l;
+	int        numGroupingSets = 1;
+	int        currentsortno = 0;
+	int        i = 0;
+	int        j = 0;
 
 	/* check for unsupported flags */
 	Assert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK)));
@@ -1508,38 +1663,69 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 
 	aggstate->aggs = NIL;
 	aggstate->numaggs = 0;
+	aggstate->numsets = 0;
 	aggstate->eqfunctions = NULL;
 	aggstate->hashfunctions = NULL;
+	aggstate->projected_set = -1;
+	aggstate->current_set = 0;
 	aggstate->peragg = NULL;
 	aggstate->curperagg = NULL;
 	aggstate->agg_done = false;
+	aggstate->input_done = false;
 	aggstate->pergroup = NULL;
 	aggstate->grp_firstTuple = NULL;
 	aggstate->hashtable = NULL;
 
+	if (node->groupingSets)
+	{
+		Assert(node->aggstrategy != AGG_HASHED);
+
+		numGroupingSets = list_length(node->groupingSets);
+		aggstate->numsets = numGroupingSets;
+		aggstate->gset_lengths = palloc(numGroupingSets * sizeof(int));
+		aggstate->grouped_cols = palloc(numGroupingSets * sizeof(Bitmapset *));
+
+		i = 0;
+		foreach(l, node->groupingSets)
+		{
+			int current_length = list_length(lfirst(l));
+			Bitmapset *cols = NULL;
+
+			/* planner forces this to be correct */
+			for (j = 0; j < current_length; ++j)
+				cols = bms_add_member(cols, node->grpColIdx[j]);
+
+			aggstate->grouped_cols[i] = cols;
+			aggstate->gset_lengths[i] = current_length;
+			++i;
+		}
+	}
+
+	aggstate->aggcontext = (ExprContext **) palloc0(sizeof(ExprContext *) * numGroupingSets);
+
 	/*
-	 * Create expression contexts.  We need two, one for per-input-tuple
-	 * processing and one for per-output-tuple processing.  We cheat a little
-	 * by using ExecAssignExprContext() to build both.
+	 * Create expression contexts.  We need three or more, one for
+	 * per-input-tuple processing, one for per-output-tuple processing, and one
+	 * for each grouping set.  The per-tuple memory context of the
+	 * per-grouping-set ExprContexts replaces the standalone memory context
+	 * formerly used to hold transition values.  We cheat a little by using
+	 * ExecAssignExprContext() to build all of them.
+	 *
+	 * NOTE: the details of what is stored in aggcontext and what is stored in
+	 * the regular per-query memory context are driven by a simple decision: we
+	 * want to reset the aggcontext at group boundaries (if not hashing) and in
+	 * ExecReScanAgg to recover no-longer-wanted space.
 	 */
 	ExecAssignExprContext(estate, &aggstate->ss.ps);
 	aggstate->tmpcontext = aggstate->ss.ps.ps_ExprContext;
-	ExecAssignExprContext(estate, &aggstate->ss.ps);
 
-	/*
-	 * We also need a long-lived memory context for holding hashtable data
-	 * structures and transition values.  NOTE: the details of what is stored
-	 * in aggcontext and what is stored in the regular per-query memory
-	 * context are driven by a simple decision: we want to reset the
-	 * aggcontext at group boundaries (if not hashing) and in ExecReScanAgg to
-	 * recover no-longer-wanted space.
-	 */
-	aggstate->aggcontext =
-		AllocSetContextCreate(CurrentMemoryContext,
-							  "AggContext",
-							  ALLOCSET_DEFAULT_MINSIZE,
-							  ALLOCSET_DEFAULT_INITSIZE,
-							  ALLOCSET_DEFAULT_MAXSIZE);
+	for (i = 0; i < numGroupingSets; ++i)
+	{
+		ExecAssignExprContext(estate, &aggstate->ss.ps);
+		aggstate->aggcontext[i] = aggstate->ss.ps.ps_ExprContext;
+	}
+
+	ExecAssignExprContext(estate, &aggstate->ss.ps);
 
 	/*
 	 * tuple table initialization
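
[Reviewer note, not part of the patch: the grouped_cols setup earlier in this hunk relies on each grouping set being a prefix of grpColIdx ("planner forces this to be correct"). A toy version, with a bitmask standing in for the Bitmapset:]

```c
#include <assert.h>

/*
 * Toy version of the grouped_cols construction in ExecInitAgg: since each
 * grouping set is a prefix of grpColIdx, its column set is just the first
 * set_length entries.
 */
static unsigned
grouped_cols_mask(const int *grpColIdx, int set_length)
{
	unsigned	cols = 0;
	int			j;

	for (j = 0; j < set_length; j++)
		cols |= 1u << grpColIdx[j];
	return cols;
}
```

[For GROUP BY ROLLUP(a, b) with gset_lengths = {2, 1, 0}, this produces the masks for (a,b), (a), and () in order.]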
@@ -1645,7 +1831,8 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	{
 		AggStatePerGroup pergroup;
 
-		pergroup = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData) * numaggs);
+		pergroup = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData) * numaggs * numGroupingSets);
+
 		aggstate->pergroup = pergroup;
 	}
 
@@ -1708,7 +1895,10 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 		/* Begin filling in the peraggstate data */
 		peraggstate->aggrefstate = aggrefstate;
 		peraggstate->aggref = aggref;
-		peraggstate->sortstate = NULL;
+		peraggstate->sortstate = (Tuplesortstate **)
+			palloc0(sizeof(Tuplesortstate *) * numGroupingSets);
+
+		for (currentsortno = 0; currentsortno < numGroupingSets; currentsortno++)
+			peraggstate->sortstate[currentsortno] = NULL;
 
 		/* Fetch the pg_aggregate row */
 		aggTuple = SearchSysCache1(AGGFNOID,
@@ -2016,31 +2206,35 @@ ExecEndAgg(AggState *node)
 {
 	PlanState  *outerPlan;
 	int			aggno;
+	int			numGroupingSets = Max(node->numsets, 1);
+	int			i = 0;
 
 	/* Make sure we have closed any open tuplesorts */
 	for (aggno = 0; aggno < node->numaggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &node->peragg[aggno];
 
-		if (peraggstate->sortstate)
-			tuplesort_end(peraggstate->sortstate);
+		for (i = 0; i < numGroupingSets; i++)
+		{
+			if (peraggstate->sortstate[i])
+				tuplesort_end(peraggstate->sortstate[i]);
+		}
 	}
 
 	/* And ensure any agg shutdown callbacks have been called */
-	ReScanExprContext(node->ss.ps.ps_ExprContext);
+	for (i = 0; i < numGroupingSets; ++i)
+		ReScanExprContext(node->aggcontext[i]);
 
 	/*
-	 * Free both the expr contexts.
+	 * We don't actually free any ExprContexts here (see comment in
+	 * ExecFreeExprContext); just unlinking the output one from the plan
+	 * node suffices.
 	 */
 	ExecFreeExprContext(&node->ss.ps);
-	node->ss.ps.ps_ExprContext = node->tmpcontext;
-	ExecFreeExprContext(&node->ss.ps);
 
 	/* clean up tuple table */
 	ExecClearTuple(node->ss.ss_ScanTupleSlot);
 
-	MemoryContextDelete(node->aggcontext);
-
 	outerPlan = outerPlanState(node);
 	ExecEndNode(outerPlan);
 }
@@ -2049,13 +2243,17 @@ void
 ExecReScanAgg(AggState *node)
 {
 	ExprContext *econtext = node->ss.ps.ps_ExprContext;
+	Agg		   *aggnode = (Agg *) node->ss.ps.plan;
 	int			aggno;
+	int         numGroupingSets = Max(node->numsets, 1);
+	int         groupno;
+	int         i;
 
 	node->agg_done = false;
 
 	node->ss.ps.ps_TupFromTlist = false;
 
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+	if (aggnode->aggstrategy == AGG_HASHED)
 	{
 		/*
 		 * In the hashed case, if we haven't yet built the hash table then we
@@ -2081,14 +2279,35 @@ ExecReScanAgg(AggState *node)
 	/* Make sure we have closed any open tuplesorts */
 	for (aggno = 0; aggno < node->numaggs; aggno++)
 	{
-		AggStatePerAgg peraggstate = &node->peragg[aggno];
+		for (groupno = 0; groupno < numGroupingSets; groupno++)
+		{
+			AggStatePerAgg peraggstate = &node->peragg[aggno];
 
-		if (peraggstate->sortstate)
-			tuplesort_end(peraggstate->sortstate);
-		peraggstate->sortstate = NULL;
+			if (peraggstate->sortstate[groupno])
+			{
+				tuplesort_end(peraggstate->sortstate[groupno]);
+				peraggstate->sortstate[groupno] = NULL;
+			}
+		}
 	}
 
-	/* We don't need to ReScanExprContext here; ExecReScan already did it */
+	/*
+	 * We don't need to ReScanExprContext the output tuple context here;
+	 * ExecReScan already did it. But we do need to reset our per-grouping-set
+	 * contexts, which may have transvalues stored in them.
+	 *
+	 * Note that with AGG_HASHED, the hash table is allocated in a sub-context
+	 * of the aggcontext. We're going to rebuild the hash table from scratch,
+	 * so we need to use MemoryContextDeleteChildren() to avoid leaking the old
+	 * hash table's memory context header. (ReScanExprContext does the actual
+	 * reset, but it doesn't delete child contexts.)
+	 */
+
+	for (i = 0; i < numGroupingSets; ++i)
+	{
+		ReScanExprContext(node->aggcontext[i]);
+		MemoryContextDeleteChildren(node->aggcontext[i]->ecxt_per_tuple_memory);
+	}
 
 	/* Release first tuple of group, if we have made a copy */
 	if (node->grp_firstTuple != NULL)
@@ -2096,21 +2315,13 @@ ExecReScanAgg(AggState *node)
 		heap_freetuple(node->grp_firstTuple);
 		node->grp_firstTuple = NULL;
 	}
+	ExecClearTuple(node->ss.ss_ScanTupleSlot);
 
 	/* Forget current agg values */
 	MemSet(econtext->ecxt_aggvalues, 0, sizeof(Datum) * node->numaggs);
 	MemSet(econtext->ecxt_aggnulls, 0, sizeof(bool) * node->numaggs);
 
-	/*
-	 * Release all temp storage. Note that with AGG_HASHED, the hash table is
-	 * allocated in a sub-context of the aggcontext. We're going to rebuild
-	 * the hash table from scratch, so we need to use
-	 * MemoryContextResetAndDeleteChildren() to avoid leaking the old hash
-	 * table's memory context header.
-	 */
-	MemoryContextResetAndDeleteChildren(node->aggcontext);
-
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+	if (aggnode->aggstrategy == AGG_HASHED)
 	{
 		/* Rebuild an empty hash table */
 		build_hash_table(node);
@@ -2122,7 +2333,9 @@ ExecReScanAgg(AggState *node)
 		 * Reset the per-group state (in particular, mark transvalues null)
 		 */
 		MemSet(node->pergroup, 0,
-			   sizeof(AggStatePerGroupData) * node->numaggs);
+			   sizeof(AggStatePerGroupData) * node->numaggs * numGroupingSets);
+
+		node->input_done = false;
 	}
 
 	/*
@@ -2150,8 +2363,11 @@ ExecReScanAgg(AggState *node)
  * values could conceivably appear in future.)
  *
  * If aggcontext isn't NULL, the function also stores at *aggcontext the
- * identity of the memory context that aggregate transition values are
- * being stored in.
+ * identity of the memory context that aggregate transition values are being
+ * stored in.  Note that the same aggregate call site (flinfo) may be called
+ * interleaved on different transition values in different contexts, so it's
+ * not kosher to cache aggcontext under fn_extra.  It is, however, kosher to
+ * cache it in the transvalue itself (for internal-type transvalues).
  */
 int
 AggCheckCallContext(FunctionCallInfo fcinfo, MemoryContext *aggcontext)
@@ -2159,7 +2375,11 @@ AggCheckCallContext(FunctionCallInfo fcinfo, MemoryContext *aggcontext)
 	if (fcinfo->context && IsA(fcinfo->context, AggState))
 	{
 		if (aggcontext)
-			*aggcontext = ((AggState *) fcinfo->context)->aggcontext;
+		{
+			AggState   *aggstate = (AggState *) fcinfo->context;
+			ExprContext *cxt = aggstate->aggcontext[aggstate->current_set];
+
+			*aggcontext = cxt->ecxt_per_tuple_memory;
+		}
 		return AGG_CONTEXT_AGGREGATE;
 	}
 	if (fcinfo->context && IsA(fcinfo->context, WindowAggState))
@@ -2243,8 +2463,9 @@ AggRegisterCallback(FunctionCallInfo fcinfo,
 	if (fcinfo->context && IsA(fcinfo->context, AggState))
 	{
 		AggState   *aggstate = (AggState *) fcinfo->context;
+		ExprContext *cxt = aggstate->aggcontext[aggstate->current_set];
 
-		RegisterExprContextCallback(aggstate->ss.ps.ps_ExprContext, func, arg);
+		RegisterExprContextCallback(cxt, func, arg);
 
 		return;
 	}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index aa053a0..8ce6411 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -779,6 +779,7 @@ _copyAgg(const Agg *from)
 		COPY_POINTER_FIELD(grpOperators, from->numCols * sizeof(Oid));
 	}
 	COPY_SCALAR_FIELD(numGroups);
+	COPY_NODE_FIELD(groupingSets);
 
 	return newnode;
 }
@@ -1065,6 +1066,59 @@ _copyVar(const Var *from)
 }
 
 /*
+ * _copyGrouping
+ */
+static Grouping *
+_copyGrouping(const Grouping *from)
+{
+	Grouping   *newnode = makeNode(Grouping);
+
+	COPY_NODE_FIELD(args);
+	COPY_NODE_FIELD(refs);
+	COPY_NODE_FIELD(cols);
+	COPY_LOCATION_FIELD(location);
+	COPY_SCALAR_FIELD(agglevelsup);
+
+	return newnode;
+}
+
+/*
+ * _copyGroupedVar
+ */
+static GroupedVar *
+_copyGroupedVar(const GroupedVar *from)
+{
+	GroupedVar		   *newnode = makeNode(GroupedVar);
+
+	COPY_SCALAR_FIELD(varno);
+	COPY_SCALAR_FIELD(varattno);
+	COPY_SCALAR_FIELD(vartype);
+	COPY_SCALAR_FIELD(vartypmod);
+	COPY_SCALAR_FIELD(varcollid);
+	COPY_SCALAR_FIELD(varlevelsup);
+	COPY_SCALAR_FIELD(varnoold);
+	COPY_SCALAR_FIELD(varoattno);
+	COPY_LOCATION_FIELD(location);
+
+	return newnode;
+}
+
+/*
+ * _copyGroupingSet
+ */
+static GroupingSet *
+_copyGroupingSet(const GroupingSet *from)
+{
+	GroupingSet		   *newnode = makeNode(GroupingSet);
+
+	COPY_SCALAR_FIELD(kind);
+	COPY_NODE_FIELD(content);
+	COPY_LOCATION_FIELD(location);
+
+	return newnode;
+}
+
+/*
  * _copyConst
  */
 static Const *
@@ -2495,6 +2549,7 @@ _copyQuery(const Query *from)
 	COPY_NODE_FIELD(withCheckOptions);
 	COPY_NODE_FIELD(returningList);
 	COPY_NODE_FIELD(groupClause);
+	COPY_NODE_FIELD(groupingSets);
 	COPY_NODE_FIELD(havingQual);
 	COPY_NODE_FIELD(windowClause);
 	COPY_NODE_FIELD(distinctClause);
@@ -4079,6 +4134,15 @@ copyObject(const void *from)
 		case T_Var:
 			retval = _copyVar(from);
 			break;
+		case T_GroupedVar:
+			retval = _copyGroupedVar(from);
+			break;
+		case T_Grouping:
+			retval = _copyGrouping(from);
+			break;
+		case T_GroupingSet:
+			retval = _copyGroupingSet(from);
+			break;
 		case T_Const:
 			retval = _copyConst(from);
 			break;
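The Grouping node added here carries the arguments of the spec's GROUPING() operation, whose result is a bit mask: one bit per argument, with the first argument supplying the most significant bit, set when that column is not part of the grouping set that produced the row. A tiny Python sketch of that semantics (`grouping_value` and its arguments are illustrative, not the patch's evaluation code):

```python
def grouping_value(args, grouped_cols):
    """GROUPING(args...) per the SQL spec: a bit is 1 when the
    corresponding argument is NOT in the current grouping set
    (i.e. its output value is a grouping NULL), 0 when it is."""
    result = 0
    for col in args:
        result = (result << 1) | (0 if col in grouped_cols else 1)
    return result
```

So for GROUP BY ROLLUP(a, b), the subtotal rows that aggregate over b report GROUPING(a, b) = 1, and the grand-total row reports 3.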
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 719923e..0366088 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -153,6 +153,47 @@ _equalVar(const Var *a, const Var *b)
 }
 
 static bool
+_equalGrouping(const Grouping *a, const Grouping *b)
+{
+	COMPARE_NODE_FIELD(args);
+
+	/*
+	 * We must not compare the refs or cols fields.
+	 */
+
+	COMPARE_LOCATION_FIELD(location);
+	COMPARE_SCALAR_FIELD(agglevelsup);
+
+	return true;
+}
+
+static bool
+_equalGroupedVar(const GroupedVar *a, const GroupedVar *b)
+{
+	COMPARE_SCALAR_FIELD(varno);
+	COMPARE_SCALAR_FIELD(varattno);
+	COMPARE_SCALAR_FIELD(vartype);
+	COMPARE_SCALAR_FIELD(vartypmod);
+	COMPARE_SCALAR_FIELD(varcollid);
+	COMPARE_SCALAR_FIELD(varlevelsup);
+	COMPARE_SCALAR_FIELD(varnoold);
+	COMPARE_SCALAR_FIELD(varoattno);
+	COMPARE_LOCATION_FIELD(location);
+
+	return true;
+}
+
+static bool
+_equalGroupingSet(const GroupingSet *a, const GroupingSet *b)
+{
+	COMPARE_SCALAR_FIELD(kind);
+	COMPARE_NODE_FIELD(content);
+	COMPARE_LOCATION_FIELD(location);
+
+	return true;
+}
+
+static bool
 _equalConst(const Const *a, const Const *b)
 {
 	COMPARE_SCALAR_FIELD(consttype);
@@ -864,6 +905,7 @@ _equalQuery(const Query *a, const Query *b)
 	COMPARE_NODE_FIELD(withCheckOptions);
 	COMPARE_NODE_FIELD(returningList);
 	COMPARE_NODE_FIELD(groupClause);
+	COMPARE_NODE_FIELD(groupingSets);
 	COMPARE_NODE_FIELD(havingQual);
 	COMPARE_NODE_FIELD(windowClause);
 	COMPARE_NODE_FIELD(distinctClause);
@@ -2556,6 +2598,15 @@ equal(const void *a, const void *b)
 		case T_Var:
 			retval = _equalVar(a, b);
 			break;
+		case T_GroupedVar:
+			retval = _equalGroupedVar(a, b);
+			break;
+		case T_Grouping:
+			retval = _equalGrouping(a, b);
+			break;
+		case T_GroupingSet:
+			retval = _equalGroupingSet(a, b);
+			break;
 		case T_Const:
 			retval = _equalConst(a, b);
 			break;
diff --git a/src/backend/nodes/list.c b/src/backend/nodes/list.c
index 5c09d2f..f878d1f 100644
--- a/src/backend/nodes/list.c
+++ b/src/backend/nodes/list.c
@@ -823,6 +823,32 @@ list_intersection(const List *list1, const List *list2)
 }
 
 /*
+ * Like list_intersection, but operates on lists of integers.
+ */
+List *
+list_intersection_int(const List *list1, const List *list2)
+{
+	List	   *result;
+	const ListCell *cell;
+
+	if (list1 == NIL || list2 == NIL)
+		return NIL;
+
+	Assert(IsIntegerList(list1));
+	Assert(IsIntegerList(list2));
+
+	result = NIL;
+	foreach(cell, list1)
+	{
+		if (list_member_int(list2, lfirst_int(cell)))
+			result = lappend_int(result, lfirst_int(cell));
+	}
+
+	check_list_invariants(result);
+	return result;
+}
+
+/*
  * Return a list that contains all the cells in list1 that are not in
  * list2. The returned list is freshly allocated via palloc(), but the
  * cells themselves point to the same objects as the cells of the
diff --git a/src/backend/nodes/makefuncs.c b/src/backend/nodes/makefuncs.c
index da59c58..e930cef 100644
--- a/src/backend/nodes/makefuncs.c
+++ b/src/backend/nodes/makefuncs.c
@@ -554,3 +554,18 @@ makeFuncCall(List *name, List *args, int location)
 	n->location = location;
 	return n;
 }
+
+/*
+ * makeGroupingSet -
+ *	  create a GroupingSet node
+ */
+GroupingSet *
+makeGroupingSet(GroupingSetKind kind, List *content, int location)
+{
+	GroupingSet	   *n = makeNode(GroupingSet);
+
+	n->kind = kind;
+	n->content = content;
+	n->location = location;
+	return n;
+}
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index 41e973b..6a63d1b 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -45,6 +45,12 @@ exprType(const Node *expr)
 		case T_Var:
 			type = ((const Var *) expr)->vartype;
 			break;
+		case T_Grouping:
+			type = INT4OID;
+			break;
+		case T_GroupedVar:
+			type = ((const GroupedVar *) expr)->vartype;
+			break;
 		case T_Const:
 			type = ((const Const *) expr)->consttype;
 			break;
@@ -261,6 +267,10 @@ exprTypmod(const Node *expr)
 	{
 		case T_Var:
 			return ((const Var *) expr)->vartypmod;
+		case T_Grouping:
+			return -1;
+		case T_GroupedVar:
+			return ((const GroupedVar *) expr)->vartypmod;
 		case T_Const:
 			return ((const Const *) expr)->consttypmod;
 		case T_Param:
@@ -734,6 +744,12 @@ exprCollation(const Node *expr)
 		case T_Var:
 			coll = ((const Var *) expr)->varcollid;
 			break;
+		case T_Grouping:
+			coll = InvalidOid;
+			break;
+		case T_GroupedVar:
+			coll = ((const GroupedVar *) expr)->varcollid;
+			break;
 		case T_Const:
 			coll = ((const Const *) expr)->constcollid;
 			break;
@@ -967,6 +983,9 @@ exprSetCollation(Node *expr, Oid collation)
 		case T_Var:
 			((Var *) expr)->varcollid = collation;
 			break;
+		case T_GroupedVar:
+			((GroupedVar *) expr)->varcollid = collation;
+			break;
 		case T_Const:
 			((Const *) expr)->constcollid = collation;
 			break;
@@ -1003,6 +1022,9 @@ exprSetCollation(Node *expr, Oid collation)
 		case T_BoolExpr:
 			Assert(!OidIsValid(collation));		/* result is always boolean */
 			break;
+		case T_Grouping:
+			Assert(!OidIsValid(collation));
+			break;
 		case T_SubLink:
 #ifdef USE_ASSERT_CHECKING
 			{
@@ -1182,6 +1204,15 @@ exprLocation(const Node *expr)
 		case T_Var:
 			loc = ((const Var *) expr)->location;
 			break;
+		case T_Grouping:
+			loc = ((const Grouping *) expr)->location;
+			break;
+		case T_GroupedVar:
+			loc = ((const GroupedVar *) expr)->location;
+			break;
+		case T_GroupingSet:
+			loc = ((const GroupingSet *) expr)->location;
+			break;
 		case T_Const:
 			loc = ((const Const *) expr)->location;
 			break;
@@ -1622,6 +1653,7 @@ expression_tree_walker(Node *node,
 	switch (nodeTag(node))
 	{
 		case T_Var:
+		case T_GroupedVar:
 		case T_Const:
 		case T_Param:
 		case T_CoerceToDomainValue:
@@ -1655,6 +1687,15 @@ expression_tree_walker(Node *node,
 					return true;
 			}
 			break;
+		case T_Grouping:
+			{
+				Grouping   *grouping = (Grouping *) node;
+
+				if (expression_tree_walker((Node *) grouping->args,
+										   walker, context))
+					return true;
+			}
+			break;
 		case T_WindowFunc:
 			{
 				WindowFunc *expr = (WindowFunc *) node;
@@ -2144,6 +2185,15 @@ expression_tree_mutator(Node *node,
 				return (Node *) newnode;
 			}
 			break;
+		case T_GroupedVar:
+			{
+				GroupedVar         *groupedvar = (GroupedVar *) node;
+				GroupedVar		   *newnode;
+
+				FLATCOPY(newnode, groupedvar, GroupedVar);
+				return (Node *) newnode;
+			}
+			break;
 		case T_Const:
 			{
 				Const	   *oldnode = (Const *) node;
@@ -2162,6 +2212,17 @@ expression_tree_mutator(Node *node,
 		case T_RangeTblRef:
 		case T_SortGroupClause:
 			return (Node *) copyObject(node);
+		case T_Grouping:
+			{
+				Grouping	   *grouping = (Grouping *) node;
+				Grouping	   *newnode;
+
+				FLATCOPY(newnode, grouping, Grouping);
+				MUTATE(newnode->args, grouping->args, List *);
+				/* assume no need to copy or mutate the refs list */
+				return (Node *) newnode;
+			}
+			break;
 		case T_WithCheckOption:
 			{
 				WithCheckOption *wco = (WithCheckOption *) node;
@@ -3209,6 +3270,8 @@ raw_expression_tree_walker(Node *node,
 			return walker(((WithClause *) node)->ctes, context);
 		case T_CommonTableExpr:
 			return walker(((CommonTableExpr *) node)->ctequery, context);
+		case T_GroupingSet:
+			return walker(((GroupingSet *) node)->content, context);
 		default:
 			elog(ERROR, "unrecognized node type: %d",
 				 (int) nodeTag(node));
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index e686a6c..6e4efb4 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -643,6 +643,8 @@ _outAgg(StringInfo str, const Agg *node)
 		appendStringInfo(str, " %u", node->grpOperators[i]);
 
 	WRITE_LONG_FIELD(numGroups);
+
+	WRITE_NODE_FIELD(groupingSets);
 }
 
 static void
@@ -912,6 +914,44 @@ _outVar(StringInfo str, const Var *node)
 }
 
 static void
+_outGrouping(StringInfo str, const Grouping *node)
+{
+	WRITE_NODE_TYPE("GROUPING");
+
+	WRITE_NODE_FIELD(args);
+	WRITE_NODE_FIELD(refs);
+	WRITE_NODE_FIELD(cols);
+	WRITE_LOCATION_FIELD(location);
+	WRITE_INT_FIELD(agglevelsup);
+}
+
+static void
+_outGroupedVar(StringInfo str, const GroupedVar *node)
+{
+	WRITE_NODE_TYPE("GROUPEDVAR");
+
+	WRITE_UINT_FIELD(varno);
+	WRITE_INT_FIELD(varattno);
+	WRITE_OID_FIELD(vartype);
+	WRITE_INT_FIELD(vartypmod);
+	WRITE_OID_FIELD(varcollid);
+	WRITE_UINT_FIELD(varlevelsup);
+	WRITE_UINT_FIELD(varnoold);
+	WRITE_INT_FIELD(varoattno);
+	WRITE_LOCATION_FIELD(location);
+}
+
+static void
+_outGroupingSet(StringInfo str, const GroupingSet *node)
+{
+	WRITE_NODE_TYPE("GROUPINGSET");
+
+	WRITE_ENUM_FIELD(kind, GroupingSetKind);
+	WRITE_NODE_FIELD(content);
+	WRITE_LOCATION_FIELD(location);
+}
+
+static void
 _outConst(StringInfo str, const Const *node)
 {
 	WRITE_NODE_TYPE("CONST");
@@ -2270,6 +2310,7 @@ _outQuery(StringInfo str, const Query *node)
 	WRITE_NODE_FIELD(withCheckOptions);
 	WRITE_NODE_FIELD(returningList);
 	WRITE_NODE_FIELD(groupClause);
+	WRITE_NODE_FIELD(groupingSets);
 	WRITE_NODE_FIELD(havingQual);
 	WRITE_NODE_FIELD(windowClause);
 	WRITE_NODE_FIELD(distinctClause);
@@ -2914,6 +2955,15 @@ _outNode(StringInfo str, const void *obj)
 			case T_Var:
 				_outVar(str, obj);
 				break;
+			case T_GroupedVar:
+				_outGroupedVar(str, obj);
+				break;
+			case T_Grouping:
+				_outGrouping(str, obj);
+				break;
+			case T_GroupingSet:
+				_outGroupingSet(str, obj);
+				break;
 			case T_Const:
 				_outConst(str, obj);
 				break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 69d9989..a58e099 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -215,6 +215,7 @@ _readQuery(void)
 	READ_NODE_FIELD(withCheckOptions);
 	READ_NODE_FIELD(returningList);
 	READ_NODE_FIELD(groupClause);
+	READ_NODE_FIELD(groupingSets);
 	READ_NODE_FIELD(havingQual);
 	READ_NODE_FIELD(windowClause);
 	READ_NODE_FIELD(distinctClause);
@@ -439,6 +440,53 @@ _readVar(void)
 	READ_DONE();
 }
 
+static Grouping *
+_readGrouping(void)
+{
+	READ_LOCALS(Grouping);
+
+	READ_NODE_FIELD(args);
+	READ_NODE_FIELD(refs);
+	READ_NODE_FIELD(cols);
+	READ_LOCATION_FIELD(location);
+	READ_INT_FIELD(agglevelsup);
+
+	READ_DONE();
+}
+
+/*
+ * _readGroupedVar
+ */
+static GroupedVar *
+_readGroupedVar(void)
+{
+	READ_LOCALS(GroupedVar);
+
+	READ_UINT_FIELD(varno);
+	READ_INT_FIELD(varattno);
+	READ_OID_FIELD(vartype);
+	READ_INT_FIELD(vartypmod);
+	READ_OID_FIELD(varcollid);
+	READ_UINT_FIELD(varlevelsup);
+	READ_UINT_FIELD(varnoold);
+	READ_INT_FIELD(varoattno);
+	READ_LOCATION_FIELD(location);
+
+	READ_DONE();
+}
+
+static GroupingSet *
+_readGroupingSet(void)
+{
+	READ_LOCALS(GroupingSet);
+
+	READ_ENUM_FIELD(kind, GroupingSetKind);
+	READ_NODE_FIELD(content);
+	READ_LOCATION_FIELD(location);
+
+	READ_DONE();
+}
+
 /*
  * _readConst
  */
@@ -1320,6 +1368,12 @@ parseNodeString(void)
 		return_value = _readIntoClause();
 	else if (MATCH("VAR", 3))
 		return_value = _readVar();
+	else if (MATCH("GROUPEDVAR", 10))
+		return_value = _readGroupedVar();
+	else if (MATCH("GROUPING", 8))
+		return_value = _readGrouping();
+	else if (MATCH("GROUPINGSET", 11))
+		return_value = _readGroupingSet();
 	else if (MATCH("CONST", 5))
 		return_value = _readConst();
 	else if (MATCH("PARAM", 5))
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index c81efe9..a16df6f 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -1231,6 +1231,7 @@ set_subquery_pathlist(PlannerInfo *root, RelOptInfo *rel,
 	 */
 	if (parse->hasAggs ||
 		parse->groupClause ||
+		parse->groupingSets ||
 		parse->havingQual ||
 		parse->distinctClause ||
 		parse->sortClause ||
@@ -2104,7 +2105,7 @@ subquery_push_qual(Query *subquery, RangeTblEntry *rte, Index rti, Node *qual)
 		 * subquery uses grouping or aggregation, put it in HAVING (since the
 		 * qual really refers to the group-result rows).
 		 */
-		if (subquery->hasAggs || subquery->groupClause || subquery->havingQual)
+		if (subquery->hasAggs || subquery->groupClause || subquery->groupingSets || subquery->havingQual)
 			subquery->havingQual = make_and_qual(subquery->havingQual, qual);
 		else
 			subquery->jointree->quals =
diff --git a/src/backend/optimizer/plan/analyzejoins.c b/src/backend/optimizer/plan/analyzejoins.c
index 773f8a4..e8b6671 100644
--- a/src/backend/optimizer/plan/analyzejoins.c
+++ b/src/backend/optimizer/plan/analyzejoins.c
@@ -580,6 +580,7 @@ query_supports_distinctness(Query *query)
 {
 	if (query->distinctClause != NIL ||
 		query->groupClause != NIL ||
+		query->groupingSets != NIL ||
 		query->hasAggs ||
 		query->havingQual ||
 		query->setOperations)
@@ -648,10 +649,10 @@ query_is_distinct_for(Query *query, List *colnos, List *opids)
 	}
 
 	/*
-	 * Similarly, GROUP BY guarantees uniqueness if all the grouped columns
-	 * appear in colnos and operator semantics match.
+	 * Similarly, GROUP BY without GROUPING SETS guarantees uniqueness if all
+	 * the grouped columns appear in colnos and operator semantics match.
 	 */
-	if (query->groupClause)
+	if (query->groupClause && !query->groupingSets)
 	{
 		foreach(l, query->groupClause)
 		{
@@ -667,6 +668,27 @@ query_is_distinct_for(Query *query, List *colnos, List *opids)
 		if (l == NULL)			/* had matches for all? */
 			return true;
 	}
+	else if (query->groupingSets)
+	{
+		/*
+		 * If we have grouping sets with expressions, we probably
+		 * don't have uniqueness and analysis would be hard. Punt.
+		 */
+		if (query->groupClause)
+			return false;
+
+		/*
+		 * If we have no groupClause (therefore no grouping expressions),
+		 * we might have one or many empty grouping sets. If there's just
+		 * one, then we're returning only one row and are certainly unique.
+		 * But otherwise, we are certainly not unique.
+		 */
+		if (list_length(query->groupingSets) == 1
+			&& ((GroupingSet *)linitial(query->groupingSets))->kind == GROUPING_SET_EMPTY)
+			return true;
+		else
+			return false;
+	}
 	else
 	{
 		/*
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index 4b641a2..1a47f0f 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -1015,6 +1015,7 @@ create_unique_plan(PlannerInfo *root, UniquePath *best_path)
 								 numGroupCols,
 								 groupColIdx,
 								 groupOperators,
+								 NIL,
 								 numGroups,
 								 subplan);
 	}
@@ -4265,6 +4266,7 @@ Agg *
 make_agg(PlannerInfo *root, List *tlist, List *qual,
 		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
 		 int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators,
+		 List *groupingSets,
 		 long numGroups,
 		 Plan *lefttree)
 {
@@ -4294,10 +4296,12 @@ make_agg(PlannerInfo *root, List *tlist, List *qual,
 	 * group otherwise.
 	 */
 	if (aggstrategy == AGG_PLAIN)
-		plan->plan_rows = 1;
+		plan->plan_rows = groupingSets ? list_length(groupingSets) : 1;
 	else
 		plan->plan_rows = numGroups;
 
+	node->groupingSets = groupingSets;
+
 	/*
 	 * We also need to account for the cost of evaluation of the qual (ie, the
 	 * HAVING clause) and the tlist.  Note that cost_qual_eval doesn't charge
diff --git a/src/backend/optimizer/plan/planagg.c b/src/backend/optimizer/plan/planagg.c
index 94ca92d..296b789 100644
--- a/src/backend/optimizer/plan/planagg.c
+++ b/src/backend/optimizer/plan/planagg.c
@@ -96,7 +96,7 @@ preprocess_minmax_aggregates(PlannerInfo *root, List *tlist)
 	 * performs assorted processing related to these features between calling
 	 * preprocess_minmax_aggregates and optimize_minmax_aggregates.)
 	 */
-	if (parse->groupClause || parse->hasWindowFuncs)
+	if (parse->groupClause || list_length(parse->groupingSets) > 1 || parse->hasWindowFuncs)
 		return;
 
 	/*
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index e1480cd..f53cc0a 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -22,6 +22,7 @@
 #include "executor/nodeAgg.h"
 #include "miscadmin.h"
 #include "nodes/makefuncs.h"
+#include "nodes/nodeFuncs.h"
 #ifdef OPTIMIZER_DEBUG
 #include "nodes/print.h"
 #endif
@@ -37,6 +38,7 @@
 #include "optimizer/tlist.h"
 #include "parser/analyze.h"
 #include "parser/parsetree.h"
+#include "parser/parse_agg.h"
 #include "rewrite/rewriteManip.h"
 #include "utils/rel.h"
 #include "utils/selfuncs.h"
@@ -77,7 +79,8 @@ static double preprocess_limit(PlannerInfo *root,
 				 double tuple_fraction,
 				 int64 *offset_est, int64 *count_est);
 static bool limit_needed(Query *parse);
-static void preprocess_groupclause(PlannerInfo *root);
+static List *preprocess_groupclause(PlannerInfo *root, List *force);
+static List *extract_rollup_sets(List *groupingSets, List *sortclause, List **remainder);
 static void standard_qp_callback(PlannerInfo *root, void *extra);
 static bool choose_hashed_grouping(PlannerInfo *root,
 					   double tuple_fraction, double limit_tuples,
@@ -315,6 +318,8 @@ subquery_planner(PlannerGlobal *glob, Query *parse,
 	root->append_rel_list = NIL;
 	root->rowMarks = NIL;
 	root->hasInheritedTarget = false;
+	root->groupColIdx = NULL;
+	root->grouping_map = NULL;
 
 	root->hasRecursion = hasRecursion;
 	if (hasRecursion)
@@ -531,7 +536,8 @@ subquery_planner(PlannerGlobal *glob, Query *parse,
 
 		if (contain_agg_clause(havingclause) ||
 			contain_volatile_functions(havingclause) ||
-			contain_subplans(havingclause))
+			contain_subplans(havingclause) ||
+			parse->groupingSets)
 		{
 			/* keep it in HAVING */
 			newHaving = lappend(newHaving, havingclause);
@@ -1187,15 +1193,77 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		bool		use_hashed_grouping = false;
 		WindowFuncLists *wflists = NULL;
 		List	   *activeWindows = NIL;
+		int			maxref = 0;
+		int		   *refmap = NULL;
 
 		MemSet(&agg_costs, 0, sizeof(AggClauseCosts));
 
 		/* A recursive query should always have setOperations */
 		Assert(!root->hasRecursion);
 
-		/* Preprocess GROUP BY clause, if any */
-		if (parse->groupClause)
-			preprocess_groupclause(root);
+		/* Preprocess grouping sets, if any */
+		if (parse->groupingSets)
+			parse->groupingSets = expand_grouping_sets(parse->groupingSets, -1);
+
+		if (parse->groupingSets)
+		{
+			ListCell   *lc;
+			ListCell   *lc2;
+			int			ref = 0;
+			List	   *remaining_sets = NIL;
+			List	   *usable_sets = extract_rollup_sets(parse->groupingSets,
+														  parse->sortClause,
+														  &remaining_sets);
+
+			/*
+			 * TODO - if the grouping set list can't be handled as one rollup...
+			 */
+
+			if (remaining_sets != NIL)
+				elog(ERROR, "not implemented yet");
+
+			parse->groupingSets = usable_sets;
+
+			if (parse->groupClause)
+				preprocess_groupclause(root, linitial(parse->groupingSets));
+
+			/*
+			 * Now that we've pinned down an order for the groupClause for this
+			 * list of grouping sets, remap the entries in the grouping sets
+			 * from sortgrouprefs to plain indices into the groupClause.
+			 */
+
+			foreach(lc, parse->groupClause)
+			{
+				SortGroupClause *gc = lfirst(lc);
+				if (gc->tleSortGroupRef > maxref)
+					maxref = gc->tleSortGroupRef;
+			}
+
+			refmap = palloc0(sizeof(int) * (maxref + 1));
+
+			foreach(lc, parse->groupClause)
+			{
+				SortGroupClause *gc = lfirst(lc);
+				refmap[gc->tleSortGroupRef] = ++ref;
+			}
+
+			foreach(lc, usable_sets)
+			{
+				foreach(lc2, (List *) lfirst(lc))
+				{
+					Assert(refmap[lfirst_int(lc2)] > 0);
+					lfirst_int(lc2) = refmap[lfirst_int(lc2)] - 1;
+				}
+			}
+		}
+		else
+		{
+			/* Preprocess GROUP BY clause, if any */
+			if (parse->groupClause)
+				preprocess_groupclause(root, NIL);
+		}
+
 		numGroupCols = list_length(parse->groupClause);
 
 		/* Preprocess targetlist */
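expand_grouping_sets, called at the top of this block, reduces the ROLLUP/CUBE/GROUPING SETS syntax to a flat list of column-reference sets. The spec's expansion rules for the two shorthands can be sketched in Python as follows (function names are invented; the real code also handles nested GROUPING SETS and cross-products of multiple group-by items):

```python
from itertools import combinations

def expand_rollup(cols):
    """ROLLUP(e1,...,en) -> (e1..en), (e1..en-1), ..., ()."""
    return [tuple(cols[:n]) for n in range(len(cols), -1, -1)]

def expand_cube(cols):
    """CUBE(e1,...,en) -> all 2**n subsets, largest sizes first."""
    return [subset
            for n in range(len(cols), -1, -1)
            for subset in combinations(cols, n)]
```

Note that a rollup expansion is already a subset chain, so it always survives extract_rollup_sets intact, while a cube of more than one column cannot be done in a single sorted pass.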
@@ -1257,6 +1325,9 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 			preprocess_minmax_aggregates(root, tlist);
 		}
 
+		if (refmap)
+			pfree(refmap);
+
 		/* Make tuple_fraction accessible to lower-level routines */
 		root->tuple_fraction = tuple_fraction;
 
@@ -1267,6 +1338,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		 * grouping/aggregation operations.
 		 */
 		if (parse->groupClause ||
+			parse->groupingSets ||
 			parse->distinctClause ||
 			parse->hasAggs ||
 			parse->hasWindowFuncs ||
@@ -1312,7 +1384,23 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 
 			groupExprs = get_sortgrouplist_exprs(parse->groupClause,
 												 parse->targetList);
-			dNumGroups = estimate_num_groups(root, groupExprs, path_rows);
+			if (parse->groupingSets)
+			{
+				ListCell   *lc;
+
+				dNumGroups = 0;
+
+				foreach(lc, parse->groupingSets)
+				{
+					dNumGroups += estimate_num_groups(root,
+													  groupExprs,
+													  path_rows,
+													  (List **) &(lfirst(lc)));
+				}
+			}
+			else
+				dNumGroups = estimate_num_groups(root, groupExprs, path_rows,
+												 NULL);
 
 			/*
 			 * In GROUP BY mode, an absolute LIMIT is relative to the number
@@ -1338,7 +1426,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 									   root->group_pathkeys))
 				tuple_fraction = 0.0;
 		}
-		else if (parse->hasAggs || root->hasHavingQual)
+		else if (parse->hasAggs || root->hasHavingQual || parse->groupingSets)
 		{
 			/*
 			 * Ungrouped aggregate will certainly want to read all the tuples,
@@ -1360,7 +1448,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 
 			distinctExprs = get_sortgrouplist_exprs(parse->distinctClause,
 													parse->targetList);
-			dNumGroups = estimate_num_groups(root, distinctExprs, path_rows);
+			dNumGroups = estimate_num_groups(root, distinctExprs, path_rows, NULL);
 
 			/*
 			 * Adjust tuple_fraction the same way as for GROUP BY, too.
@@ -1443,13 +1531,24 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		{
 			/*
 			 * If grouping, decide whether to use sorted or hashed grouping.
+			 * If grouping sets are present, we can currently only do sorted
+			 * grouping.
 			 */
-			use_hashed_grouping =
-				choose_hashed_grouping(root,
-									   tuple_fraction, limit_tuples,
-									   path_rows, path_width,
-									   cheapest_path, sorted_path,
-									   dNumGroups, &agg_costs);
+
+			if (parse->groupingSets)
+			{
+				use_hashed_grouping = false;
+			}
+			else
+			{
+				use_hashed_grouping =
+					choose_hashed_grouping(root,
+										   tuple_fraction, limit_tuples,
+										   path_rows, path_width,
+										   cheapest_path, sorted_path,
+										   dNumGroups, &agg_costs);
+			}
+
 			/* Also convert # groups to long int --- but 'ware overflow! */
 			numGroups = (long) Min(dNumGroups, (double) LONG_MAX);
 		}
@@ -1591,12 +1690,13 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 												numGroupCols,
 												groupColIdx,
 									extract_grouping_ops(parse->groupClause),
+												NIL,
 												numGroups,
 												result_plan);
 				/* Hashed aggregation produces randomly-ordered results */
 				current_pathkeys = NIL;
 			}
-			else if (parse->hasAggs)
+			else if (parse->hasAggs || (parse->groupingSets && parse->groupClause))
 			{
 				/* Plain aggregate plan --- sort if needed */
 				AggStrategy aggstrategy;
@@ -1622,7 +1722,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 				else
 				{
 					aggstrategy = AGG_PLAIN;
-					/* Result will be only one row anyway; no sort order */
+					/* Result will have no sort order */
 					current_pathkeys = NIL;
 				}
 
@@ -1634,6 +1734,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 												numGroupCols,
 												groupColIdx,
 									extract_grouping_ops(parse->groupClause),
+												parse->groupingSets,
 												numGroups,
 												result_plan);
 			}
@@ -1666,27 +1767,66 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 												  result_plan);
 				/* The Group node won't change sort ordering */
 			}
-			else if (root->hasHavingQual)
+			else if (root->hasHavingQual || parse->groupingSets)
 			{
+				int		nrows = list_length(parse->groupingSets);
+
 				/*
-				 * No aggregates, and no GROUP BY, but we have a HAVING qual.
+				 * No aggregates, and no GROUP BY, but we have a HAVING qual or
+				 * grouping sets (which by elimination of cases above must
+				 * consist solely of empty grouping sets, since otherwise
+				 * groupClause will be non-empty).
+				 *
 				 * This is a degenerate case in which we are supposed to emit
-				 * either 0 or 1 row depending on whether HAVING succeeds.
-				 * Furthermore, there cannot be any variables in either HAVING
-				 * or the targetlist, so we actually do not need the FROM
-				 * table at all!  We can just throw away the plan-so-far and
-				 * generate a Result node.  This is a sufficiently unusual
-				 * corner case that it's not worth contorting the structure of
-				 * this routine to avoid having to generate the plan in the
-				 * first place.
+				 * either 0 or 1 row for each grouping set depending on whether
+				 * HAVING succeeds.  Furthermore, there cannot be any variables
+				 * in either HAVING or the targetlist, so we actually do not
+				 * need the FROM table at all!  We can just throw away the
+				 * plan-so-far and generate a Result node.  This is a
+				 * sufficiently unusual corner case that it's not worth
+				 * contorting the structure of this routine to avoid having to
+				 * generate the plan in the first place.
 				 */
 				result_plan = (Plan *) make_result(root,
 												   tlist,
 												   parse->havingQual,
 												   NULL);
+
+				/*
+				 * Doesn't seem worthwhile writing code to cons up a
+				 * generate_series or a values scan to emit multiple rows.
+				 * Instead just clone the result in an Append.
+				 */
+				if (nrows > 1)
+				{
+					List   *plans = list_make1(result_plan);
+
+					while (--nrows > 0)
+						plans = lappend(plans, copyObject(result_plan));
+
+					result_plan = (Plan *) make_append(plans, tlist);
+				}
 			}
 		}						/* end of non-minmax-aggregate case */
 
+		/* Record grouping_map based on final groupColIdx, for setrefs */
+
+		if (parse->groupingSets)
+		{
+			AttrNumber *grouping_map = palloc0(sizeof(AttrNumber) * (maxref + 1));
+			ListCell   *lc;
+			int			i = 0;
+
+			foreach(lc, parse->groupClause)
+			{
+				SortGroupClause *gc = lfirst(lc);
+				grouping_map[gc->tleSortGroupRef] = groupColIdx[i++];
+			}
+
+			root->groupColIdx = groupColIdx;
+			root->grouping_map = grouping_map;
+		}
+
 		/*
 		 * Since each window function could require a different sort order, we
 		 * stack up a WindowAgg node for each window, with sort steps between
@@ -1849,7 +1989,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		 * result was already mostly unique).  If not, use the number of
 		 * distinct-groups calculated previously.
 		 */
-		if (parse->groupClause || root->hasHavingQual || parse->hasAggs)
+		if (parse->groupClause || parse->groupingSets || root->hasHavingQual || parse->hasAggs)
 			dNumDistinctRows = result_plan->plan_rows;
 		else
 			dNumDistinctRows = dNumGroups;
@@ -1890,6 +2030,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 								 extract_grouping_cols(parse->distinctClause,
 													result_plan->targetlist),
 								 extract_grouping_ops(parse->distinctClause),
+											NIL,
 											numDistinctRows,
 											result_plan);
 			/* Hashed aggregation produces randomly-ordered results */
@@ -2508,6 +2649,7 @@ limit_needed(Query *parse)
 }
 
 
+
 /*
  * preprocess_groupclause - do preparatory work on GROUP BY clause
  *
@@ -2524,18 +2666,32 @@ limit_needed(Query *parse)
  * Note: we need no comparable processing of the distinctClause because
  * the parser already enforced that that matches ORDER BY.
  */
-static void
-preprocess_groupclause(PlannerInfo *root)
+static List *
+preprocess_groupclause(PlannerInfo *root, List *force)
 {
 	Query	   *parse = root->parse;
-	List	   *new_groupclause;
+	List	   *new_groupclause = NIL;
 	bool		partial_match;
 	ListCell   *sl;
 	ListCell   *gl;
 
+	/* For grouping sets, we may need to force the ordering */
+	if (force)
+	{
+		foreach(sl, force)
+		{
+			Index ref = lfirst_int(sl);
+			SortGroupClause *cl = get_sortgroupref_clause(ref, parse->groupClause);
+
+			new_groupclause = lappend(new_groupclause, cl);
+		}
+
+		return new_groupclause;
+	}
+
 	/* If no ORDER BY, nothing useful to do here */
 	if (parse->sortClause == NIL)
-		return;
+		return parse->groupClause;
 
 	/*
 	 * Scan the ORDER BY clause and construct a list of matching GROUP BY
@@ -2543,7 +2699,6 @@ preprocess_groupclause(PlannerInfo *root)
 	 *
 	 * This code assumes that the sortClause contains no duplicate items.
 	 */
-	new_groupclause = NIL;
 	foreach(sl, parse->sortClause)
 	{
 		SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
@@ -2567,7 +2722,7 @@ preprocess_groupclause(PlannerInfo *root)
 
 	/* If no match at all, no point in reordering GROUP BY */
 	if (new_groupclause == NIL)
-		return;
+		return parse->groupClause;
 
 	/*
 	 * Add any remaining GROUP BY items to the new list, but only if we were
@@ -2584,15 +2739,113 @@ preprocess_groupclause(PlannerInfo *root)
 		if (list_member_ptr(new_groupclause, gc))
 			continue;			/* it matched an ORDER BY item */
 		if (partial_match)
-			return;				/* give up, no common sort possible */
+			return parse->groupClause;	/* give up, no common sort possible */
 		if (!OidIsValid(gc->sortop))
-			return;				/* give up, GROUP BY can't be sorted */
+			return parse->groupClause;	/* give up, GROUP BY can't be sorted */
 		new_groupclause = lappend(new_groupclause, gc);
 	}
 
 	/* Success --- install the rearranged GROUP BY list */
 	Assert(list_length(parse->groupClause) == list_length(new_groupclause));
-	parse->groupClause = new_groupclause;
+	return new_groupclause;
+}
+
+
+/*
+ * Extract a list of grouping sets that can be implemented using a single
+ * rollup-type aggregate pass. The order of elements in each returned set is
+ * modified to ensure proper prefix relationships; the sets are returned in
+ * decreasing order of size. (The input must also be in descending order of
+ * size.)
+ *
+ * If we're passed in a sortclause, we follow its order of columns to the
+ * extent possible, to minimize the chance that we add unnecessary sorts.
+ *
+ * Sets that can't be accommodated within a rollup that includes the first
+ * (and therefore largest) grouping set in the input are added to the
+ * remainder list.
+ */
+
+static List *
+extract_rollup_sets(List *groupingSets, List *sortclause, List **remainder)
+{
+	ListCell   *lc;
+	ListCell   *lc2;
+	List	   *previous = linitial(groupingSets);
+	List	   *tmp_result = list_make1(previous);
+	List	   *result = NIL;
+
+	for_each_cell(lc, lnext(list_head(groupingSets)))
+	{
+		List   *candidate = lfirst(lc);
+		bool	ok = true;
+
+		foreach(lc2, candidate)
+		{
+			int ref = lfirst_int(lc2);
+			if (!list_member_int(previous, ref))
+			{
+				ok = false;
+				break;
+			}
+		}
+
+		if (ok)
+		{
+			tmp_result = lcons(candidate, tmp_result);
+			previous = candidate;
+		}
+		else
+			*remainder = lappend(*remainder, candidate);
+	}
+
+	/*
+	 * Reorder the list elements so that shorter sets are strict
+	 * prefixes of longer ones, and if we ever have a choice, try
+	 * to follow the sortclause if there is one. (We're trying
+	 * here to ensure that GROUPING SETS ((a,b),(b)) ORDER BY b,a
+	 * gets implemented in one pass.)
+	 */
+
+	previous = NIL;
+
+	foreach(lc, tmp_result)
+	{
+		List   *candidate = lfirst(lc);
+		List   *new_elems = list_difference_int(candidate, previous);
+
+		if (list_length(new_elems) > 0)
+		{
+			while (list_length(sortclause) > list_length(previous))
+			{
+				SortGroupClause *sc = list_nth(sortclause, list_length(previous));
+				int ref = sc->tleSortGroupRef;
+				if (list_member_int(new_elems, ref))
+				{
+					previous = lappend_int(previous, ref);
+					new_elems = list_delete_int(new_elems, ref);
+				}
+				else
+				{
+					sortclause = NIL;
+					break;
+				}
+			}
+
+			foreach(lc2, new_elems)
+			{
+				previous = lappend_int(previous, lfirst_int(lc2));
+			}
+		}
+
+		result = lcons(list_copy(previous), result);
+		list_free(new_elems);
+	}
+
+	list_free(previous);
+	list_free(tmp_result);
+
+	return result;
 }
 
 /*
@@ -3040,7 +3293,7 @@ make_subplanTargetList(PlannerInfo *root,
 	 * If we're not grouping or aggregating, there's nothing to do here;
 	 * query_planner should receive the unmodified target list.
 	 */
-	if (!parse->hasAggs && !parse->groupClause && !root->hasHavingQual &&
+	if (!parse->hasAggs && !parse->groupClause && !parse->groupingSets && !root->hasHavingQual &&
 		!parse->hasWindowFuncs)
 	{
 		*need_tlist_eval = true;
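For readers following along, the two phases of extract_rollup_sets above (chain extraction, then prefix ordering that tracks the ORDER BY columns) can be modelled outside the planner. The following Python sketch is an illustration only, not the patch's C code; the function name and the plain-list representation of grouping sets are assumptions:

```python
# Model of extract_rollup_sets: keep the chain of grouping sets in which
# each set is a subset of its predecessor, push the rest to "remainder",
# then order columns so each set is a prefix of the next larger one,
# following the ORDER BY (sortclause) column order while it still matches.

def extract_rollup_sets(grouping_sets, sortclause):
    """grouping_sets: lists of column refs in descending order of size.
    sortclause: column refs in ORDER BY order.
    Returns (rollup sets largest-first, remainder)."""
    previous = grouping_sets[0]
    chain = [previous]              # ends up smallest-first
    remainder = []                  # sets needing separate passes
    for candidate in grouping_sets[1:]:
        if all(ref in previous for ref in candidate):
            chain.insert(0, candidate)
            previous = candidate
        else:
            remainder.append(candidate)

    prefix = []
    result = []
    for candidate in chain:
        new_elems = [ref for ref in candidate if ref not in prefix]
        if new_elems:
            # Follow the ORDER BY columns as long as they keep matching;
            # once one fails to match, stop consulting the sortclause.
            while len(sortclause) > len(prefix):
                ref = sortclause[len(prefix)]
                if ref in new_elems:
                    prefix.append(ref)
                    new_elems.remove(ref)
                else:
                    sortclause = []
            prefix.extend(new_elems)
        result.insert(0, list(prefix))  # largest set first, as in the patch
    return result, remainder
```

With column refs a=1, b=2, this reproduces the comment's example: GROUPING SETS ((a,b),(b)) ORDER BY b,a becomes a single rollup sorted on (b,a).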
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index 4d717df..346c84d 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -68,6 +68,12 @@ typedef struct
 	int			rtoffset;
 } fix_upper_expr_context;
 
+typedef struct
+{
+	PlannerInfo *root;
+	Bitmapset   *groupedcols;
+} set_group_vars_context;
+
 /*
  * Check if a Const node is a regclass value.  We accept plain OID too,
  * since a regclass Const will get folded to that type if it's an argument
@@ -134,6 +140,8 @@ static List *set_returning_clause_references(PlannerInfo *root,
 static bool fix_opfuncids_walker(Node *node, void *context);
 static bool extract_query_dependencies_walker(Node *node,
 								  PlannerInfo *context);
+static void set_group_vars(PlannerInfo *root, Agg *agg);
+static Node *set_group_vars_mutator(Node *node, set_group_vars_context *context);
 
 
 /*****************************************************************************
@@ -647,6 +655,9 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
 			}
 			break;
 		case T_Agg:
+			set_upper_references(root, plan, rtoffset);
+			set_group_vars(root, (Agg *) plan);
+			break;
 		case T_Group:
 			set_upper_references(root, plan, rtoffset);
 			break;
@@ -1119,6 +1130,31 @@ fix_expr_common(PlannerInfo *root, Node *node)
 				lappend_oid(root->glob->relationOids,
 							DatumGetObjectId(con->constvalue));
 	}
+	else if (IsA(node, Grouping))
+	{
+		Grouping   *g = (Grouping *) node;
+		AttrNumber *refmap = root->grouping_map;
+
+		/* If there are no grouping sets, we don't need this. */
+
+		Assert(refmap || g->cols == NIL);
+
+		if (refmap)
+		{
+			ListCell   *lc;
+			List	   *cols = NIL;
+
+			foreach(lc, g->refs)
+			{
+				cols = lappend_int(cols, refmap[lfirst_int(lc)]);
+			}
+
+			Assert(!g->cols || equal(cols, g->cols));
+
+			if (!g->cols)
+				g->cols = cols;
+		}
+	}
 }
 
 /*
@@ -1246,6 +1282,67 @@ fix_scan_expr_walker(Node *node, fix_scan_expr_context *context)
 								  (void *) context);
 }
 
+
+/*
+ * set_group_vars
+ *    Modify any Var references in the target list of a non-trivial
+ *    (i.e. contains grouping sets) Agg node to use GroupedVar instead,
+ *    which will conditionally replace them with nulls at runtime.
+ */
+static void
+set_group_vars(PlannerInfo *root, Agg *agg)
+{
+	set_group_vars_context context;
+	int i;
+	Bitmapset *cols = NULL;
+
+	if (!agg->groupingSets)
+		return;
+
+	context.root = root;
+
+	for (i = 0; i < agg->numCols; ++i)
+		cols = bms_add_member(cols, agg->grpColIdx[i]);
+
+	context.groupedcols = cols;
+
+	agg->plan.targetlist = (List *) set_group_vars_mutator((Node *) agg->plan.targetlist,
+														   &context);
+	agg->plan.qual = (List *) set_group_vars_mutator((Node *) agg->plan.qual,
+													 &context);
+}
+
+static Node *
+set_group_vars_mutator(Node *node, set_group_vars_context *context)
+{
+	if (node == NULL)
+		return NULL;
+	if (IsA(node, Var))
+	{
+		Var *var = (Var *) node;
+
+		if (var->varno == OUTER_VAR
+			&& bms_is_member(var->varattno, context->groupedcols))
+		{
+			var = copyVar(var);
+			var->xpr.type = T_GroupedVar;
+		}
+
+		return (Node *) var;
+	}
+	else if (IsA(node, Aggref) || IsA(node, Grouping))
+	{
+		/*
+		 * don't recurse into Aggrefs, since they see the values prior
+		 * to grouping.
+		 */
+		return node;
+	}
+	return expression_tree_mutator(node, set_group_vars_mutator,
+								   (void *) context);
+}
+
+
 /*
  * set_join_references
  *	  Modify the target list and quals of a join node to reference its
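As a rough model of what the GroupedVar substitution in set_group_vars buys at runtime (a hypothetical Python sketch, not the executor code): for each grouping set, grouped columns pass through unchanged while grouping columns outside the current set come out as NULL:

```python
# Hypothetical model of the runtime effect of GroupedVar: a column that
# belongs to the current grouping set keeps its value; any other grouping
# column is conditionally replaced with NULL (None here).

def project_for_grouping_set(row, grouping_set):
    """row: mapping of column name to value for one input group."""
    return {col: (val if col in grouping_set else None)
            for col, val in row.items()}
```

So for GROUP BY GROUPING SETS ((a), (b)), the output rows for the (a) set carry a NULL in the b column, and vice versa.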
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index 3e7dc85..e0a2ca7 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -336,6 +336,48 @@ replace_outer_agg(PlannerInfo *root, Aggref *agg)
 }
 
 /*
+ * Generate a Param node to replace the given Grouping expression
+ * which is expected to have agglevelsup > 0 (ie, it is not local).
+ */
+static Param *
+replace_outer_grouping(PlannerInfo *root, Grouping *grp)
+{
+	Param	   *retval;
+	PlannerParamItem *pitem;
+	Index		levelsup;
+
+	Assert(grp->agglevelsup > 0 && grp->agglevelsup < root->query_level);
+
+	/* Find the query level the Grouping belongs to */
+	for (levelsup = grp->agglevelsup; levelsup > 0; levelsup--)
+		root = root->parent_root;
+
+	/*
+	 * It does not seem worthwhile to try to match duplicate outer aggs. Just
+	 * make a new slot every time.
+	 */
+	grp = (Grouping *) copyObject(grp);
+	IncrementVarSublevelsUp((Node *) grp, -((int) grp->agglevelsup), 0);
+	Assert(grp->agglevelsup == 0);
+
+	pitem = makeNode(PlannerParamItem);
+	pitem->item = (Node *) grp;
+	pitem->paramId = root->glob->nParamExec++;
+
+	root->plan_params = lappend(root->plan_params, pitem);
+
+	retval = makeNode(Param);
+	retval->paramkind = PARAM_EXEC;
+	retval->paramid = pitem->paramId;
+	retval->paramtype = exprType((Node *) grp);
+	retval->paramtypmod = -1;
+	retval->paramcollid = InvalidOid;
+	retval->location = grp->location;
+
+	return retval;
+}
+
+/*
  * Generate a new Param node that will not conflict with any other.
  *
  * This is used to create Params representing subplan outputs.
@@ -1490,13 +1532,14 @@ simplify_EXISTS_query(Query *query)
 {
 	/*
 	 * We don't try to simplify at all if the query uses set operations,
-	 * aggregates, modifying CTEs, HAVING, LIMIT/OFFSET, or FOR UPDATE/SHARE;
-	 * none of these seem likely in normal usage and their possible effects
-	 * are complex.
+	 * aggregates, grouping sets, modifying CTEs, HAVING, LIMIT/OFFSET, or FOR
+	 * UPDATE/SHARE; none of these seem likely in normal usage and their
+	 * possible effects are complex.
 	 */
 	if (query->commandType != CMD_SELECT ||
 		query->setOperations ||
 		query->hasAggs ||
+		query->groupingSets ||
 		query->hasWindowFuncs ||
 		query->hasModifyingCTE ||
 		query->havingQual ||
@@ -1813,6 +1856,11 @@ replace_correlation_vars_mutator(Node *node, PlannerInfo *root)
 		if (((Aggref *) node)->agglevelsup > 0)
 			return (Node *) replace_outer_agg(root, (Aggref *) node);
 	}
+	if (IsA(node, Grouping))
+	{
+		if (((Grouping *) node)->agglevelsup > 0)
+			return (Node *) replace_outer_grouping(root, (Grouping *) node);
+	}
 	return expression_tree_mutator(node,
 								   replace_correlation_vars_mutator,
 								   (void *) root);

diff --git a/src/backend/optimizer/prep/prepjointree.c b/src/backend/optimizer/prep/prepjointree.c
index 9cb1378..cb8aeb6 100644
--- a/src/backend/optimizer/prep/prepjointree.c
+++ b/src/backend/optimizer/prep/prepjointree.c
@@ -1297,6 +1297,7 @@ is_simple_subquery(Query *subquery, RangeTblEntry *rte,
 	if (subquery->hasAggs ||
 		subquery->hasWindowFuncs ||
 		subquery->groupClause ||
+		subquery->groupingSets ||
 		subquery->havingQual ||
 		subquery->sortClause ||
 		subquery->distinctClause ||
diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c
index 0410fdd..3c71d7f 100644
--- a/src/backend/optimizer/prep/prepunion.c
+++ b/src/backend/optimizer/prep/prepunion.c
@@ -268,13 +268,15 @@ recurse_set_operations(Node *setOp, PlannerInfo *root,
 		 */
 		if (pNumGroups)
 		{
-			if (subquery->groupClause || subquery->distinctClause ||
+			if (subquery->groupClause || subquery->groupingSets ||
+				subquery->distinctClause ||
 				subroot->hasHavingQual || subquery->hasAggs)
 				*pNumGroups = subplan->plan_rows;
 			else
 				*pNumGroups = estimate_num_groups(subroot,
 								get_tlist_exprs(subquery->targetList, false),
-												  subplan->plan_rows);
+												  subplan->plan_rows,
+												  NULL);
 		}
 
 		/*
@@ -771,6 +773,7 @@ make_union_unique(SetOperationStmt *op, Plan *plan,
 								 extract_grouping_cols(groupList,
 													   plan->targetlist),
 								 extract_grouping_ops(groupList),
+								 NIL,
 								 numGroups,
 								 plan);
 		/* Hashed aggregation produces randomly-ordered results */
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 19b5cf7..1152195 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -4294,6 +4294,7 @@ inline_function(Oid funcid, Oid result_type, Oid result_collid,
 		querytree->jointree->fromlist ||
 		querytree->jointree->quals ||
 		querytree->groupClause ||
+		querytree->groupingSets ||
 		querytree->havingQual ||
 		querytree->windowClause ||
 		querytree->distinctClause ||
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index 319e8b2..a7bbacf 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1338,7 +1338,7 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
 	}
 
 	/* Estimate number of output rows */
-	pathnode->path.rows = estimate_num_groups(root, uniq_exprs, rel->rows);
+	pathnode->path.rows = estimate_num_groups(root, uniq_exprs, rel->rows, NULL);
 	numCols = list_length(uniq_exprs);
 
 	if (all_btree)
diff --git a/src/backend/optimizer/util/tlist.c b/src/backend/optimizer/util/tlist.c
index b5c6a44..efed20a 100644
--- a/src/backend/optimizer/util/tlist.c
+++ b/src/backend/optimizer/util/tlist.c
@@ -395,6 +395,28 @@ get_sortgrouplist_exprs(List *sgClauses, List *targetList)
  *****************************************************************************/
 
 /*
+ * get_sortgroupref_clause
+ *		Find the SortGroupClause matching the given SortGroupRef index,
+ *		and return it.
+ */
+SortGroupClause *
+get_sortgroupref_clause(Index sortref, List *clauses)
+{
+	ListCell   *l;
+
+	foreach(l, clauses)
+	{
+		SortGroupClause *cl = (SortGroupClause *) lfirst(l);
+
+		if (cl->tleSortGroupRef == sortref)
+			return cl;
+	}
+
+	elog(ERROR, "ORDER/GROUP BY expression not found in list");
+	return NULL;				/* keep compiler quiet */
+}
+
+/*
  * extract_grouping_ops - make an array of the equality operator OIDs
  *		for a SortGroupClause list
  */
diff --git a/src/backend/optimizer/util/var.c b/src/backend/optimizer/util/var.c
index d4f46b8..c6faf51 100644
--- a/src/backend/optimizer/util/var.c
+++ b/src/backend/optimizer/util/var.c
@@ -564,6 +564,30 @@ pull_var_clause_walker(Node *node, pull_var_clause_context *context)
 				break;
 		}
 	}
+	else if (IsA(node, Grouping))
+	{
+		if (((Grouping *) node)->agglevelsup != 0)
+			elog(ERROR, "Upper-level GROUPING found where not expected");
+		switch (context->aggbehavior)
+		{
+			case PVC_REJECT_AGGREGATES:
+				elog(ERROR, "GROUPING found where not expected");
+				break;
+			case PVC_INCLUDE_AGGREGATES:
+				context->varlist = lappend(context->varlist, node);
+				/* we do NOT descend into the contained expression */
+				return false;
+			case PVC_RECURSE_AGGREGATES:
+				/*
+				 * We do NOT descend into the contained expression,
+				 * even if the caller asked for it, because we never
+				 * actually evaluate it; the result is driven entirely
+				 * by the associated GROUP BY clause, so we never need
+				 * to extract the actual Vars here.
+				 */
+				return false;
+		}
+	}
 	else if (IsA(node, PlaceHolderVar))
 	{
 		if (((PlaceHolderVar *) node)->phlevelsup != 0)
diff --git a/src/backend/parser/analyze.c b/src/backend/parser/analyze.c
index fb6c44c..96ef36c 100644
--- a/src/backend/parser/analyze.c
+++ b/src/backend/parser/analyze.c
@@ -968,6 +968,7 @@ transformSelectStmt(ParseState *pstate, SelectStmt *stmt)
 
 	qry->groupClause = transformGroupClause(pstate,
 											stmt->groupClause,
+											&qry->groupingSets,
 											&qry->targetList,
 											qry->sortClause,
 											EXPR_KIND_GROUP_BY,
@@ -1014,7 +1015,7 @@ transformSelectStmt(ParseState *pstate, SelectStmt *stmt)
 	qry->hasSubLinks = pstate->p_hasSubLinks;
 	qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
 	qry->hasAggs = pstate->p_hasAggs;
-	if (pstate->p_hasAggs || qry->groupClause || qry->havingQual)
+	if (pstate->p_hasAggs || qry->groupClause || qry->groupingSets || qry->havingQual)
 		parseCheckAggregates(pstate, qry);
 
 	foreach(l, stmt->lockingClause)
@@ -1474,7 +1475,7 @@ transformSetOperationStmt(ParseState *pstate, SelectStmt *stmt)
 	qry->hasSubLinks = pstate->p_hasSubLinks;
 	qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
 	qry->hasAggs = pstate->p_hasAggs;
-	if (pstate->p_hasAggs || qry->groupClause || qry->havingQual)
+	if (pstate->p_hasAggs || qry->groupClause || qry->groupingSets || qry->havingQual)
 		parseCheckAggregates(pstate, qry);
 
 	foreach(l, lockingClause)
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 6f4d645..493c30f 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -361,6 +361,10 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				create_generic_options alter_generic_options
 				relation_expr_list dostmt_opt_list
 
+%type <list>	group_by_list
+%type <node>	group_by_item empty_grouping_set rollup_clause cube_clause
+%type <node>	grouping_sets_clause
+
 %type <list>	opt_fdw_options fdw_options
 %type <defelt>	fdw_option
 
@@ -425,7 +429,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <list>	ExclusionConstraintList ExclusionConstraintElem
 %type <list>	func_arg_list
 %type <node>	func_arg_expr
-%type <list>	row type_list array_expr_list
+%type <list>	row explicit_row implicit_row type_list array_expr_list
 %type <node>	case_expr case_arg when_clause case_default
 %type <list>	when_clause_list
 %type <ival>	sub_type
@@ -547,7 +551,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	CLUSTER COALESCE COLLATE COLLATION COLUMN COMMENT COMMENTS COMMIT
 	COMMITTED CONCURRENTLY CONFIGURATION CONNECTION CONSTRAINT CONSTRAINTS
 	CONTENT_P CONTINUE_P CONVERSION_P COPY COST CREATE
-	CROSS CSV CURRENT_P
+	CROSS CSV CUBE CURRENT_P
 	CURRENT_CATALOG CURRENT_DATE CURRENT_ROLE CURRENT_SCHEMA
 	CURRENT_TIME CURRENT_TIMESTAMP CURRENT_USER CURSOR CYCLE
 
@@ -562,7 +566,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	FALSE_P FAMILY FETCH FILTER FIRST_P FLOAT_P FOLLOWING FOR
 	FORCE FOREIGN FORWARD FREEZE FROM FULL FUNCTION FUNCTIONS
 
-	GLOBAL GRANT GRANTED GREATEST GROUP_P
+	GLOBAL GRANT GRANTED GREATEST GROUP_P GROUPING
 
 	HANDLER HAVING HEADER_P HOLD HOUR_P
 
@@ -596,11 +600,11 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 	RANGE READ REAL REASSIGN RECHECK RECURSIVE REF REFERENCES REFRESH REINDEX
 	RELATIVE_P RELEASE RENAME REPEATABLE REPLACE REPLICA
-	RESET RESTART RESTRICT RETURNING RETURNS REVOKE RIGHT ROLE ROLLBACK
+	RESET RESTART RESTRICT RETURNING RETURNS REVOKE RIGHT ROLE ROLLBACK ROLLUP
 	ROW ROWS RULE
 
 	SAVEPOINT SCHEMA SCROLL SEARCH SECOND_P SECURITY SELECT SEQUENCE SEQUENCES
-	SERIALIZABLE SERVER SESSION SESSION_USER SET SETOF SHARE
+	SERIALIZABLE SERVER SESSION SESSION_USER SET SETS SETOF SHARE
 	SHOW SIMILAR SIMPLE SMALLINT SNAPSHOT SOME STABLE STANDALONE_P START
 	STATEMENT STATISTICS STDIN STDOUT STORAGE STRICT_P STRIP_P SUBSTRING
 	SYMMETRIC SYSID SYSTEM_P
@@ -9826,11 +9830,73 @@ first_or_next: FIRST_P								{ $$ = 0; }
 		;
 
 
+/*
+ * This syntax for group_clause tries to follow the spec quite closely.
+ * However, the spec allows only column references, not expressions,
+ * which introduces an ambiguity between implicit row constructors
+ * (a,b) and lists of column references.
+ *
+ * We handle this by using the a_expr production for what the spec calls
+ * <ordinary grouping set>, which in the spec represents either one column
+ * reference or a parenthesized list of column references. Then, we check the
+ * top node of the a_expr to see if it's an implicit RowExpr, and if so, just
+ * grab and use the list, discarding the node. (This is done in parse
+ * analysis, not here.)
+ *
+ * (We abuse the row_format field of RowExpr to distinguish implicit and
+ * explicit row constructors; it's debatable if anyone sanely wants to use them
+ * in a group clause, but if they have a reason to, we make it possible.)
+ *
+ * Each item in the group_clause list is either an expression tree or a
+ * GroupingSet node of some type.
+ */
+
 group_clause:
-			GROUP_P BY expr_list					{ $$ = $3; }
+			GROUP_P BY group_by_list				{ $$ = $3; }
 			| /*EMPTY*/								{ $$ = NIL; }
 		;
 
+group_by_list:
+			group_by_item							{ $$ = list_make1($1); }
+			| group_by_list ',' group_by_item		{ $$ = lappend($1,$3); }
+		;
+
+group_by_item:
+			a_expr									{ $$ = $1; }
+			| empty_grouping_set					{ $$ = $1; }
+			| cube_clause							{ $$ = $1; }
+			| rollup_clause							{ $$ = $1; }
+			| grouping_sets_clause					{ $$ = $1; }
+		;
+
+empty_grouping_set:
+			'(' ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_EMPTY, NIL, @1);
+				}
+		;
+
+rollup_clause:
+			ROLLUP '(' expr_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_ROLLUP, $3, @1);
+				}
+		;
+
+cube_clause:
+			CUBE '(' expr_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_CUBE, $3, @1);
+				}
+		;
+
+grouping_sets_clause:
+			GROUPING SETS '(' group_by_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_SETS, $4, @1);
+				}
+		;
+
 having_clause:
 			HAVING a_expr							{ $$ = $2; }
 			| /*EMPTY*/								{ $$ = NULL; }
@@ -11409,15 +11475,33 @@ c_expr:		columnref								{ $$ = $1; }
 					n->location = @1;
 					$$ = (Node *)n;
 				}
-			| row
+			| explicit_row
 				{
 					RowExpr *r = makeNode(RowExpr);
 					r->args = $1;
 					r->row_typeid = InvalidOid;	/* not analyzed yet */
 					r->colnames = NIL;	/* to be filled in during analysis */
+					r->row_format = COERCE_EXPLICIT_CALL; /* abuse */
 					r->location = @1;
 					$$ = (Node *)r;
 				}
+			| implicit_row
+				{
+					RowExpr *r = makeNode(RowExpr);
+					r->args = $1;
+					r->row_typeid = InvalidOid;	/* not analyzed yet */
+					r->colnames = NIL;	/* to be filled in during analysis */
+					r->row_format = COERCE_IMPLICIT_CAST; /* abuse */
+					r->location = @1;
+					$$ = (Node *)r;
+				}
+			| GROUPING '(' expr_list ')'
+				{
+					Grouping *g = makeNode(Grouping);
+					g->args = $3;
+					g->location = @1;
+					$$ = (Node *)g;
+				}
 		;
 
 func_application: func_name '(' ')'
@@ -12167,6 +12251,13 @@ row:		ROW '(' expr_list ')'					{ $$ = $3; }
 			| '(' expr_list ',' a_expr ')'			{ $$ = lappend($2, $4); }
 		;
 
+explicit_row:	ROW '(' expr_list ')'				{ $$ = $3; }
+			| ROW '(' ')'							{ $$ = NIL; }
+		;
+
+implicit_row:	'(' expr_list ',' a_expr ')'		{ $$ = lappend($2, $4); }
+		;
+
 sub_type:	ANY										{ $$ = ANY_SUBLINK; }
 			| SOME									{ $$ = ANY_SUBLINK; }
 			| ALL									{ $$ = ALL_SUBLINK; }
@@ -13066,6 +13157,7 @@ unreserved_keyword:
 			| SERVER
 			| SESSION
 			| SET
+			| SETS
 			| SHARE
 			| SHOW
 			| SIMPLE
@@ -13142,12 +13234,14 @@ col_name_keyword:
 			| CHAR_P
 			| CHARACTER
 			| COALESCE
+			| CUBE
 			| DEC
 			| DECIMAL_P
 			| EXISTS
 			| EXTRACT
 			| FLOAT_P
 			| GREATEST
+			| GROUPING
 			| INOUT
 			| INT_P
 			| INTEGER
@@ -13163,6 +13257,7 @@ col_name_keyword:
 			| POSITION
 			| PRECISION
 			| REAL
+			| ROLLUP
 			| ROW
 			| SETOF
 			| SMALLINT
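The GROUPING '(' expr_list ')' production above feeds parse analysis and, eventually, the computation of the GROUPING() result. Per the spec, GROUPING() returns an integer with one bit per argument, the leftmost argument in the most significant position, and a bit set to 1 when its argument is not part of the current grouping set. A Python sketch of that rule (illustrative; the function name is an assumption):

```python
# Model of the GROUPING() result value: build the bitmask left to right,
# shifting in a 1-bit for each argument absent from the current grouping set.

def grouping_value(args, grouping_set):
    """args: the GROUPING() argument list, leftmost first.
    grouping_set: the columns grouped on in the current set."""
    assert len(args) < 32          # mirrors the patch's 32-argument limit
    result = 0
    for arg in args:
        result = (result << 1) | (0 if arg in grouping_set else 1)
    return result
```

For example, with GROUP BY GROUPING SETS ((a), ()), GROUPING(a, b) yields 1 for the (a) set (b ungrouped) and 3 for the empty set (both ungrouped).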
diff --git a/src/backend/parser/parse_agg.c b/src/backend/parser/parse_agg.c
index c984b7d..1c2aca1 100644
--- a/src/backend/parser/parse_agg.c
+++ b/src/backend/parser/parse_agg.c
@@ -42,7 +42,9 @@ typedef struct
 {
 	ParseState *pstate;
 	Query	   *qry;
+	PlannerInfo *root;
 	List	   *groupClauses;
+	List	   *groupClauseCommonVars;
 	bool		have_non_var_grouping;
 	List	  **func_grouped_rels;
 	int			sublevels_up;
@@ -56,11 +58,18 @@ static int check_agg_arguments(ParseState *pstate,
 static bool check_agg_arguments_walker(Node *node,
 						   check_agg_arguments_context *context);
 static void check_ungrouped_columns(Node *node, ParseState *pstate, Query *qry,
-						List *groupClauses, bool have_non_var_grouping,
+						List *groupClauses, List *groupClauseVars,
+						bool have_non_var_grouping,
 						List **func_grouped_rels);
 static bool check_ungrouped_columns_walker(Node *node,
 							   check_ungrouped_columns_context *context);
-
+static void finalize_grouping_exprs(Node *node, ParseState *pstate, Query *qry,
+									List *groupClauses, PlannerInfo *root,
+									bool have_non_var_grouping);
+static bool finalize_grouping_exprs_walker(Node *node,
+							   check_ungrouped_columns_context *context);
+static void check_agglevels_and_constraints(ParseState *pstate, Node *expr);
+static List *expand_groupingset_node(GroupingSet *gs);
 
 /*
  * transformAggregateCall -
@@ -96,10 +105,7 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 	List	   *tdistinct = NIL;
 	AttrNumber	attno = 1;
 	int			save_next_resno;
-	int			min_varlevel;
 	ListCell   *lc;
-	const char *err;
-	bool		errkind;
 
 	if (AGGKIND_IS_ORDERED_SET(agg->aggkind))
 	{
@@ -214,15 +220,96 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 	agg->aggorder = torder;
 	agg->aggdistinct = tdistinct;
 
+	check_agglevels_and_constraints(pstate, (Node *) agg);
+}
+
+/*
+ * transformGroupingExpr
+ *		Transform a GROUPING expression.
+ *
+ * GROUPING() behaves very much like an aggregate.  Processing of levels and nesting
+ * is done as for aggregates.  We set p_hasAggs for these expressions too.
+ */
+Node *
+transformGroupingExpr(ParseState *pstate, Grouping *p)
+{
+	ListCell   *lc;
+	List	   *args = p->args;
+	List	   *result_list = NIL;
+	Grouping   *result = makeNode(Grouping);
+
+	if (list_length(args) > 31)
+		ereport(ERROR,
+				(errcode(ERRCODE_TOO_MANY_ARGUMENTS),
+				 errmsg("GROUPING must have fewer than 32 arguments"),
+				 parser_errposition(pstate, p->location)));
+
+	foreach(lc, args)
+	{
+		Node *current_result;
+
+		current_result = transformExpr(pstate, (Node *) lfirst(lc),
+									   pstate->p_expr_kind);
+
+		/* acceptability of expressions is checked later */
+
+		result_list = lappend(result_list, current_result);
+	}
+
+	result->args = result_list;
+	result->location = p->location;
+
+	check_agglevels_and_constraints(pstate, (Node *) result);
+
+	return (Node *) result;
+}
+
+/*
+ * Aggregate functions and grouping operations (which are combined in the spec
+ * as <set function specification>) are very similar with regard to level and
+ * nesting restrictions (though we allow a lot more things than the spec does).
+ * Centralise those restrictions here.
+ */
+static void
+check_agglevels_and_constraints(ParseState *pstate, Node *expr)
+{
+	List	   *directargs = NIL;
+	List	   *args = NIL;
+	Expr	   *filter = NULL;
+	int			min_varlevel;
+	int			location = -1;
+	Index	   *p_levelsup;
+	const char *err;
+	bool		errkind;
+	bool		isAgg = IsA(expr, Aggref);
+
+	if (isAgg)
+	{
+		Aggref *agg = (Aggref *) expr;
+
+		directargs = agg->aggdirectargs;
+		args = agg->args;
+		filter = agg->aggfilter;
+		location = agg->location;
+		p_levelsup = &agg->agglevelsup;
+	}
+	else
+	{
+		Grouping *grp = (Grouping *) expr;
+
+		args = grp->args;
+		location = grp->location;
+		p_levelsup = &grp->agglevelsup;
+	}
+
 	/*
 	 * Check the arguments to compute the aggregate's level and detect
 	 * improper nesting.
 	 */
 	min_varlevel = check_agg_arguments(pstate,
-									   agg->aggdirectargs,
-									   agg->args,
-									   agg->aggfilter);
-	agg->agglevelsup = min_varlevel;
+									   directargs,
+									   args,
+									   filter);
+
+	*p_levelsup = min_varlevel;
 
 	/* Mark the correct pstate level as having aggregates */
 	while (min_varlevel-- > 0)
@@ -247,20 +334,32 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			Assert(false);		/* can't happen */
 			break;
 		case EXPR_KIND_OTHER:
-			/* Accept aggregate here; caller must throw error if wanted */
+			/* Accept aggregate/grouping here; caller must throw error if wanted */
 			break;
 		case EXPR_KIND_JOIN_ON:
 		case EXPR_KIND_JOIN_USING:
-			err = _("aggregate functions are not allowed in JOIN conditions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in JOIN conditions");
+			else
+				err = _("grouping operations are not allowed in JOIN conditions");
+
 			break;
 		case EXPR_KIND_FROM_SUBSELECT:
 			/* Should only be possible in a LATERAL subquery */
 			Assert(pstate->p_lateral_active);
-			/* Aggregate scope rules make it worth being explicit here */
-			err = _("aggregate functions are not allowed in FROM clause of their own query level");
+			/* Aggregate/grouping scope rules make it worth being explicit here */
+			if (isAgg)
+				err = _("aggregate functions are not allowed in FROM clause of their own query level");
+			else
+				err = _("grouping operations are not allowed in FROM clause of their own query level");
+
 			break;
 		case EXPR_KIND_FROM_FUNCTION:
-			err = _("aggregate functions are not allowed in functions in FROM");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in functions in FROM");
+			else
+				err = _("grouping operations are not allowed in functions in FROM");
+
 			break;
 		case EXPR_KIND_WHERE:
 			errkind = true;
@@ -278,10 +377,18 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			/* okay */
 			break;
 		case EXPR_KIND_WINDOW_FRAME_RANGE:
-			err = _("aggregate functions are not allowed in window RANGE");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in window RANGE");
+			else
+				err = _("grouping operations are not allowed in window RANGE");
+
 			break;
 		case EXPR_KIND_WINDOW_FRAME_ROWS:
-			err = _("aggregate functions are not allowed in window ROWS");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in window ROWS");
+			else
+				err = _("grouping operations are not allowed in window ROWS");
+
 			break;
 		case EXPR_KIND_SELECT_TARGET:
 			/* okay */
@@ -312,26 +419,55 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			break;
 		case EXPR_KIND_CHECK_CONSTRAINT:
 		case EXPR_KIND_DOMAIN_CHECK:
-			err = _("aggregate functions are not allowed in check constraints");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in check constraints");
+			else
+				err = _("grouping operations are not allowed in check constraints");
+
 			break;
 		case EXPR_KIND_COLUMN_DEFAULT:
 		case EXPR_KIND_FUNCTION_DEFAULT:
-			err = _("aggregate functions are not allowed in DEFAULT expressions");
+
+			if (isAgg)
+				err = _("aggregate functions are not allowed in DEFAULT expressions");
+			else
+				err = _("grouping operations are not allowed in DEFAULT expressions");
+
 			break;
 		case EXPR_KIND_INDEX_EXPRESSION:
-			err = _("aggregate functions are not allowed in index expressions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in index expressions");
+			else
+				err = _("grouping operations are not allowed in index expressions");
+
 			break;
 		case EXPR_KIND_INDEX_PREDICATE:
-			err = _("aggregate functions are not allowed in index predicates");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in index predicates");
+			else
+				err = _("grouping operations are not allowed in index predicates");
+
 			break;
 		case EXPR_KIND_ALTER_COL_TRANSFORM:
-			err = _("aggregate functions are not allowed in transform expressions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in transform expressions");
+			else
+				err = _("grouping operations are not allowed in transform expressions");
+
 			break;
 		case EXPR_KIND_EXECUTE_PARAMETER:
-			err = _("aggregate functions are not allowed in EXECUTE parameters");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in EXECUTE parameters");
+			else
+				err = _("grouping operations are not allowed in EXECUTE parameters");
+
 			break;
 		case EXPR_KIND_TRIGGER_WHEN:
-			err = _("aggregate functions are not allowed in trigger WHEN conditions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in trigger WHEN conditions");
+			else
+				err = _("grouping operations are not allowed in trigger WHEN conditions");
+
 			break;
 
 			/*
@@ -342,18 +478,22 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			 * which is sane anyway.
 			 */
 	}
+
 	if (err)
 		ereport(ERROR,
 				(errcode(ERRCODE_GROUPING_ERROR),
 				 errmsg_internal("%s", err),
-				 parser_errposition(pstate, agg->location)));
+				 parser_errposition(pstate, location)));
+
 	if (errkind)
 		ereport(ERROR,
 				(errcode(ERRCODE_GROUPING_ERROR),
-		/* translator: %s is name of a SQL construct, eg GROUP BY */
-				 errmsg("aggregate functions are not allowed in %s",
+				 /* translator: %s is name of a SQL construct, eg GROUP BY */
+				 errmsg(isAgg
+						? "aggregate functions are not allowed in %s"
+						: "grouping operations are not allowed in %s",
 						ParseExprKindName(pstate->p_expr_kind)),
-				 parser_errposition(pstate, agg->location)));
+				 parser_errposition(pstate, location)));
 }
 
 /*
@@ -507,6 +647,21 @@ check_agg_arguments_walker(Node *node,
 		/* no need to examine args of the inner aggregate */
 		return false;
 	}
+	if (IsA(node, Grouping))
+	{
+		int			agglevelsup = ((Grouping *) node)->agglevelsup;
+
+		/* convert levelsup to frame of reference of original query */
+		agglevelsup -= context->sublevels_up;
+		/* ignore local GROUPING exprs of subqueries */
+		if (agglevelsup >= 0)
+		{
+			if (context->min_agglevel < 0 ||
+				context->min_agglevel > agglevelsup)
+				context->min_agglevel = agglevelsup;
+		}
+		/* Continue and descend into subtree */
+	}
 	/* We can throw error on sight for a window function */
 	if (IsA(node, WindowFunc))
 		ereport(ERROR,
@@ -527,6 +682,7 @@ check_agg_arguments_walker(Node *node,
 		context->sublevels_up--;
 		return result;
 	}
+
 	return expression_tree_walker(node,
 								  check_agg_arguments_walker,
 								  (void *) context);
@@ -770,17 +926,57 @@ transformWindowFuncCall(ParseState *pstate, WindowFunc *wfunc,
 void
 parseCheckAggregates(ParseState *pstate, Query *qry)
 {
+	List       *gset_common = NIL;
 	List	   *groupClauses = NIL;
+	List	   *groupClauseCommonVars = NIL;
 	bool		have_non_var_grouping;
 	List	   *func_grouped_rels = NIL;
 	ListCell   *l;
 	bool		hasJoinRTEs;
 	bool		hasSelfRefRTEs;
-	PlannerInfo *root;
+	PlannerInfo *root = NULL;
 	Node	   *clause;
 
 	/* This should only be called if we found aggregates or grouping */
-	Assert(pstate->p_hasAggs || qry->groupClause || qry->havingQual);
+	Assert(pstate->p_hasAggs || qry->groupClause || qry->havingQual || qry->groupingSets);
+
+	/*
+	 * If we have grouping sets, expand them and find the intersection of all
+	 * sets.
+	 */
+	if (qry->groupingSets)
+	{
+		/*
+		 * The limit of 4096 is arbitrary and exists simply to avoid resource
+		 * issues from pathological constructs.
+		 */
+		List	   *gsets = expand_grouping_sets(qry->groupingSets, 4096);
+
+		if (!gsets)
+			ereport(ERROR,
+					(errcode(ERRCODE_STATEMENT_TOO_COMPLEX),
+					 errmsg("too many grouping sets present (maximum 4096)"),
+					 parser_errposition(pstate,
+										qry->groupClause
+										? exprLocation((Node *) qry->groupClause)
+										: exprLocation((Node *) qry->groupingSets))));
+
+		/*
+		 * The intersection will often be empty, so help things along by
+		 * seeding it with the smallest set.
+		 */
+		gset_common = llast(gsets);
+
+		if (gset_common)
+		{
+			foreach(l, gsets)
+			{
+				gset_common = list_intersection_int(gset_common, lfirst(l));
+				if (!gset_common)
+					break;
+			}
+		}
+	}
 
 	/*
 	 * Scan the range table to see if there are JOIN or self-reference CTE
@@ -800,15 +996,19 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 	/*
 	 * Build a list of the acceptable GROUP BY expressions for use by
 	 * check_ungrouped_columns().
+	 *
+	 * We get the TLE, not just the expr, because GROUPING wants to know
+	 * the sortgroupref.
 	 */
 	foreach(l, qry->groupClause)
 	{
 		SortGroupClause *grpcl = (SortGroupClause *) lfirst(l);
-		Node	   *expr;
+		TargetEntry	   *expr;
 
-		expr = get_sortgroupclause_expr(grpcl, qry->targetList);
+		expr = get_sortgroupclause_tle(grpcl, qry->targetList);
 		if (expr == NULL)
 			continue;			/* probably cannot happen */
+
 		groupClauses = lcons(expr, groupClauses);
 	}
 
@@ -830,21 +1030,28 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 		groupClauses = (List *) flatten_join_alias_vars(root,
 													  (Node *) groupClauses);
 	}
-	else
-		root = NULL;			/* keep compiler quiet */
 
 	/*
 	 * Detect whether any of the grouping expressions aren't simple Vars; if
 	 * they're all Vars then we don't have to work so hard in the recursive
 	 * scans.  (Note we have to flatten aliases before this.)
+	 *
+	 * Track Vars that are included in all grouping sets separately in
+	 * groupClauseCommonVars, since these are the only ones we can use to check
+	 * for functional dependencies.
 	 */
 	have_non_var_grouping = false;
 	foreach(l, groupClauses)
 	{
-		if (!IsA((Node *) lfirst(l), Var))
+		TargetEntry *tle = lfirst(l);
+
+		if (!IsA(tle->expr, Var))
 		{
 			have_non_var_grouping = true;
-			break;
+		}
+		else if (!qry->groupingSets ||
+				 list_member_int(gset_common, tle->ressortgroupref))
+		{
+			groupClauseCommonVars = lappend(groupClauseCommonVars, tle->expr);
 		}
 	}
 
@@ -855,19 +1062,30 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 	 * this will also find ungrouped variables that came from ORDER BY and
 	 * WINDOW clauses.  For that matter, it's also going to examine the
 	 * grouping expressions themselves --- but they'll all pass the test ...
+	 *
+	 * We also finalize GROUPING expressions, but for that we need to traverse
+	 * the original (unflattened) clause in order to modify nodes.
 	 */
 	clause = (Node *) qry->targetList;
+	finalize_grouping_exprs(clause, pstate, qry,
+							groupClauses, root,
+							have_non_var_grouping);
 	if (hasJoinRTEs)
 		clause = flatten_join_alias_vars(root, clause);
 	check_ungrouped_columns(clause, pstate, qry,
-							groupClauses, have_non_var_grouping,
+							groupClauses, groupClauseCommonVars,
+							have_non_var_grouping,
 							&func_grouped_rels);
 
 	clause = (Node *) qry->havingQual;
+	finalize_grouping_exprs(clause, pstate, qry,
+							groupClauses, root,
+							have_non_var_grouping);
 	if (hasJoinRTEs)
 		clause = flatten_join_alias_vars(root, clause);
 	check_ungrouped_columns(clause, pstate, qry,
-							groupClauses, have_non_var_grouping,
+							groupClauses, groupClauseCommonVars,
+							have_non_var_grouping,
 							&func_grouped_rels);
 
 	/*
@@ -904,14 +1122,17 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
  */
 static void
 check_ungrouped_columns(Node *node, ParseState *pstate, Query *qry,
-						List *groupClauses, bool have_non_var_grouping,
+						List *groupClauses, List *groupClauseCommonVars,
+						bool have_non_var_grouping,
 						List **func_grouped_rels)
 {
 	check_ungrouped_columns_context context;
 
 	context.pstate = pstate;
 	context.qry = qry;
+	context.root = NULL;
 	context.groupClauses = groupClauses;
+	context.groupClauseCommonVars = groupClauseCommonVars;
 	context.have_non_var_grouping = have_non_var_grouping;
 	context.func_grouped_rels = func_grouped_rels;
 	context.sublevels_up = 0;
@@ -965,6 +1186,16 @@ check_ungrouped_columns_walker(Node *node,
 			return false;
 	}
 
+	if (IsA(node, Grouping))
+	{
+		Grouping   *grp = (Grouping *) node;
+
+		/*
+		 * GROUPING arguments were already validated by
+		 * finalize_grouping_exprs, so no need to recheck them at this level.
+		 */
+		if ((int) grp->agglevelsup >= context->sublevels_up)
+			return false;
+	}
+
 	/*
 	 * If we have any GROUP BY items that are not simple Vars, check to see if
 	 * subexpression as a whole matches any GROUP BY item. We need to do this
@@ -976,7 +1207,9 @@ check_ungrouped_columns_walker(Node *node,
 	{
 		foreach(gl, context->groupClauses)
 		{
-			if (equal(node, lfirst(gl)))
+			TargetEntry *tle = lfirst(gl);
+
+			if (equal(node, tle->expr))
 				return false;	/* acceptable, do not descend more */
 		}
 	}
@@ -1003,13 +1236,15 @@ check_ungrouped_columns_walker(Node *node,
 		{
 			foreach(gl, context->groupClauses)
 			{
-				Var		   *gvar = (Var *) lfirst(gl);
+				Var		   *gvar = (Var *) ((TargetEntry *) lfirst(gl))->expr;
 
 				if (IsA(gvar, Var) &&
 					gvar->varno == var->varno &&
 					gvar->varattno == var->varattno &&
 					gvar->varlevelsup == 0)
+				{
 					return false;		/* acceptable, we're okay */
+				}
 			}
 		}
 
@@ -1040,7 +1275,7 @@ check_ungrouped_columns_walker(Node *node,
 			if (check_functional_grouping(rte->relid,
 										  var->varno,
 										  0,
-										  context->groupClauses,
+										  context->groupClauseCommonVars,
 										  &context->qry->constraintDeps))
 			{
 				*context->func_grouped_rels =
@@ -1085,6 +1320,396 @@ check_ungrouped_columns_walker(Node *node,
 }
 
 /*
+ * finalize_grouping_exprs -
+ *	  Scan the given expression tree for GROUPING() and related calls,
+ *	  and validate and process their arguments.
+ *
+ * This is split out from check_ungrouped_columns above because it needs
+ * to modify the nodes (which it does in-place, not via a mutator) while
+ * check_ungrouped_columns may see only a copy of the original thanks to
+ * flattening of join alias vars. So here, we flatten each individual
+ * GROUPING argument as we see it before comparing it.
+ */
+static void
+finalize_grouping_exprs(Node *node, ParseState *pstate, Query *qry,
+						List *groupClauses, PlannerInfo *root,
+						bool have_non_var_grouping)
+{
+	check_ungrouped_columns_context context;
+
+	context.pstate = pstate;
+	context.qry = qry;
+	context.root = root;
+	context.groupClauses = groupClauses;
+	context.groupClauseCommonVars = NIL;
+	context.have_non_var_grouping = have_non_var_grouping;
+	context.func_grouped_rels = NULL;
+	context.sublevels_up = 0;
+	context.in_agg_direct_args = false;
+	finalize_grouping_exprs_walker(node, &context);
+}
+
+static bool
+finalize_grouping_exprs_walker(Node *node,
+							   check_ungrouped_columns_context *context)
+{
+	ListCell   *gl;
+
+	if (node == NULL)
+		return false;
+	if (IsA(node, Const) ||
+		IsA(node, Param))
+		return false;			/* constants are always acceptable */
+
+	if (IsA(node, Aggref))
+	{
+		Aggref	   *agg = (Aggref *) node;
+
+		if ((int) agg->agglevelsup == context->sublevels_up)
+		{
+			/*
+			 * If we find an aggregate call of the original level, do not
+			 * recurse into its normal arguments, ORDER BY arguments, or
+			 * filter; GROUPING exprs of this level are not allowed there. But
+			 * check direct arguments as though they weren't in an aggregate.
+			 */
+			bool		result;
+
+			Assert(!context->in_agg_direct_args);
+			context->in_agg_direct_args = true;
+			result = finalize_grouping_exprs_walker((Node *) agg->aggdirectargs,
+													context);
+			context->in_agg_direct_args = false;
+			return result;
+		}
+
+		/*
+		 * We can skip recursing into aggregates of higher levels altogether,
+		 * since they could not possibly contain exprs of concern to us (see
+		 * transformAggregateCall).  We do need to look at aggregates of lower
+		 * levels, however.
+		 */
+		if ((int) agg->agglevelsup > context->sublevels_up)
+			return false;
+	}
+
+	if (IsA(node, Grouping))
+	{
+		Grouping *grp = (Grouping *) node;
+
+		/*
+		 * We only need to check Grouping nodes at the exact level to which
+		 * they belong, since they cannot mix levels in arguments.
+		 */
+
+		if ((int) grp->agglevelsup == context->sublevels_up)
+		{
+			ListCell   *lc;
+			List	   *ref_list = NIL;
+
+			foreach(lc, grp->args)
+			{
+				Node   *expr = lfirst(lc);
+				Index	ref = 0;
+
+				if (context->root)
+					expr = flatten_join_alias_vars(context->root, expr);
+
+				/*
+				 * Each expression must match a grouping entry at the current
+				 * query level. Unlike the general expression case, we don't
+				 * allow functional dependencies or outer references.
+				 */
+
+				if (IsA(expr, Var))
+				{
+					Var *var = (Var *) expr;
+
+					if (var->varlevelsup == context->sublevels_up)
+					{
+						foreach(gl, context->groupClauses)
+						{
+							TargetEntry *tle = lfirst(gl);
+							Var		   *gvar = (Var *) tle->expr;
+
+							if (IsA(gvar, Var) &&
+								gvar->varno == var->varno &&
+								gvar->varattno == var->varattno &&
+								gvar->varlevelsup == 0)
+							{
+								ref = tle->ressortgroupref;
+								break;
+							}
+						}
+					}
+				}
+				else if (context->have_non_var_grouping &&
+						 context->sublevels_up == 0)
+				{
+					foreach(gl, context->groupClauses)
+					{
+						TargetEntry *tle = lfirst(gl);
+
+						if (equal(expr, tle->expr))
+						{
+							ref = tle->ressortgroupref;
+							break;
+						}
+					}
+				}
+
+				if (ref == 0)
+					ereport(ERROR,
+							(errcode(ERRCODE_GROUPING_ERROR),
+							 errmsg("arguments to GROUPING must be grouping expressions of the associated query level"),
+							 parser_errposition(context->pstate,
+												exprLocation(expr))));
+
+				ref_list = lappend_int(ref_list, ref);
+			}
+
+			grp->refs = ref_list;
+		}
+
+		if ((int) grp->agglevelsup > context->sublevels_up)
+			return false;
+	}
+
+	if (IsA(node, Query))
+	{
+		/* Recurse into subselects */
+		bool		result;
+
+		context->sublevels_up++;
+		result = query_tree_walker((Query *) node,
+								   finalize_grouping_exprs_walker,
+								   (void *) context,
+								   0);
+		context->sublevels_up--;
+		return result;
+	}
+	return expression_tree_walker(node, finalize_grouping_exprs_walker,
+								  (void *) context);
+}
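The refs list collected above is what the executor later uses to evaluate GROUPING(). For reference, the spec defines GROUPING(e1, ..., en) as a bitmask whose most significant bit corresponds to e1, with a bit set when that argument is not grouped in the current grouping set. A minimal standalone sketch of that arithmetic (a hypothetical helper, not part of this patch; it assumes the per-argument grouped/ungrouped flags have already been resolved from the refs):

```c
/*
 * Sketch of the spec-defined GROUPING() value: for arguments e1..en,
 * bit (n-1-k) of the result is 1 when e(k+1) is NOT grouped in the
 * current grouping set. Hypothetical helper, not part of the patch.
 */
int
grouping_value(const int grouped[], int nargs)
{
	int			result = 0;
	int			i;

	for (i = 0; i < nargs; i++)
		result = (result << 1) | (grouped[i] ? 0 : 1);

	return result;
}
```

So for GROUPING(a, b) in the grouping set (a), only b is ungrouped and the result is 1; in the empty set both bits are set and the result is 3.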
+
+
+/*
+ * Given a GroupingSet node, expand it and return a list of lists.
+ *
+ * For EMPTY nodes, return a list of one empty list.
+ *
+ * For SIMPLE nodes, return a list of one list, which is the node content.
+ *
+ * For CUBE and ROLLUP nodes, return a list of the expansions.
+ *
+ * For SET nodes, recursively expand contained CUBE and ROLLUP.
+ */
+static List *
+expand_groupingset_node(GroupingSet *gs)
+{
+	List	   *result = NIL;
+
+	switch (gs->kind)
+	{
+		case GROUPING_SET_EMPTY:
+			result = list_make1(NIL);
+			break;
+
+		case GROUPING_SET_SIMPLE:
+			result = list_make1(gs->content);
+			break;
+
+		case GROUPING_SET_ROLLUP:
+			{
+				List	   *rollup_val = gs->content;
+				ListCell   *lc;
+				int			curgroup_size = list_length(gs->content);
+
+				while (curgroup_size > 0)
+				{
+					List   *current_result = NIL;
+					int		i = curgroup_size;
+
+					foreach(lc, rollup_val)
+					{
+						GroupingSet *gs_current = (GroupingSet *) lfirst(lc);
+
+						Assert(gs_current->kind == GROUPING_SET_SIMPLE);
+
+						current_result
+							= list_concat(current_result,
+										  list_copy(gs_current->content));
+
+						/* If we are done with making the current group, break */
+						if (--i == 0)
+							break;
+					}
+
+					result = lappend(result, current_result);
+					--curgroup_size;
+				}
+
+				result = lappend(result, NIL);
+			}
+			break;
+
+		case GROUPING_SET_CUBE:
+			{
+				List   *cube_list = gs->content;
+				int		number_bits = list_length(cube_list);
+				uint32	num_sets;
+				uint32	i;
+
+				/* parser should cap this much lower */
+				Assert(number_bits < 31);
+
+				num_sets = (1U << number_bits);
+
+				for (i = 0; i < num_sets; i++)
+				{
+					List *current_result = NIL;
+					ListCell *lc;
+					uint32 mask = 1U;
+
+					foreach(lc, cube_list)
+					{
+						GroupingSet *gs_current = (GroupingSet *) lfirst(lc);
+
+						Assert(gs_current->kind == GROUPING_SET_SIMPLE);
+
+						if (mask & i)
+						{
+							current_result
+								= list_concat(current_result,
+											  list_copy(gs_current->content));
+						}
+
+						mask <<= 1;
+					}
+
+					result = lappend(result, current_result);
+				}
+			}
+			break;
+
+		case GROUPING_SET_SETS:
+			{
+				ListCell   *lc;
+
+				foreach(lc, gs->content)
+				{
+					List *current_result = expand_groupingset_node(lfirst(lc));
+
+					result = list_concat(result, current_result);
+				}
+			}
+			break;
+	}
+
+	return result;
+}
+
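The two non-trivial cases above have simple closed forms: ROLLUP(e1..en) expands to the n+1 prefixes, longest first, ending with the empty set, and CUBE(e1..en) enumerates all 2^n subsets with a bitmask. A standalone sketch of those shapes, using plain int arrays in place of PostgreSQL's List (both helpers are hypothetical, added only to illustrate the loops above):

```c
/*
 * ROLLUP prefix expansion: write the sizes of the n+1 result sets,
 * longest prefix first, mirroring the while loop over curgroup_size.
 */
int
rollup_expand_sizes(int n, int sizes[])
{
	int			count = 0;
	int			k;

	for (k = n; k > 0; k--)		/* longest prefix first, as in the patch */
		sizes[count++] = k;
	sizes[count++] = 0;			/* the trailing empty grouping set */
	return count;
}

/*
 * CUBE membership test: element k of subset i is included iff bit k of i
 * is set, matching the "mask & i" test in the GROUPING_SET_CUBE loop.
 */
int
cube_subset_contains(unsigned subset, int elem)
{
	return (subset & (1U << elem)) != 0;
}
```

E.g. ROLLUP(a,b,c) yields sets of sizes 3, 2, 1, 0, and CUBE subset index 5 (binary 101) contains the first and third elements.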
+static int
+cmp_list_len_desc(const void *a, const void *b)
+{
+	int			la = list_length(*(List *const *) a);
+	int			lb = list_length(*(List *const *) b);
+
+	return (la > lb) ? -1 : (la == lb) ? 0 : 1;
+}
+
+/*
+ * Expand a groupingSets clause to a flat list of grouping sets.
+ * The returned list is sorted by length, longest sets first.
+ *
+ * This is mainly for the planner, but we use it here too to do
+ * some consistency checks.
+ */
+
+List *
+expand_grouping_sets(List *groupingSets, int limit)
+{
+	List	   *expanded_groups = NIL;
+	List       *result = NIL;
+	double		numsets = 1;
+	ListCell   *lc;
+
+	if (groupingSets == NIL)
+		return NIL;
+
+	foreach(lc, groupingSets)
+	{
+		List *current_result = NIL;
+		GroupingSet *gs = lfirst(lc);
+
+		current_result = expand_groupingset_node(gs);
+
+		Assert(current_result != NIL);
+
+		numsets *= list_length(current_result);
+
+		if (limit >= 0 && numsets > limit)
+			return NIL;
+
+		expanded_groups = lappend(expanded_groups, current_result);
+	}
+
+	/*
+	 * Do cartesian product between sublists of expanded_groups.
+	 * While at it, remove any duplicate elements from individual
+	 * grouping sets (we must NOT change the number of sets though)
+	 */
+
+	foreach(lc, (List *) linitial(expanded_groups))
+	{
+		result = lappend(result, list_union_int(NIL, (List *) lfirst(lc)));
+	}
+
+	for_each_cell(lc, lnext(list_head(expanded_groups)))
+	{
+		List	   *p = lfirst(lc);
+		List	   *new_result = NIL;
+		ListCell   *lc2;
+
+		foreach(lc2, result)
+		{
+			List	   *q = lfirst(lc2);
+			ListCell   *lc3;
+
+			foreach(lc3, p)
+			{
+				new_result = lappend(new_result,
+									 list_union_int(q, (List *) lfirst(lc3)));
+			}
+		}
+		result = new_result;
+	}
+
+	if (list_length(result) > 1)
+	{
+		int		result_len = list_length(result);
+		List  **buf = palloc(sizeof(List*) * result_len);
+		List  **ptr = buf;
+
+		foreach(lc, result)
+		{
+			*ptr++ = lfirst(lc);
+		}
+
+		qsort(buf, result_len, sizeof(List*), cmp_list_len_desc);
+
+		result = NIL;
+		ptr = buf;
+
+		while (result_len-- > 0)
+			result = lappend(result, *ptr++);
+
+		pfree(buf);
+	}
+
+	return result;
+}
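Because each top-level clause item expands independently and is then cross-multiplied, the final number of grouping sets is the product of the per-item counts, which is exactly what the running `numsets` check above bounds. A hypothetical sketch of that bookkeeping (not part of the patch):

```c
/*
 * Total grouping sets from the cartesian product of per-item expansions,
 * e.g. CUBE(a,b) [4 sets] x ROLLUP(c,d) [3 sets] x e [1 set] = 12.
 * Returns -1 once the limit is exceeded, as expand_grouping_sets does
 * by returning NIL.
 */
long
grouping_sets_total(const int per_item_counts[], int nitems, long limit)
{
	long		total = 1;
	int			i;

	for (i = 0; i < nitems; i++)
	{
		total *= per_item_counts[i];
		if (limit >= 0 && total > limit)
			return -1;			/* caller reports "too many grouping sets" */
	}
	return total;
}
```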
+
+/*
  * get_aggregate_argtypes
  *	Identify the specific datatypes passed to an aggregate call.
  *
diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c
index 4931dca..5d02579 100644
--- a/src/backend/parser/parse_clause.c
+++ b/src/backend/parser/parse_clause.c
@@ -36,6 +36,7 @@
 #include "utils/guc.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
+#include "miscadmin.h"
 
 
 /* Convenience macro for the most common makeNamespaceItem() case */
@@ -1663,40 +1664,163 @@ findTargetlistEntrySQL99(ParseState *pstate, Node *node, List **tlist,
 	return target_result;
 }
 
+
 /*
- * transformGroupClause -
- *	  transform a GROUP BY clause
+ * Flatten out parenthesized sublists in grouping lists, and some cases
+ * of nested grouping sets.
  *
- * GROUP BY items will be added to the targetlist (as resjunk columns)
- * if not already present, so the targetlist must be passed by reference.
+ * Inside a grouping set (ROLLUP, CUBE, or GROUPING SETS), we expect the
+ * content to be nested no more than 2 deep: i.e. ROLLUP((a,b),(c,d)) is
+ * ok, but ROLLUP((a,(b,c)),d) is flattened to ((a,b,c),d), which we then
+ * normalize to ((a,b,c),(d)).
  *
- * This is also used for window PARTITION BY clauses (which act almost the
- * same, but are always interpreted per SQL99 rules).
+ * CUBE or ROLLUP can be nested inside GROUPING SETS (but not the reverse),
+ * and we leave that alone if we find it. But if we see GROUPING SETS inside
+ * GROUPING SETS, we can flatten and normalize as follows:
+ *   GROUPING SETS (a, (b,c), GROUPING SETS ((c,d),(e)), (f,g))
+ * becomes
+ *   GROUPING SETS ((a), (b,c), (c,d), (e), (f,g))
+ *
+ * This is per the spec's syntax transformations, but these are the only such
+ * transformations we do in parse analysis, so that queries retain the
+ * originally specified grouping set syntax for CUBE and ROLLUP as much as
+ * possible when deparsed. (Full expansion of the result into a list of
+ * grouping sets is left to the planner.)
+ *
+ * When we're done, the resulting list should contain only these possible
+ * elements:
+ *   - an expression
+ *   - a CUBE or ROLLUP with a list of expressions nested 2 deep
+ *   - a GROUPING SET containing any of:
+ *      - expression lists
+ *      - empty grouping sets
+ *      - CUBE or ROLLUP nodes with lists nested 2 deep
+ * The return is a new list, but doesn't deep-copy the old nodes except for
+ * GroupingSet nodes.
+ *
+ * As a side effect, flag whether the list has any GroupingSet nodes.
  */
-List *
-transformGroupClause(ParseState *pstate, List *grouplist,
-					 List **targetlist, List *sortClause,
-					 ParseExprKind exprKind, bool useSQL99)
+
+static Node *
+flatten_grouping_sets(Node *expr, bool toplevel, bool *hasGroupingSets)
 {
-	List	   *result = NIL;
-	ListCell   *gl;
+	/* just in case of pathological input */
+	check_stack_depth();
 
-	foreach(gl, grouplist)
+	if (expr == (Node *) NIL)
+		return (Node *) NIL;
+
+	switch (expr->type)
 	{
-		Node	   *gexpr = (Node *) lfirst(gl);
-		TargetEntry *tle;
-		bool		found = false;
+		case T_RowExpr:
+			{
+				RowExpr *r = (RowExpr *) expr;
+				if (r->row_format == COERCE_IMPLICIT_CAST)
+					return flatten_grouping_sets((Node *) r->args,
+												 false, NULL);
+			}
+			break;
+		case T_GroupingSet:
+			{
+				GroupingSet *gset = (GroupingSet *) expr;
+				ListCell   *l2;
+				List	   *result_set = NIL;
 
-		if (useSQL99)
-			tle = findTargetlistEntrySQL99(pstate, gexpr,
-										   targetlist, exprKind);
-		else
-			tle = findTargetlistEntrySQL92(pstate, gexpr,
-										   targetlist, exprKind);
+				if (hasGroupingSets)
+					*hasGroupingSets = true;
 
-		/* Eliminate duplicates (GROUP BY x, x) */
-		if (targetIsInSortList(tle, InvalidOid, result))
-			continue;
+				/*
+				 * at the top level, we skip over all empty grouping sets; the
+				 * caller can supply the canonical GROUP BY () if nothing is left.
+				 */
+
+				if (toplevel && gset->kind == GROUPING_SET_EMPTY)
+					return (Node *) NIL;
+
+				foreach(l2, gset->content)
+				{
+					Node   *n2 = flatten_grouping_sets(lfirst(l2), false, NULL);
+
+					result_set = lappend(result_set, n2);
+				}
+
+				/*
+				 * At top level, keep the grouping set node; but if we're in a nested
+				 * grouping set, then we need to concat the flattened result into the
+				 * outer list if it's simply nested.
+				 */
+
+				if (toplevel || (gset->kind != GROUPING_SET_SETS))
+				{
+					return (Node *) makeGroupingSet(gset->kind, result_set, gset->location);
+				}
+				else
+					return (Node *) result_set;
+			}
+		case T_List:
+			{
+				List	   *result = NIL;
+				ListCell   *l;
+
+				foreach(l, (List *)expr)
+				{
+					Node   *n = flatten_grouping_sets(lfirst(l), toplevel, hasGroupingSets);
+					if (n != (Node *) NIL)
+					{
+						if (IsA(n, List))
+							result = list_concat(result, (List *) n);
+						else
+							result = lappend(result, n);
+					}
+				}
+
+				return (Node *) result;
+			}
+		default:
+			break;
+	}
+
+	return expr;
+}
+
+/*
+ * Transform a single expression within a GROUP BY clause or grouping set.
+ *
+ * The expression is added to the targetlist if not already present, and to
+ * the flatresult list (which will become the groupClause) if not already
+ * present there.  The sortClause is consulted for operator and sort order
+ * hints.  Returns the ressortgroupref of the expression, or 0 if it was a
+ * duplicate within seen_local.
+ */
+static Index
+transformGroupClauseExpr(List **flatresult, Bitmapset *seen_local,
+						 ParseState *pstate, Node *gexpr,
+						 List **targetlist, List *sortClause,
+						 ParseExprKind exprKind, bool useSQL99, bool toplevel)
+{
+	TargetEntry *tle;
+	bool		found = false;
+
+	if (useSQL99)
+		tle = findTargetlistEntrySQL99(pstate, gexpr,
+									   targetlist, exprKind);
+	else
+		tle = findTargetlistEntrySQL92(pstate, gexpr,
+									   targetlist, exprKind);
+
+	if (tle->ressortgroupref > 0)
+	{
+		ListCell   *sl;
+
+		/*
+		 * Eliminate duplicates (GROUP BY x, x) but only at local level.
+		 * (Duplicates in grouping sets can affect the number of returned
+		 * rows, so can't be dropped indiscriminately.)
+		 *
+		 * Since we don't care about anything except the sortgroupref,
+		 * we can use a bitmapset rather than scanning lists.
+		 */
+		if (bms_is_member(tle->ressortgroupref, seen_local))
+			return 0;
+
+		/*
+		 * If we're already in the flat clause list, we don't need
+		 * to consider adding ourselves again.
+		 */
+		found = targetIsInSortList(tle, InvalidOid, *flatresult);
+		if (found)
+			return tle->ressortgroupref;
 
 		/*
 		 * If the GROUP BY tlist entry also appears in ORDER BY, copy operator
@@ -1708,35 +1832,263 @@ transformGroupClause(ParseState *pstate, List *grouplist,
 		 * sort step, and it allows the user to choose the equality semantics
 		 * used by GROUP BY, should she be working with a datatype that has
 		 * more than one equality operator.
+		 *
+		 * If we're in a grouping set, though, we force our requested ordering
+		 * to be NULLS LAST, because if we have any hope of using a sorted agg
+		 * for the job, we're going to be tacking on generated NULL values
+		 * after the corresponding groups. If the user demands nulls first,
+		 * another sort step is going to be inevitable, but that's the
+		 * planner's problem.
 		 */
-		if (tle->ressortgroupref > 0)
+
+		foreach(sl, sortClause)
 		{
-			ListCell   *sl;
+			SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
 
-			foreach(sl, sortClause)
+			if (sc->tleSortGroupRef == tle->ressortgroupref)
 			{
-				SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
+				SortGroupClause *grpc = copyObject(sc);
+
+				if (!toplevel)
+					grpc->nulls_first = false;
+				*flatresult = lappend(*flatresult, grpc);
+				found = true;
+				break;
+			}
+		}
+	}
 
-				if (sc->tleSortGroupRef == tle->ressortgroupref)
-				{
-					result = lappend(result, copyObject(sc));
-					found = true;
+	/*
+	 * If no match in ORDER BY, just add it to the result using default
+	 * sort/group semantics.
+	 */
+	if (!found)
+		*flatresult = addTargetToGroupList(pstate, tle,
+										   *flatresult, *targetlist,
+										   exprLocation(gexpr),
+										   true);
+
+	/*
+	 * _something_ must have assigned us a sortgroupref by now...
+	 */
+
+	return tle->ressortgroupref;
+}
+
+
+/*
+ * Transform a list of expressions within a GROUP BY clause or grouping set.
+ *
+ * The list belongs to a single clause within which duplicates can be safely
+ * eliminated.  Returns an integer list of ressortgroupref values.
+ */
+static List *
+transformGroupClauseList(List **flatresult,
+						 ParseState *pstate, List *list,
+						 List **targetlist, List *sortClause,
+						 ParseExprKind exprKind, bool useSQL99, bool toplevel)
+{
+	Bitmapset  *seen_local = NULL;
+	List	   *result = NIL;
+	ListCell   *gl;
+
+	foreach(gl, list)
+	{
+		Node        *gexpr = (Node *) lfirst(gl);
+
+		Index ref = transformGroupClauseExpr(flatresult,
+											 seen_local,
+											 pstate,
+											 gexpr,
+											 targetlist,
+											 sortClause,
+											 exprKind,
+											 useSQL99,
+											 toplevel);
+		if (ref > 0)
+		{
+			seen_local = bms_add_member(seen_local, ref);
+			result = lappend_int(result, ref);
+		}
+	}
+
+	return result;
+}
+
+/*
+ * Transform a grouping set and (recursively) its content.
+ *
+ * The grouping set might be a GROUPING SETS node with other grouping sets
+ * inside it, but SETS within SETS have already been flattened out before
+ * reaching here.  Returns the transformed node, whose SIMPLE nodes now
+ * carry lists of ressortgrouprefs.
+ */
+static Node *
+transformGroupingSet(List **flatresult,
+					 ParseState *pstate, GroupingSet *gset,
+					 List **targetlist, List *sortClause,
+					 ParseExprKind exprKind, bool useSQL99, bool toplevel)
+{
+	ListCell   *gl;
+	List	   *content = NIL;
+
+	Assert(toplevel || gset->kind != GROUPING_SET_SETS);
+
+	foreach(gl, gset->content)
+	{
+		Node   *n = lfirst(gl);
+
+		if (IsA(n, List))
+		{
+			List *l = transformGroupClauseList(flatresult,
+											   pstate, (List *) n,
+											   targetlist, sortClause,
+											   exprKind, useSQL99, false);
+
+			content = lappend(content, makeGroupingSet(GROUPING_SET_SIMPLE,
+													   l,
+													   exprLocation(n)));
+		}
+		else if (IsA(n, GroupingSet))
+		{
+			GroupingSet *gset2 = (GroupingSet *) n;
+
+			content = lappend(content, transformGroupingSet(flatresult,
+															pstate, gset2,
+															targetlist, sortClause,
+															exprKind, useSQL99, false));
+		}
+		else
+		{
+			Index ref = transformGroupClauseExpr(flatresult,
+												 NULL,
+												 pstate,
+												 n,
+												 targetlist,
+												 sortClause,
+												 exprKind,
+												 useSQL99,
+												 false);
+
+			content = lappend(content, makeGroupingSet(GROUPING_SET_SIMPLE,
+													   list_make1_int(ref),
+													   exprLocation(n)));
+		}
+	}
+
+	/* Arbitrarily cap the size of CUBE, which has exponential growth */
+	if (gset->kind == GROUPING_SET_CUBE)
+	{
+		if (list_length(content) > 12)
+			ereport(ERROR,
+					(errcode(ERRCODE_TOO_MANY_COLUMNS),
+					 errmsg("CUBE is limited to 12 elements"),
+					 parser_errposition(pstate, gset->location)));
+	}
+
+	return (Node *) makeGroupingSet(gset->kind, content, gset->location);
+}
+
+
+/*
+ * transformGroupClause -
+ *	  transform a GROUP BY clause
+ *
+ * GROUP BY items will be added to the targetlist (as resjunk columns)
+ * if not already present, so the targetlist must be passed by reference.
+ *
+ * This is also used for window PARTITION BY clauses (which act almost the
+ * same, but are always interpreted per SQL99 rules).
+ *
+ * Grouping sets make this a lot more complex than it was. Our goal here is
+ * twofold: we make a flat list of SortGroupClause nodes referencing each
+ * distinct expression used for grouping, with those expressions added to the
+ * targetlist if needed. At the same time, we build the groupingSets tree,
+ * which stores only ressortgrouprefs as integer lists inside GroupingSet nodes
+ * (possibly nested, but limited in depth: a GROUPING_SET_SETS node can contain
+ * nested SIMPLE, CUBE or ROLLUP nodes, but not further SETS nodes, since we
+ * flatten those out; CUBE and ROLLUP can contain only SIMPLE nodes).
+ *
+ * We skip much of the hard work if there are no grouping sets.
+ *
+ * One subtlety is that the groupClause list can end up empty while the
+ * groupingSets list is not; this happens if there are only empty grouping
+ * sets, or an explicit GROUP BY (). This has the same effect as specifying
+ * aggregates or a HAVING clause with no GROUP BY; the output is one row per
+ * grouping set even if the input is empty.
+ */
+List *
+transformGroupClause(ParseState *pstate, List *grouplist, List **groupingSets,
+					 List **targetlist, List *sortClause,
+					 ParseExprKind exprKind, bool useSQL99)
+{
+	List	   *result = NIL;
+	List	   *flat_grouplist;
+	List	   *gsets = NIL;
+	ListCell   *gl;
+	bool        hasGroupingSets = false;
+	Bitmapset  *seen_local = NULL;
+
+	/*
+	 * Recursively flatten implicit RowExprs. (Technically this is only
+	 * needed for GROUP BY, per the syntax rules for grouping sets, but
+	 * we do it anyway.)
+	 */
+	flat_grouplist = (List *) flatten_grouping_sets((Node *) grouplist,
+													true,
+													&hasGroupingSets);
+
+	/*
+	 * If the list is now empty, but hasGroupingSets is true, it's because
+	 * we elided redundant empty grouping sets. Restore a single empty
+	 * grouping set to leave a canonical form: GROUP BY ()
+	 */
+
+	if (flat_grouplist == NIL && hasGroupingSets)
+	{
+		flat_grouplist = list_make1(makeGroupingSet(GROUPING_SET_EMPTY,
+													NIL,
+													exprLocation((Node *) grouplist)));
+	}
+
+	foreach(gl, flat_grouplist)
+	{
+		Node        *gexpr = (Node *) lfirst(gl);
+
+		if (IsA(gexpr, GroupingSet))
+		{
+			GroupingSet *gset = (GroupingSet *) gexpr;
+
+			switch (gset->kind)
+			{
+				case GROUPING_SET_EMPTY:
+					gsets = lappend(gsets, gset);
+					break;
+				case GROUPING_SET_SIMPLE:
+					/* can't happen */
+					Assert(false);
+					break;
+				case GROUPING_SET_SETS:
+				case GROUPING_SET_CUBE:
+				case GROUPING_SET_ROLLUP:
+					gsets = lappend(gsets,
+									transformGroupingSet(&result,
+														 pstate, gset,
+														 targetlist, sortClause,
+														 exprKind, useSQL99, true));
 					break;
-				}
 			}
 		}
+		else
+		{
+			Index ref = transformGroupClauseExpr(&result, seen_local,
+												 pstate, gexpr,
+												 targetlist, sortClause,
+												 exprKind, useSQL99, true);
 
-		/*
-		 * If no match in ORDER BY, just add it to the result using default
-		 * sort/group semantics.
-		 */
-		if (!found)
-			result = addTargetToGroupList(pstate, tle,
-										  result, *targetlist,
-										  exprLocation(gexpr),
-										  true);
+			if (ref > 0)
+			{
+				seen_local = bms_add_member(seen_local, ref);
+				if (hasGroupingSets)
+					gsets = lappend(gsets,
+									makeGroupingSet(GROUPING_SET_SIMPLE,
+													list_make1_int(ref),
+													exprLocation(gexpr)));
+			}
+		}
 	}
 
+	/* parser should prevent this */
+	Assert(gsets == NIL || groupingSets != NULL);
+
+	if (groupingSets)
+		*groupingSets = gsets;
+
 	return result;
 }
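One subtlety worth illustrating: the `seen_local` bitmap drops duplicates only within a single grouping list, because dropping a ref repeated across different grouping sets would change the number of result rows. A hypothetical sketch with a plain bool array standing in for PostgreSQL's Bitmapset:

```c
#include <stdbool.h>

/*
 * Local duplicate elimination: within one grouping list, a repeated
 * sortgroupref is dropped (GROUP BY x, x == GROUP BY x). Since seen_local
 * is reset per list, the same ref may still appear in several distinct
 * grouping sets. MAX_REF and dedup_one_list are illustrative only.
 */
#define MAX_REF 64

int
dedup_one_list(const int refs[], int n, int out[])
{
	bool		seen[MAX_REF] = {false};
	int			count = 0;
	int			i;

	for (i = 0; i < n; i++)
	{
		if (seen[refs[i]])
			continue;			/* local duplicate: skip */
		seen[refs[i]] = true;
		out[count++] = refs[i];
	}
	return count;
}
```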
 
@@ -1841,6 +2193,7 @@ transformWindowDefinitions(ParseState *pstate,
 										  true /* force SQL99 rules */ );
 		partitionClause = transformGroupClause(pstate,
 											   windef->partitionClause,
+											   NULL,
 											   targetlist,
 											   orderClause,
 											   EXPR_KIND_WINDOW_PARTITION,
diff --git a/src/backend/parser/parse_expr.c b/src/backend/parser/parse_expr.c
index 4a8aaf6..0bb8856 100644
--- a/src/backend/parser/parse_expr.c
+++ b/src/backend/parser/parse_expr.c
@@ -32,6 +32,7 @@
 #include "parser/parse_relation.h"
 #include "parser/parse_target.h"
 #include "parser/parse_type.h"
+#include "parser/parse_agg.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
 #include "utils/xml.h"
@@ -166,6 +167,10 @@ transformExprRecurse(ParseState *pstate, Node *expr)
 										InvalidOid, InvalidOid, -1);
 			break;
 
+		case T_Grouping:
+			result = transformGroupingExpr(pstate, (Grouping *) expr);
+			break;
+
 		case T_TypeCast:
 			{
 				TypeCast   *tc = (TypeCast *) expr;
diff --git a/src/backend/parser/parse_target.c b/src/backend/parser/parse_target.c
index 328e0c6..1e48346 100644
--- a/src/backend/parser/parse_target.c
+++ b/src/backend/parser/parse_target.c
@@ -1628,6 +1628,9 @@ FigureColnameInternal(Node *node, char **name)
 				}
 			}
 			break;
+		case T_Grouping:
+			*name = "grouping";
+			return 2;
 		case T_A_Indirection:
 			{
 				A_Indirection *ind = (A_Indirection *) node;
diff --git a/src/backend/rewrite/rewriteHandler.c b/src/backend/rewrite/rewriteHandler.c
index cb65c05..0c93e1b 100644
--- a/src/backend/rewrite/rewriteHandler.c
+++ b/src/backend/rewrite/rewriteHandler.c
@@ -2063,7 +2063,7 @@ view_query_is_auto_updatable(Query *viewquery, bool check_cols)
 	if (viewquery->distinctClause != NIL)
 		return gettext_noop("Views containing DISTINCT are not automatically updatable.");
 
-	if (viewquery->groupClause != NIL)
+	if (viewquery->groupClause != NIL || viewquery->groupingSets)
 		return gettext_noop("Views containing GROUP BY are not automatically updatable.");
 
 	if (viewquery->havingQual != NULL)
diff --git a/src/backend/rewrite/rewriteManip.c b/src/backend/rewrite/rewriteManip.c
index fb20314..02099a4 100644
--- a/src/backend/rewrite/rewriteManip.c
+++ b/src/backend/rewrite/rewriteManip.c
@@ -92,6 +92,11 @@ contain_aggs_of_level_walker(Node *node,
 			return true;		/* abort the tree traversal and return true */
 		/* else fall through to examine argument */
 	}
+	if (IsA(node, Grouping))
+	{
+		if (((Grouping *) node)->agglevelsup == context->sublevels_up)
+			return true;
+	}
 	if (IsA(node, Query))
 	{
 		/* Recurse into subselects */
@@ -157,6 +162,15 @@ locate_agg_of_level_walker(Node *node,
 		}
 		/* else fall through to examine argument */
 	}
+	if (IsA(node, Grouping))
+	{
+		if (((Grouping *) node)->agglevelsup == context->sublevels_up &&
+			((Grouping *) node)->location >= 0)
+		{
+			context->agg_location = ((Grouping *) node)->location;
+			return true;		/* abort the tree traversal and return true */
+		}
+	}
 	if (IsA(node, Query))
 	{
 		/* Recurse into subselects */
@@ -705,6 +719,14 @@ IncrementVarSublevelsUp_walker(Node *node,
 			agg->agglevelsup += context->delta_sublevels_up;
 		/* fall through to recurse into argument */
 	}
+	if (IsA(node, Grouping))
+	{
+		Grouping	   *grp = (Grouping *) node;
+
+		if (grp->agglevelsup >= context->min_sublevels_up)
+			grp->agglevelsup += context->delta_sublevels_up;
+		/* fall through to recurse into argument */
+	}
 	if (IsA(node, PlaceHolderVar))
 	{
 		PlaceHolderVar *phv = (PlaceHolderVar *) node;
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 7237e5d..5344736 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -360,9 +360,11 @@ static void get_target_list(List *targetList, deparse_context *context,
 static void get_setop_query(Node *setOp, Query *query,
 				deparse_context *context,
 				TupleDesc resultDesc);
-static Node *get_rule_sortgroupclause(SortGroupClause *srt, List *tlist,
+static Node *get_rule_sortgroupclause(Index ref, List *tlist,
 						 bool force_colno,
 						 deparse_context *context);
+static void get_rule_groupingset(GroupingSet *gset, List *targetlist,
+								 bool omit_parens, deparse_context *context);
 static void get_rule_orderby(List *orderList, List *targetList,
 				 bool force_colno, deparse_context *context);
 static void get_rule_windowclause(Query *query, deparse_context *context);
@@ -4535,7 +4537,7 @@ get_basic_select_query(Query *query, deparse_context *context,
 				SortGroupClause *srt = (SortGroupClause *) lfirst(l);
 
 				appendStringInfoString(buf, sep);
-				get_rule_sortgroupclause(srt, query->targetList,
+				get_rule_sortgroupclause(srt->tleSortGroupRef, query->targetList,
 										 false, context);
 				sep = ", ";
 			}
@@ -4560,19 +4562,35 @@ get_basic_select_query(Query *query, deparse_context *context,
 	}
 
 	/* Add the GROUP BY clause if given */
-	if (query->groupClause != NULL)
+	if (query->groupClause != NULL || query->groupingSets != NULL)
 	{
 		appendContextKeyword(context, " GROUP BY ",
 							 -PRETTYINDENT_STD, PRETTYINDENT_STD, 1);
-		sep = "";
-		foreach(l, query->groupClause)
+
+		if (query->groupingSets == NIL)
 		{
-			SortGroupClause *grp = (SortGroupClause *) lfirst(l);
+			sep = "";
+			foreach(l, query->groupClause)
+			{
+				SortGroupClause *grp = (SortGroupClause *) lfirst(l);
 
-			appendStringInfoString(buf, sep);
-			get_rule_sortgroupclause(grp, query->targetList,
-									 false, context);
-			sep = ", ";
+				appendStringInfoString(buf, sep);
+				get_rule_sortgroupclause(grp->tleSortGroupRef, query->targetList,
+										 false, context);
+				sep = ", ";
+			}
+		}
+		else
+		{
+			sep = "";
+			foreach(l, query->groupingSets)
+			{
+				GroupingSet *grp = lfirst(l);
+
+				appendStringInfoString(buf, sep);
+				get_rule_groupingset(grp, query->targetList, true, context);
+				sep = ", ";
+			}
 		}
 	}
 
@@ -4640,7 +4658,7 @@ get_target_list(List *targetList, deparse_context *context,
 		 * different from a whole-row Var).  We need to call get_variable
 		 * directly so that we can tell it to do the right thing.
 		 */
-		if (tle->expr && IsA(tle->expr, Var))
+		if (tle->expr && (IsA(tle->expr, Var) || IsA(tle->expr, GroupedVar)))
 		{
 			attname = get_variable((Var *) tle->expr, 0, true, context);
 		}
@@ -4859,14 +4877,14 @@ get_setop_query(Node *setOp, Query *query, deparse_context *context,
  * Also returns the expression tree, so caller need not find it again.
  */
 static Node *
-get_rule_sortgroupclause(SortGroupClause *srt, List *tlist, bool force_colno,
+get_rule_sortgroupclause(Index ref, List *tlist, bool force_colno,
 						 deparse_context *context)
 {
 	StringInfo	buf = context->buf;
 	TargetEntry *tle;
 	Node	   *expr;
 
-	tle = get_sortgroupclause_tle(srt, tlist);
+	tle = get_sortgroupref_tle(ref, tlist);
 	expr = (Node *) tle->expr;
 
 	/*
@@ -4891,6 +4909,66 @@ get_rule_sortgroupclause(SortGroupClause *srt, List *tlist, bool force_colno,
 }
 
 /*
+ * Display a GroupingSet
+ */
+static void
+get_rule_groupingset(GroupingSet *gset, List *targetlist,
+					 bool omit_parens, deparse_context *context)
+{
+	ListCell   *l;
+	StringInfo	buf = context->buf;
+	bool		omit_child_parens = true;
+	char	   *sep = "";
+
+	switch (gset->kind)
+	{
+		case GROUPING_SET_EMPTY:
+			appendStringInfoString(buf, "()");
+			return;
+
+		case GROUPING_SET_SIMPLE:
+			{
+				if (!omit_parens || list_length(gset->content) != 1)
+					appendStringInfoString(buf, "(");
+
+				foreach(l, gset->content)
+				{
+					Index ref = lfirst_int(l);
+
+					appendStringInfoString(buf, sep);
+					get_rule_sortgroupclause(ref, targetlist,
+											 false, context);
+					sep = ", ";
+				}
+
+				if (!omit_parens || list_length(gset->content) != 1)
+					appendStringInfoString(buf, ")");
+			}
+			return;
+
+		case GROUPING_SET_ROLLUP:
+			appendStringInfoString(buf, "ROLLUP(");
+			break;
+		case GROUPING_SET_CUBE:
+			appendStringInfoString(buf, "CUBE(");
+			break;
+		case GROUPING_SET_SETS:
+			appendStringInfoString(buf, "GROUPING SETS (");
+			omit_child_parens = false;
+			break;
+	}
+
+	foreach(l, gset->content)
+	{
+		appendStringInfoString(buf, sep);
+		get_rule_groupingset(lfirst(l), targetlist, omit_child_parens, context);
+		sep = ", ";
+	}
+
+	appendStringInfoString(buf, ")");
+}
+
+/*
  * Display an ORDER BY list.
  */
 static void
@@ -4910,7 +4988,7 @@ get_rule_orderby(List *orderList, List *targetList,
 		TypeCacheEntry *typentry;
 
 		appendStringInfoString(buf, sep);
-		sortexpr = get_rule_sortgroupclause(srt, targetList,
+		sortexpr = get_rule_sortgroupclause(srt->tleSortGroupRef, targetList,
 											force_colno, context);
 		sortcoltype = exprType(sortexpr);
 		/* See whether operator is default < or > for datatype */
@@ -5010,7 +5088,7 @@ get_rule_windowspec(WindowClause *wc, List *targetList,
 			SortGroupClause *grp = (SortGroupClause *) lfirst(l);
 
 			appendStringInfoString(buf, sep);
-			get_rule_sortgroupclause(grp, targetList,
+			get_rule_sortgroupclause(grp->tleSortGroupRef, targetList,
 									 false, context);
 			sep = ", ";
 		}
@@ -5559,10 +5637,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		pop_child_plan(dpns, &save_dpns);
@@ -5584,10 +5662,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		pop_child_plan(dpns, &save_dpns);
@@ -5607,10 +5685,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		return NULL;
@@ -5650,10 +5728,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		pop_child_plan(dpns, &save_dpns);
@@ -6684,6 +6762,10 @@ get_rule_expr(Node *node, deparse_context *context,
 			(void) get_variable((Var *) node, 0, false, context);
 			break;
 
+		case T_GroupedVar:
+			(void) get_variable((Var *) node, 0, false, context);
+			break;
+
 		case T_Const:
 			get_const_expr((Const *) node, context, 0);
 			break;
@@ -7580,6 +7662,16 @@ get_rule_expr(Node *node, deparse_context *context,
 			}
 			break;
 
+		case T_Grouping:
+			{
+				Grouping *gexpr = (Grouping *) node;
+
+				appendStringInfoString(buf, "GROUPING(");
+				get_rule_expr((Node *) gexpr->args, context, true);
+				appendStringInfoChar(buf, ')');
+			}
+			break;
+
 		case T_List:
 			{
 				char	   *sep;
diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index e932ccf..c769e83 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -3158,6 +3158,8 @@ add_unique_group_var(PlannerInfo *root, List *varinfos,
  *	groupExprs - list of expressions being grouped by
  *	input_rows - number of rows estimated to arrive at the group/unique
  *		filter step
+ *	pgset - NULL, or a List** pointing to a grouping set to filter the
+ *		groupExprs against
  *
  * Given the lack of any cross-correlation statistics in the system, it's
  * impossible to do anything really trustworthy with GROUP BY conditions
@@ -3205,11 +3207,13 @@ add_unique_group_var(PlannerInfo *root, List *varinfos,
  * but we don't have the info to do better).
  */
 double
-estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
+estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows,
+					List **pgset)
 {
 	List	   *varinfos = NIL;
 	double		numdistinct;
 	ListCell   *l;
+	int			i;
 
 	/*
 	 * We don't ever want to return an estimate of zero groups, as that tends
@@ -3224,7 +3228,7 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 	 * for normal cases with GROUP BY or DISTINCT, but it is possible for
 	 * corner cases with set operations.)
 	 */
-	if (groupExprs == NIL)
+	if (groupExprs == NIL || (pgset && list_length(*pgset) < 1))
 		return 1.0;
 
 	/*
@@ -3236,6 +3240,7 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 	 */
 	numdistinct = 1.0;
 
+	i = 0;
 	foreach(l, groupExprs)
 	{
 		Node	   *groupexpr = (Node *) lfirst(l);
@@ -3243,6 +3248,10 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 		List	   *varshere;
 		ListCell   *l2;
 
+		/* is expression in this grouping set? */
+		if (pgset && !list_member_int(*pgset, i++))
+			continue;
+
 		/* Short-circuit for expressions returning boolean */
 		if (exprType(groupexpr) == BOOLOID)
 		{
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index b271f21..ee1fe74 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -130,6 +130,8 @@ typedef struct ExprContext
 	Datum	   *ecxt_aggvalues; /* precomputed values for aggs/windowfuncs */
 	bool	   *ecxt_aggnulls;	/* null flags for aggs/windowfuncs */
 
+	Bitmapset  *grouped_cols;   /* which columns exist in current grouping set */
+
 	/* Value to substitute for CaseTestExpr nodes in expression */
 	Datum		caseValue_datum;
 	bool		caseValue_isNull;
@@ -911,6 +913,16 @@ typedef struct MinMaxExprState
 } MinMaxExprState;
 
 /* ----------------
+ *		GroupingState node
+ * ----------------
+ */
+typedef struct GroupingState
+{
+	ExprState	xprstate;
+	List        *clauses;
+} GroupingState;
+
+/* ----------------
  *		XmlExprState node
  * ----------------
  */
@@ -1701,19 +1713,26 @@ typedef struct GroupState
 /* these structs are private in nodeAgg.c: */
 typedef struct AggStatePerAggData *AggStatePerAgg;
 typedef struct AggStatePerGroupData *AggStatePerGroup;
+typedef struct AggStatePerGroupingSetData *AggStatePerGroupingSet;
 
 typedef struct AggState
 {
 	ScanState	ss;				/* its first field is NodeTag */
 	List	   *aggs;			/* all Aggref nodes in targetlist & quals */
 	int			numaggs;		/* length of list (could be zero!) */
+	int			numsets;		/* number of grouping sets (or 0) */
 	FmgrInfo   *eqfunctions;	/* per-grouping-field equality fns */
 	FmgrInfo   *hashfunctions;	/* per-grouping-field hash fns */
 	AggStatePerAgg peragg;		/* per-Aggref information */
-	MemoryContext aggcontext;	/* memory context for long-lived data */
+	ExprContext **aggcontext;	/* econtexts for long-lived data */
 	ExprContext *tmpcontext;	/* econtext for input expressions */
 	AggStatePerAgg curperagg;	/* identifies currently active aggregate */
+	bool        input_done;     /* indicates end of input */
 	bool		agg_done;		/* indicates completion of Agg scan */
+	int			projected_set;	/* the last projected grouping set */
+	int			current_set;	/* the current grouping set being evaluated */
+	Bitmapset **grouped_cols;   /* column groupings for rollup */
+	int        *gset_lengths;	/* lengths of grouping sets */
 	/* these fields are used in AGG_PLAIN and AGG_SORTED modes: */
 	AggStatePerGroup pergroup;	/* per-Aggref-per-group working state */
 	HeapTuple	grp_firstTuple; /* copy of first tuple of current group */
diff --git a/src/include/nodes/makefuncs.h b/src/include/nodes/makefuncs.h
index e108b85..bd3b2a5 100644
--- a/src/include/nodes/makefuncs.h
+++ b/src/include/nodes/makefuncs.h
@@ -81,4 +81,6 @@ extern DefElem *makeDefElem(char *name, Node *arg);
 extern DefElem *makeDefElemExtended(char *nameSpace, char *name, Node *arg,
 					DefElemAction defaction);
 
+extern GroupingSet *makeGroupingSet(GroupingSetKind kind, List *content, int location);
+
 #endif   /* MAKEFUNC_H */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index a031b88..7998c95 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -115,6 +115,7 @@ typedef enum NodeTag
 	T_SortState,
 	T_GroupState,
 	T_AggState,
+	T_GroupingState,
 	T_WindowAggState,
 	T_UniqueState,
 	T_HashState,
@@ -171,6 +172,9 @@ typedef enum NodeTag
 	T_JoinExpr,
 	T_FromExpr,
 	T_IntoClause,
+	T_GroupedVar,
+	T_Grouping,
+	T_GroupingSet,
 
 	/*
 	 * TAGS FOR EXPRESSION STATE NODES (execnodes.h)
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index d2c0b29..26ed5f4 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -134,6 +134,8 @@ typedef struct Query
 
 	List	   *groupClause;	/* a list of SortGroupClause's */
 
+	List	   *groupingSets;	/* a list of grouping sets if present */
+
 	Node	   *havingQual;		/* qualifications applied to groups */
 
 	List	   *windowClause;	/* a list of WindowClause's */
diff --git a/src/include/nodes/pg_list.h b/src/include/nodes/pg_list.h
index c545115..45eacda 100644
--- a/src/include/nodes/pg_list.h
+++ b/src/include/nodes/pg_list.h
@@ -229,8 +229,9 @@ extern List *list_union_int(const List *list1, const List *list2);
 extern List *list_union_oid(const List *list1, const List *list2);
 
 extern List *list_intersection(const List *list1, const List *list2);
+extern List *list_intersection_int(const List *list1, const List *list2);
 
-/* currently, there's no need for list_intersection_int etc */
+/* currently, there's no need for list_intersection_ptr etc */
 
 extern List *list_difference(const List *list1, const List *list2);
 extern List *list_difference_ptr(const List *list1, const List *list2);
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index 3b9c683..077ae9f 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -631,6 +631,7 @@ typedef struct Agg
 	AttrNumber *grpColIdx;		/* their indexes in the target list */
 	Oid		   *grpOperators;	/* equality operators to compare with */
 	long		numGroups;		/* estimated number of groups in input */
+	List	   *groupingSets;	/* grouping sets to use */
 } Agg;
 
 /* ----------------
diff --git a/src/include/nodes/primnodes.h b/src/include/nodes/primnodes.h
index 6d9f3d9..4c03e40 100644
--- a/src/include/nodes/primnodes.h
+++ b/src/include/nodes/primnodes.h
@@ -159,6 +159,28 @@ typedef struct Var
 	int			location;		/* token location, or -1 if unknown */
 } Var;
 
+/* GroupedVar - expression node representing a grouping set variable.
+ * It is structurally identical to Var.  It is the logical representation
+ * of a grouping set column, and is also used when projecting rows during
+ * execution of a query that has grouping sets.
+ */
+
+typedef Var GroupedVar;
+
+/*
+ * Grouping
+ */
+typedef struct Grouping
+{
+	Expr		xpr;
+	List	   *args;			/* arguments, not evaluated but kept for
+								 * benefit of EXPLAIN etc. */
+	List	   *refs;			/* ressortgrouprefs of arguments */
+	List	   *cols;			/* actual column positions set by planner */
+	int			location;		/* token location */
+	Index		agglevelsup;	/* same as Aggref.agglevelsup */
+} Grouping;
+
 /*
  * Const
  */
@@ -1147,6 +1169,32 @@ typedef struct CurrentOfExpr
 	int			cursor_param;	/* refcursor parameter number, or 0 */
 } CurrentOfExpr;
 
+/*
+ * Node representing substructure in GROUPING SETS
+ *
+ * This is not actually executable, but it's used in the raw parsetree
+ * representation of GROUP BY and in the groupingSets field of Query, to
+ * preserve the original structure of ROLLUP/CUBE clauses for readability,
+ * rather than reducing everything to a flat list of grouping sets.
+ */
+
+typedef enum
+{
+	GROUPING_SET_EMPTY,
+	GROUPING_SET_SIMPLE,
+	GROUPING_SET_ROLLUP,
+	GROUPING_SET_CUBE,
+	GROUPING_SET_SETS
+} GroupingSetKind;
+
+typedef struct GroupingSet
+{
+	Expr		xpr;
+	GroupingSetKind kind;
+	List	   *content;
+	int			location;
+} GroupingSet;
+
 /*--------------------
  * TargetEntry -
  *	   a target entry (used in query target lists)
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index dacbe9c..33b3beb 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -256,6 +256,11 @@ typedef struct PlannerInfo
 
 	/* optional private data for join_search_hook, e.g., GEQO */
 	void	   *join_search_private;
+
+	/* for GroupedVar fixup in setrefs */
+	AttrNumber *groupColIdx;
+	/* for Grouping fixup in setrefs */
+	AttrNumber *grouping_map;
 } PlannerInfo;
 
 
diff --git a/src/include/optimizer/planmain.h b/src/include/optimizer/planmain.h
index 4504250..64f3aa3 100644
--- a/src/include/optimizer/planmain.h
+++ b/src/include/optimizer/planmain.h
@@ -58,6 +58,7 @@ extern Sort *make_sort_from_groupcols(PlannerInfo *root, List *groupcls,
 extern Agg *make_agg(PlannerInfo *root, List *tlist, List *qual,
 		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
 		 int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators,
+		 List *groupingSets,
 		 long numGroups,
 		 Plan *lefttree);
 extern WindowAgg *make_windowagg(PlannerInfo *root, List *tlist,
diff --git a/src/include/optimizer/tlist.h b/src/include/optimizer/tlist.h
index 1ebb635..c8b1c93 100644
--- a/src/include/optimizer/tlist.h
+++ b/src/include/optimizer/tlist.h
@@ -43,6 +43,9 @@ extern Node *get_sortgroupclause_expr(SortGroupClause *sgClause,
 extern List *get_sortgrouplist_exprs(List *sgClauses,
 						List *targetList);
 
+extern SortGroupClause *get_sortgroupref_clause(Index sortref,
+					 List *clauses);
+
 extern Oid *extract_grouping_ops(List *groupClause);
 extern AttrNumber *extract_grouping_cols(List *groupClause, List *tlist);
 extern bool grouping_is_sortable(List *groupClause);
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index 17888ad..e38b6bc 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -98,6 +98,7 @@ PG_KEYWORD("cost", COST, UNRESERVED_KEYWORD)
 PG_KEYWORD("create", CREATE, RESERVED_KEYWORD)
 PG_KEYWORD("cross", CROSS, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("csv", CSV, UNRESERVED_KEYWORD)
+PG_KEYWORD("cube", CUBE, COL_NAME_KEYWORD)
 PG_KEYWORD("current", CURRENT_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("current_catalog", CURRENT_CATALOG, RESERVED_KEYWORD)
 PG_KEYWORD("current_date", CURRENT_DATE, RESERVED_KEYWORD)
@@ -173,6 +174,7 @@ PG_KEYWORD("grant", GRANT, RESERVED_KEYWORD)
 PG_KEYWORD("granted", GRANTED, UNRESERVED_KEYWORD)
 PG_KEYWORD("greatest", GREATEST, COL_NAME_KEYWORD)
 PG_KEYWORD("group", GROUP_P, RESERVED_KEYWORD)
+PG_KEYWORD("grouping", GROUPING, COL_NAME_KEYWORD)
 PG_KEYWORD("handler", HANDLER, UNRESERVED_KEYWORD)
 PG_KEYWORD("having", HAVING, RESERVED_KEYWORD)
 PG_KEYWORD("header", HEADER_P, UNRESERVED_KEYWORD)
@@ -322,6 +324,7 @@ PG_KEYWORD("revoke", REVOKE, UNRESERVED_KEYWORD)
 PG_KEYWORD("right", RIGHT, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("role", ROLE, UNRESERVED_KEYWORD)
 PG_KEYWORD("rollback", ROLLBACK, UNRESERVED_KEYWORD)
+PG_KEYWORD("rollup", ROLLUP, COL_NAME_KEYWORD)
 PG_KEYWORD("row", ROW, COL_NAME_KEYWORD)
 PG_KEYWORD("rows", ROWS, UNRESERVED_KEYWORD)
 PG_KEYWORD("rule", RULE, UNRESERVED_KEYWORD)
@@ -340,6 +343,7 @@ PG_KEYWORD("session", SESSION, UNRESERVED_KEYWORD)
 PG_KEYWORD("session_user", SESSION_USER, RESERVED_KEYWORD)
 PG_KEYWORD("set", SET, UNRESERVED_KEYWORD)
 PG_KEYWORD("setof", SETOF, COL_NAME_KEYWORD)
+PG_KEYWORD("sets", SETS, UNRESERVED_KEYWORD)
 PG_KEYWORD("share", SHARE, UNRESERVED_KEYWORD)
 PG_KEYWORD("show", SHOW, UNRESERVED_KEYWORD)
 PG_KEYWORD("similar", SIMILAR, TYPE_FUNC_NAME_KEYWORD)
diff --git a/src/include/parser/parse_agg.h b/src/include/parser/parse_agg.h
index 3f55ec7..f0607fb 100644
--- a/src/include/parser/parse_agg.h
+++ b/src/include/parser/parse_agg.h
@@ -18,11 +18,16 @@
 extern void transformAggregateCall(ParseState *pstate, Aggref *agg,
 					   List *args, List *aggorder,
 					   bool agg_distinct);
+
+extern Node *transformGroupingExpr(ParseState *pstate, Grouping *g);
+
 extern void transformWindowFuncCall(ParseState *pstate, WindowFunc *wfunc,
 						WindowDef *windef);
 
 extern void parseCheckAggregates(ParseState *pstate, Query *qry);
 
+extern List *expand_grouping_sets(List *groupingSets, int limit);
+
 extern int	get_aggregate_argtypes(Aggref *aggref, Oid *inputTypes);
 
 extern Oid resolve_aggregate_transtype(Oid aggfuncid,
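[Not part of the patch: the expand_grouping_sets() declaration above flattens ROLLUP/CUBE/SETS substructure into a plain list of grouping sets. A rough illustration of the intended expansion semantics, in Python rather than the patch's C, with made-up function names:]

```python
from itertools import combinations

def expand_rollup(cols):
    # ROLLUP(a,b,c) -> (a,b,c), (a,b), (a), ()
    return [tuple(cols[:i]) for i in range(len(cols), -1, -1)]

def expand_cube(cols):
    # CUBE(a,b) -> every subset: (a,b), (a), (b), ()
    out = []
    for n in range(len(cols), -1, -1):
        out.extend(combinations(cols, n))
    return out

assert expand_rollup(['a', 'b']) == [('a', 'b'), ('a',), ()]
assert expand_cube(['a', 'b']) == [('a', 'b'), ('a',), ('b',), ()]
```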
diff --git a/src/include/parser/parse_clause.h b/src/include/parser/parse_clause.h
index e9e7cdc..58d88f0 100644
--- a/src/include/parser/parse_clause.h
+++ b/src/include/parser/parse_clause.h
@@ -27,6 +27,7 @@ extern Node *transformWhereClause(ParseState *pstate, Node *clause,
 extern Node *transformLimitClause(ParseState *pstate, Node *clause,
 					 ParseExprKind exprKind, const char *constructName);
 extern List *transformGroupClause(ParseState *pstate, List *grouplist,
+								  List **groupingSets,
 					 List **targetlist, List *sortClause,
 					 ParseExprKind exprKind, bool useSQL99);
 extern List *transformSortClause(ParseState *pstate, List *orderlist,
diff --git a/src/include/utils/selfuncs.h b/src/include/utils/selfuncs.h
index 0f662ec..9d9c9b3 100644
--- a/src/include/utils/selfuncs.h
+++ b/src/include/utils/selfuncs.h
@@ -185,7 +185,7 @@ extern void mergejoinscansel(PlannerInfo *root, Node *clause,
 				 Selectivity *rightstart, Selectivity *rightend);
 
 extern double estimate_num_groups(PlannerInfo *root, List *groupExprs,
-					double input_rows);
+								  double input_rows, List **pgset);
 
 extern Selectivity estimate_hash_bucketsize(PlannerInfo *root, Node *hashkey,
 						 double nbuckets);
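[Not part of the patch: as a reading aid for the grouping column in the groupingsets.out expected output below, GROUPING(...) produces a bit mask with one bit per argument, most significant first, where a set bit means that argument is not grouped in the current set. A hypothetical Python sketch of that rule:]

```python
def grouping_value(args, grouped_cols):
    # One bit per argument, leftmost argument in the most significant
    # position; 1 means the argument is absent from the current set.
    val = 0
    for a in args:
        val = (val << 1) | (0 if a in grouped_cols else 1)
    return val

# For GROUP BY ROLLUP (a,b): detail rows show 0, the per-a subtotal
# rows show 1, and the grand total row shows 3.
assert grouping_value(['a', 'b'], {'a', 'b'}) == 0
assert grouping_value(['a', 'b'], {'a'}) == 1
assert grouping_value(['a', 'b'], set()) == 3
```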
diff --git a/src/test/regress/expected/groupingsets.out b/src/test/regress/expected/groupingsets.out
new file mode 100644
index 0000000..2d121c7
--- /dev/null
+++ b/src/test/regress/expected/groupingsets.out
@@ -0,0 +1,361 @@
+--
+-- grouping sets
+--
+-- test data sources
+create temp view gstest1(a,b,v)
+  as values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),
+            (2,3,15),
+            (3,3,16),(3,4,17),
+            (4,1,18),(4,1,19);
+create temp table gstest2 (a integer, b integer, c integer, d integer,
+                           e integer, f integer, g integer, h integer);
+copy gstest2 from stdin;
+create temp table gstest_empty (a integer, b integer, v integer);
+create function gstest_data(v integer, out a integer, out b integer)
+  returns setof record
+  as $f$
+    begin
+      return query select v, i from generate_series(1,3) i;
+    end;
+  $f$ language plpgsql;
+-- basic functionality
+-- simple rollup with multiple plain aggregates, with and without ordering
+-- (and with ordering differing from grouping)
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b);
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 | 1 |        0 |  21 |     2 |  11
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 1 |   |        1 |  60 |     5 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 2 |   |        1 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+ 3 |   |        1 |  33 |     2 |  17
+ 4 | 1 |        0 |  37 |     2 |  19
+ 4 |   |        1 |  37 |     2 |  19
+   |   |        3 | 145 |    10 |  19
+(12 rows)
+
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by a,b;
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 | 1 |        0 |  21 |     2 |  11
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 1 |   |        1 |  60 |     5 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 2 |   |        1 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+ 3 |   |        1 |  33 |     2 |  17
+ 4 | 1 |        0 |  37 |     2 |  19
+ 4 |   |        1 |  37 |     2 |  19
+   |   |        3 | 145 |    10 |  19
+(12 rows)
+
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by b desc, a;
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 |   |        1 |  60 |     5 |  14
+ 2 |   |        1 |  15 |     1 |  15
+ 3 |   |        1 |  33 |     2 |  17
+ 4 |   |        1 |  37 |     2 |  19
+   |   |        3 | 145 |    10 |  19
+ 3 | 4 |        0 |  17 |     1 |  17
+ 1 | 3 |        0 |  14 |     1 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 1 |        0 |  21 |     2 |  11
+ 4 | 1 |        0 |  37 |     2 |  19
+(12 rows)
+
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by coalesce(a,0)+coalesce(b,0);
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+   |   |        3 | 145 |    10 |  19
+ 1 |   |        1 |  60 |     5 |  14
+ 1 | 1 |        0 |  21 |     2 |  11
+ 2 |   |        1 |  15 |     1 |  15
+ 3 |   |        1 |  33 |     2 |  17
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 4 |   |        1 |  37 |     2 |  19
+ 4 | 1 |        0 |  37 |     2 |  19
+ 2 | 3 |        0 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+(12 rows)
+
+-- various types of ordered aggs
+select a, b, grouping(a,b),
+       array_agg(v order by v),
+       string_agg(v::text, ':' order by v desc),
+       percentile_disc(0.5) within group (order by v),
+       rank(1,2,12) within group (order by a,b,v)
+  from gstest1 group by rollup (a,b) order by a,b;
+ a | b | grouping |            array_agg            |          string_agg           | percentile_disc | rank 
+---+---+----------+---------------------------------+-------------------------------+-----------------+------
+ 1 | 1 |        0 | {10,11}                         | 11:10                         |              10 |    3
+ 1 | 2 |        0 | {12,13}                         | 13:12                         |              12 |    1
+ 1 | 3 |        0 | {14}                            | 14                            |              14 |    1
+ 1 |   |        1 | {10,11,12,13,14}                | 14:13:12:11:10                |              12 |    3
+ 2 | 3 |        0 | {15}                            | 15                            |              15 |    1
+ 2 |   |        1 | {15}                            | 15                            |              15 |    1
+ 3 | 3 |        0 | {16}                            | 16                            |              16 |    1
+ 3 | 4 |        0 | {17}                            | 17                            |              17 |    1
+ 3 |   |        1 | {16,17}                         | 17:16                         |              16 |    1
+ 4 | 1 |        0 | {18,19}                         | 19:18                         |              18 |    1
+ 4 |   |        1 | {18,19}                         | 19:18                         |              18 |    1
+   |   |        3 | {10,11,12,13,14,15,16,17,18,19} | 19:18:17:16:15:14:13:12:11:10 |              14 |    3
+(12 rows)
+
+-- nesting with window functions
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by rollup (a,b) order by rsum, a, b;
+ a | b | sum | rsum 
+---+---+-----+------
+ 1 | 1 |   8 |    8
+ 1 | 2 |   2 |   10
+ 1 |   |  10 |   20
+ 2 | 2 |   2 |   22
+ 2 |   |   2 |   24
+   |   |  12 |   36
+(6 rows)
+
+-- empty input: first is 0 rows, second 1, third 3 etc.
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),a);
+ a | b | sum | count 
+---+---+-----+-------
+(0 rows)
+
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),());
+ a | b | sum | count 
+---+---+-----+-------
+   |   |     |     0
+(1 row)
+
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),(),(),());
+ a | b | sum | count 
+---+---+-----+-------
+   |   |     |     0
+   |   |     |     0
+   |   |     |     0
+(3 rows)
+
+select sum(v), count(*) from gstest_empty group by grouping sets ((),(),());
+ sum | count 
+-----+-------
+     |     0
+     |     0
+     |     0
+(3 rows)
+
+-- empty input with joins tests some important code paths
+select t1.a, t2.b, sum(t1.v), count(*) from gstest_empty t1, gstest_empty t2
+ group by grouping sets ((t1.a,t2.b),());
+ a | b | sum | count 
+---+---+-----+-------
+   |   |     |     0
+(1 row)
+
+-- simple joins, var resolution, GROUPING on join vars
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1, gstest2 t2
+ group by grouping sets ((t1.a, t2.b), ());
+ a | b | grouping | sum  | max 
+---+---+----------+------+-----
+ 1 | 1 |        0 |  420 |   1
+ 1 | 2 |        0 |  120 |   2
+ 2 | 1 |        0 |  105 |   1
+ 2 | 2 |        0 |   30 |   2
+ 3 | 1 |        0 |  231 |   1
+ 3 | 2 |        0 |   66 |   2
+ 4 | 1 |        0 |  259 |   1
+ 4 | 2 |        0 |   74 |   2
+   |   |        3 | 1305 |   2
+(9 rows)
+
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1 join gstest2 t2 on (t1.a=t2.a)
+ group by grouping sets ((t1.a, t2.b), ());
+ a | b | grouping | sum | max 
+---+---+----------+-----+-----
+ 1 | 1 |        0 | 420 |   1
+ 1 | 2 |        0 |  60 |   1
+ 2 | 2 |        0 |  15 |   2
+   |   |        3 | 495 |   2
+(4 rows)
+
+select a, b, grouping(a, b), sum(t1.v), max(t2.c)
+  from gstest1 t1 join gstest2 t2 using (a,b)
+ group by grouping sets ((a, b), ());
+ a | b | grouping | sum | max 
+---+---+----------+-----+-----
+ 1 | 1 |        0 | 147 |   2
+ 1 | 2 |        0 |  25 |   2
+   |   |        3 | 172 |   2
+(3 rows)
+
+-- simple rescan tests
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by rollup (a,b);
+ a | b | sum 
+---+---+-----
+ 1 | 1 |   1
+ 1 | 2 |   1
+ 1 | 3 |   1
+ 1 |   |   3
+ 2 | 1 |   2
+ 2 | 2 |   2
+ 2 | 3 |   2
+ 2 |   |   6
+   |   |   9
+(9 rows)
+
+select *
+  from (values (1),(2)) v(x),
+       lateral (select a, b, sum(v.x) from gstest_data(v.x) group by rollup (a,b)) s;
+ERROR:  aggregate functions are not allowed in FROM clause of their own query level
+LINE 3:        lateral (select a, b, sum(v.x) from gstest_data(v.x) ...
+                                     ^
+-- min max optimisation should still work with GROUP BY ()
+explain (costs off)
+  select min(unique1) from tenk1 GROUP BY ();
+                         QUERY PLAN                         
+------------------------------------------------------------
+ Result
+   InitPlan 1 (returns $0)
+     ->  Limit
+           ->  Index Only Scan using tenk1_unique1 on tenk1
+                 Index Cond: (unique1 IS NOT NULL)
+(5 rows)
+
+-- Views with GROUPING SET queries
+CREATE VIEW gstest_view AS select a, b, grouping(a,b), sum(c), count(*), max(c)
+  from gstest2 group by rollup ((a,b,c),(c,d));
+NOTICE:  view "gstest_view" will be a temporary view
+select pg_get_viewdef('gstest_view'::regclass, true);
+                                pg_get_viewdef                                 
+-------------------------------------------------------------------------------
+  SELECT gstest2.a,                                                           +
+     gstest2.b,                                                               +
+     GROUPING(gstest2.a, gstest2.b) AS "grouping",                            +
+     sum(gstest2.c) AS sum,                                                   +
+     count(*) AS count,                                                       +
+     max(gstest2.c) AS max                                                    +
+    FROM gstest2                                                              +
+   GROUP BY ROLLUP((gstest2.a, gstest2.b, gstest2.c), (gstest2.c, gstest2.d));
+(1 row)
+
+-- Nested queries with 3 or more levels of nesting
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+ grouping 
+----------
+        0
+        0
+        0
+(3 rows)
+
+select(select (select grouping(e,f) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+ grouping 
+----------
+        0
+        1
+        3
+(3 rows)
+
+select(select (select grouping(c) from (values (1)) v2(c) GROUP BY c) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+ grouping 
+----------
+        0
+        0
+        0
+(3 rows)
+
+-- Combinations of operations
+select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
+ a | b 
+---+---
+ 1 | 2
+ 2 | 3
+(2 rows)
+
+-- Agg level check. This query should error out.
+select (select grouping(a,b) from gstest2) from gstest2 group by a,b;
+ERROR:  Arguments to GROUPING must be grouping expressions of the associated query level
+LINE 1: select (select grouping(a,b) from gstest2) from gstest2 grou...
+                                ^
+--Nested queries
+select a, b, sum(c), count(*) from gstest2 group by grouping sets (rollup(a,b),a);
+ a | b | sum | count 
+---+---+-----+-------
+ 1 | 1 |   8 |     7
+ 1 | 2 |   2 |     1
+ 1 |   |  10 |     8
+ 1 |   |  10 |     8
+ 2 | 2 |   2 |     1
+ 2 |   |   2 |     1
+ 2 |   |   2 |     1
+   |   |  12 |     9
+(8 rows)
+
+-- HAVING queries
+select ten, sum(distinct four) from onek a
+group by grouping sets((ten,four),(ten))
+having exists (select 1 from onek b where sum(distinct a.four) = b.four);
+ ten | sum 
+-----+-----
+   0 |   0
+   0 |   2
+   0 |   2
+   1 |   1
+   1 |   3
+   2 |   0
+   2 |   2
+   2 |   2
+   3 |   1
+   3 |   3
+   4 |   0
+   4 |   2
+   4 |   2
+   5 |   1
+   5 |   3
+   6 |   0
+   6 |   2
+   6 |   2
+   7 |   1
+   7 |   3
+   8 |   0
+   8 |   2
+   8 |   2
+   9 |   1
+   9 |   3
+(25 rows)
+
+-- FILTER queries
+select ten, sum(distinct four) filter (where four::text ~ '123') from onek a
+group by rollup(ten);
+ ten | sum 
+-----+-----
+   0 |    
+   1 |    
+   2 |    
+   3 |    
+   4 |    
+   5 |    
+   6 |    
+   7 |    
+   8 |    
+   9 |    
+     |    
+(11 rows)
+
+-- end
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index c0416f4..b15119e 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -83,7 +83,7 @@ test: select_into select_distinct select_distinct_on select_implicit select_havi
 # ----------
 # Another group of parallel tests
 # ----------
-test: privileges security_label collate matview lock replica_identity
+test: privileges security_label collate matview lock replica_identity groupingsets
 
 # ----------
 # Another group of parallel tests
diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule
index 16a1905..5e64468 100644
--- a/src/test/regress/serial_schedule
+++ b/src/test/regress/serial_schedule
@@ -84,6 +84,7 @@ test: union
 test: case
 test: join
 test: aggregates
+test: groupingsets
 test: transactions
 ignore: random
 test: random
diff --git a/src/test/regress/sql/groupingsets.sql b/src/test/regress/sql/groupingsets.sql
new file mode 100644
index 0000000..bc571ff
--- /dev/null
+++ b/src/test/regress/sql/groupingsets.sql
@@ -0,0 +1,128 @@
+--
+-- grouping sets
+--
+
+-- test data sources
+
+create temp view gstest1(a,b,v)
+  as values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),
+            (2,3,15),
+            (3,3,16),(3,4,17),
+            (4,1,18),(4,1,19);
+
+create temp table gstest2 (a integer, b integer, c integer, d integer,
+                           e integer, f integer, g integer, h integer);
+copy gstest2 from stdin;
+1	1	1	1	1	1	1	1
+1	1	1	1	1	1	1	2
+1	1	1	1	1	1	2	2
+1	1	1	1	1	2	2	2
+1	1	1	1	2	2	2	2
+1	1	1	2	2	2	2	2
+1	1	2	2	2	2	2	2
+1	2	2	2	2	2	2	2
+2	2	2	2	2	2	2	2
+\.
+
+create temp table gstest_empty (a integer, b integer, v integer);
+
+create function gstest_data(v integer, out a integer, out b integer)
+  returns setof record
+  as $f$
+    begin
+      return query select v, i from generate_series(1,3) i;
+    end;
+  $f$ language plpgsql;
+
+-- basic functionality
+
+-- simple rollup with multiple plain aggregates, with and without ordering
+-- (and with ordering differing from grouping)
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b);
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by a,b;
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by b desc, a;
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by coalesce(a,0)+coalesce(b,0);
+
+-- various types of ordered aggs
+select a, b, grouping(a,b),
+       array_agg(v order by v),
+       string_agg(v::text, ':' order by v desc),
+       percentile_disc(0.5) within group (order by v),
+       rank(1,2,12) within group (order by a,b,v)
+  from gstest1 group by rollup (a,b) order by a,b;
+
+-- nesting with window functions
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by rollup (a,b) order by rsum, a, b;
+
+-- empty input: first is 0 rows, second 1, third 3 etc.
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),a);
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),());
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),(),(),());
+select sum(v), count(*) from gstest_empty group by grouping sets ((),(),());
+
+-- empty input with joins tests some important code paths
+select t1.a, t2.b, sum(t1.v), count(*) from gstest_empty t1, gstest_empty t2
+ group by grouping sets ((t1.a,t2.b),());
+
+-- simple joins, var resolution, GROUPING on join vars
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1, gstest2 t2
+ group by grouping sets ((t1.a, t2.b), ());
+
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1 join gstest2 t2 on (t1.a=t2.a)
+ group by grouping sets ((t1.a, t2.b), ());
+
+select a, b, grouping(a, b), sum(t1.v), max(t2.c)
+  from gstest1 t1 join gstest2 t2 using (a,b)
+ group by grouping sets ((a, b), ());
+
+-- simple rescan tests
+
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by rollup (a,b);
+
+select *
+  from (values (1),(2)) v(x),
+       lateral (select a, b, sum(v.x) from gstest_data(v.x) group by rollup (a,b)) s;
+
+-- min max optimisation should still work with GROUP BY ()
+explain (costs off)
+  select min(unique1) from tenk1 GROUP BY ();
+
+-- Views with GROUPING SET queries
+CREATE VIEW gstest_view AS select a, b, grouping(a,b), sum(c), count(*), max(c)
+  from gstest2 group by rollup ((a,b,c),(c,d));
+
+select pg_get_viewdef('gstest_view'::regclass, true);
+
+-- Nested queries with 3 or more levels of nesting
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+select(select (select grouping(e,f) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+select(select (select grouping(c) from (values (1)) v2(c) GROUP BY c) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+
+-- Combinations of operations
+select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
+
+-- Agg level check. This query should error out.
+select (select grouping(a,b) from gstest2) from gstest2 group by a,b;
+
+--Nested queries
+select a, b, sum(c), count(*) from gstest2 group by grouping sets (rollup(a,b),a);
+
+-- HAVING queries
+select ten, sum(distinct four) from onek a
+group by grouping sets((ten,four),(ten))
+having exists (select 1 from onek b where sum(distinct a.four) = b.four);
+
+-- FILTER queries
+select ten, sum(distinct four) filter (where four::text ~ '123') from onek a
+group by rollup(ten);
+
+-- end
[Attachment: gsp2.patch (text/x-patch)]
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 479ae7e..aff1a92 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -960,6 +960,10 @@ ExplainNode(PlanState *planstate, List *ancestors,
 					pname = "GroupAggregate";
 					strategy = "Sorted";
 					break;
+				case AGG_CHAINED:
+					pname = "ChainAggregate";
+					strategy = "Chained";
+					break;
 				case AGG_HASHED:
 					pname = "HashAggregate";
 					strategy = "Hashed";
diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c
index ad8a3d0..0ac2e70 100644
--- a/src/backend/executor/execUtils.c
+++ b/src/backend/executor/execUtils.c
@@ -151,6 +151,8 @@ CreateExecutorState(void)
 	estate->es_epqTupleSet = NULL;
 	estate->es_epqScanDone = NULL;
 
+	estate->agg_chain_head = NULL;
+
 	/*
 	 * Return the executor state structure
 	 */
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index beecd36..48567b9 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -326,6 +326,7 @@ static void build_hash_table(AggState *aggstate);
 static AggHashEntry lookup_hash_entry(AggState *aggstate,
 				  TupleTableSlot *inputslot);
 static TupleTableSlot *agg_retrieve_direct(AggState *aggstate);
+static TupleTableSlot *agg_retrieve_chained(AggState *aggstate);
 static void agg_fill_hash_table(AggState *aggstate);
 static TupleTableSlot *agg_retrieve_hash_table(AggState *aggstate);
 static Datum GetAggInitVal(Datum textInitVal, Oid transtype);
@@ -1119,6 +1120,8 @@ lookup_hash_entry(AggState *aggstate, TupleTableSlot *inputslot)
 TupleTableSlot *
 ExecAgg(AggState *node)
 {
+	TupleTableSlot *result;
+
 	/*
 	 * Check to see if we're still projecting out tuples from a previous agg
 	 * tuple (because there is a function-returning-set in the projection
@@ -1126,7 +1129,6 @@ ExecAgg(AggState *node)
 	 */
 	if (node->ss.ps.ps_TupFromTlist)
 	{
-		TupleTableSlot *result;
 		ExprDoneCond isDone;
 
 		result = ExecProject(node->ss.ps.ps_ProjInfo, &isDone);
@@ -1137,22 +1139,45 @@ ExecAgg(AggState *node)
 	}
 
 	/*
-	 * Exit if nothing left to do.  (We must do the ps_TupFromTlist check
-	 * first, because in some cases agg_done gets set before we emit the final
-	 * aggregate tuple, and we have to finish running SRFs for it.)
+	 * We must do the ps_TupFromTlist check first, because in some cases
+	 * agg_done gets set before we emit the final aggregate tuple, and we
+	 * have to finish running SRFs for it.
 	 */
-	if (node->agg_done)
-		return NULL;
 
-	/* Dispatch based on strategy */
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+	if (!node->agg_done)
 	{
-		if (!node->table_filled)
-			agg_fill_hash_table(node);
-		return agg_retrieve_hash_table(node);
+		/* Dispatch based on strategy */
+		switch (((Agg *) node->ss.ps.plan)->aggstrategy)
+		{
+			case AGG_HASHED:
+				if (!node->table_filled)
+					agg_fill_hash_table(node);
+				result = agg_retrieve_hash_table(node);
+				break;
+			case AGG_CHAINED:
+				result = agg_retrieve_chained(node);
+				break;
+			default:
+				result = agg_retrieve_direct(node);
+				break;
+		}
+
+		if (!TupIsNull(result))
+			return result;
 	}
-	else
-		return agg_retrieve_direct(node);
+
+	if (!node->chain_done)
+	{
+		Assert(node->chain_tuplestore);
+		result = node->ss.ps.ps_ResultTupleSlot;
+		ExecClearTuple(result);
+		if (tuplestore_gettupleslot(node->chain_tuplestore,
+									true, false, result))
+			return result;
+		node->chain_done = true;
+	}
+
+	return NULL;
 }
 
 /*
@@ -1473,6 +1498,161 @@ agg_retrieve_direct(AggState *aggstate)
 	return NULL;
 }
 
+
+/*
+ * ExecAgg for chained case (pullthrough mode)
+ */
+static TupleTableSlot *
+agg_retrieve_chained(AggState *aggstate)
+{
+	Agg		   *node = (Agg *) aggstate->ss.ps.plan;
+	ExprContext *econtext = aggstate->ss.ps.ps_ExprContext;
+	ExprContext *tmpcontext = aggstate->tmpcontext;
+	Datum	   *aggvalues = econtext->ecxt_aggvalues;
+	bool	   *aggnulls = econtext->ecxt_aggnulls;
+	AggStatePerAgg peragg = aggstate->peragg;
+	AggStatePerGroup pergroup = aggstate->pergroup;
+	TupleTableSlot *outerslot;
+	TupleTableSlot *firstSlot = aggstate->ss.ss_ScanTupleSlot;
+	int			   aggno;
+	int            numGroupingSets = Max(aggstate->numsets, 1);
+	int            currentSet = 0;
+
+	/*
+	 * The invariants here are:
+	 *
+	 *  - when called, we've already projected every result that might
+	 *    have been generated by previous rows, and if this is not the
+	 *    first row, then firstSlot (the scan slot) holds the
+	 *    representative input row;
+	 *
+	 *  - we must pull the outer plan exactly once and return that tuple.
+	 *    If the outer plan ends, we project whatever needs projecting.
+	 */
+
+	outerslot = ExecProcNode(outerPlanState(aggstate));
+
+	/*
+	 * If the input was entirely empty (no prior row and no new row), done.
+	 */
+
+	if (TupIsNull(firstSlot) && TupIsNull(outerslot))
+	{
+		aggstate->agg_done = true;
+		return outerslot;
+	}
+
+	/*
+	 * See if we need to project anything. (We don't need to worry about
+	 * grouping sets of size 0; the planner doesn't give us those.)
+	 */
+
+	econtext->ecxt_outertuple = firstSlot;
+
+	while (!TupIsNull(firstSlot)
+		   && (TupIsNull(outerslot)
+			   || !execTuplesMatch(firstSlot,
+								   outerslot,
+								   aggstate->gset_lengths[currentSet],
+								   node->grpColIdx,
+								   aggstate->eqfunctions,
+								   tmpcontext->ecxt_per_tuple_memory)))
+	{
+		aggstate->current_set = aggstate->projected_set = currentSet;
+
+		for (aggno = 0; aggno < aggstate->numaggs; aggno++)
+		{
+			AggStatePerAgg peraggstate = &peragg[aggno];
+			AggStatePerGroup pergroupstate;
+
+			pergroupstate = &pergroup[aggno + (currentSet * (aggstate->numaggs))];
+
+			if (peraggstate->numSortCols > 0)
+			{
+				if (peraggstate->numInputs == 1)
+					process_ordered_aggregate_single(aggstate,
+													 peraggstate,
+													 pergroupstate);
+				else
+					process_ordered_aggregate_multi(aggstate,
+													peraggstate,
+													pergroupstate);
+			}
+
+			finalize_aggregate(aggstate, peraggstate, pergroupstate,
+							   &aggvalues[aggno], &aggnulls[aggno]);
+		}
+
+		econtext->grouped_cols = aggstate->grouped_cols[currentSet];
+
+		/*
+		 * Check the qual (HAVING clause); if the group does not match, ignore
+		 * it.
+		 */
+		if (ExecQual(aggstate->ss.ps.qual, econtext, false))
+		{
+			/*
+			 * Form a projection tuple using the aggregate results
+			 * and the representative input tuple.
+			 */
+			TupleTableSlot *result;
+			ExprDoneCond isDone;
+
+			do
+			{
+				result = ExecProject(aggstate->ss.ps.ps_ProjInfo, &isDone);
+
+				if (isDone != ExprEndResult)
+				{
+					tuplestore_puttupleslot(aggstate->chain_tuplestore,
+											result);
+				}
+			}
+			while (isDone == ExprMultipleResult);
+		}
+		else
+			InstrCountFiltered1(aggstate, 1);
+
+		ReScanExprContext(tmpcontext);
+		ReScanExprContext(econtext);
+		ReScanExprContext(aggstate->aggcontext[currentSet]);
+		MemoryContextDeleteChildren(aggstate->aggcontext[currentSet]->ecxt_per_tuple_memory);
+		if (++currentSet >= numGroupingSets)
+			break;
+	}
+
+	if (TupIsNull(outerslot))
+	{
+		aggstate->agg_done = true;
+		return NULL;
+	}
+
+	/*
+	 * If this is the first tuple, store it and initialize everything.
+	 * Otherwise re-init any aggregates we projected above.
+	 */
+
+	if (TupIsNull(firstSlot))
+	{
+		ExecCopySlot(firstSlot, outerslot);
+		initialize_aggregates(aggstate, peragg, pergroup, numGroupingSets);
+	}
+	else if (currentSet > 0)
+	{
+		ExecCopySlot(firstSlot, outerslot);
+		initialize_aggregates(aggstate, peragg, pergroup, currentSet);
+	}
+
+	tmpcontext->ecxt_outertuple = outerslot;
+
+	advance_aggregates(aggstate, pergroup);
+
+	/* Reset per-input-tuple context after each tuple */
+	ResetExprContext(tmpcontext);
+
+	return outerslot;
+}
+
 /*
  * ExecAgg for hashed case: phase 1, read input and build hash table
  */
@@ -1640,6 +1820,7 @@ AggState *
 ExecInitAgg(Agg *node, EState *estate, int eflags)
 {
 	AggState   *aggstate;
+	AggState   *save_chain_head = NULL;
 	AggStatePerAgg peragg;
 	Plan	   *outerPlan;
 	ExprContext *econtext;
@@ -1672,9 +1853,14 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	aggstate->curperagg = NULL;
 	aggstate->agg_done = false;
 	aggstate->input_done = false;
+	aggstate->chain_done = true;
 	aggstate->pergroup = NULL;
 	aggstate->grp_firstTuple = NULL;
 	aggstate->hashtable = NULL;
+	aggstate->chain_depth = 0;
+	aggstate->chain_rescan = 0;
+	aggstate->chain_head = NULL;
+	aggstate->chain_tuplestore = NULL;
 
 	if (node->groupingSets)
 	{
@@ -1743,12 +1930,40 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	 * that is true, we don't need to worry about evaluating the aggs in any
 	 * particular order.
 	 */
-	aggstate->ss.ps.targetlist = (List *)
-		ExecInitExpr((Expr *) node->plan.targetlist,
-					 (PlanState *) aggstate);
-	aggstate->ss.ps.qual = (List *)
-		ExecInitExpr((Expr *) node->plan.qual,
-					 (PlanState *) aggstate);
+	if (node->aggstrategy == AGG_CHAINED)
+	{
+		Assert(estate->agg_chain_head);
+
+		aggstate->chain_head = estate->agg_chain_head;
+		aggstate->chain_head->chain_depth++;
+
+		/*
+		 * Snarf the real targetlist and qual from the chain head node
+		 */
+		aggstate->ss.ps.targetlist = (List *)
+			ExecInitExpr((Expr *) aggstate->chain_head->ss.ps.plan->targetlist,
+						 (PlanState *) aggstate);
+		aggstate->ss.ps.qual = (List *)
+			ExecInitExpr((Expr *) aggstate->chain_head->ss.ps.plan->qual,
+						 (PlanState *) aggstate);
+	}
+	else
+	{
+		aggstate->ss.ps.targetlist = (List *)
+			ExecInitExpr((Expr *) node->plan.targetlist,
+						 (PlanState *) aggstate);
+		aggstate->ss.ps.qual = (List *)
+			ExecInitExpr((Expr *) node->plan.qual,
+						 (PlanState *) aggstate);
+	}
+
+	if (node->chain_head)
+	{
+		save_chain_head = estate->agg_chain_head;
+		estate->agg_chain_head = aggstate;
+		aggstate->chain_tuplestore = tuplestore_begin_heap(false, false, work_mem);
+		aggstate->chain_done = false;
+	}
 
 	/*
 	 * initialize child nodes
@@ -1761,6 +1976,11 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	outerPlan = outerPlan(node);
 	outerPlanState(aggstate) = ExecInitNode(outerPlan, estate, eflags);
 
+	if (node->chain_head)
+	{
+		estate->agg_chain_head = save_chain_head;
+	}
+
 	/*
 	 * initialize source tuple type.
 	 */
@@ -1769,8 +1989,35 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	/*
 	 * Initialize result tuple type and projection info.
 	 */
-	ExecAssignResultTypeFromTL(&aggstate->ss.ps);
-	ExecAssignProjectionInfo(&aggstate->ss.ps, NULL);
+	if (node->aggstrategy == AGG_CHAINED)
+	{
+		PlanState  *head_ps = &aggstate->chain_head->ss.ps;
+		bool		hasoid;
+
+		/*
+		 * We must calculate this the same way that the chain head does,
+		 * regardless of intermediate nodes, for consistency
+		 */
+		if (!ExecContextForcesOids(head_ps, &hasoid))
+			hasoid = false;
+
+		ExecAssignResultType(&aggstate->ss.ps, ExecGetScanType(&aggstate->ss));
+		ExecSetSlotDescriptor(aggstate->hashslot,
+							  ExecTypeFromTL(head_ps->plan->targetlist, hasoid));
+		aggstate->ss.ps.ps_ProjInfo =
+			ExecBuildProjectionInfo(aggstate->ss.ps.targetlist,
+									aggstate->ss.ps.ps_ExprContext,
+									aggstate->hashslot,
+									NULL);
+
+		aggstate->chain_tuplestore = aggstate->chain_head->chain_tuplestore;
+		Assert(aggstate->chain_tuplestore);
+	}
+	else
+	{
+		ExecAssignResultTypeFromTL(&aggstate->ss.ps);
+		ExecAssignProjectionInfo(&aggstate->ss.ps, NULL);
+	}
 
 	aggstate->ss.ps.ps_TupFromTlist = false;
 
@@ -2225,6 +2472,9 @@ ExecEndAgg(AggState *node)
 	for (i = 0; i < numGroupingSets; ++i)
 		ReScanExprContext(node->aggcontext[i]);
 
+	if (node->chain_tuplestore && !node->chain_head)
+		tuplestore_end(node->chain_tuplestore);
+
 	/*
 	 * We don't actually free any ExprContexts here (see comment in
 	 * ExecFreeExprContext), just unlinking the output one from the plan node
@@ -2339,11 +2589,49 @@
 	}
 
 	/*
-	 * if chgParam of subnode is not null then plan will be re-scanned by
-	 * first ExecProcNode.
+	 * If we're in a chain, let the chain head know whether we rescanned.
+	 * (The count is meaningless if this happens as a result of chgParam,
+	 * but the chain head only consults it when rescanning explicitly
+	 * while chgParam is empty.)
+	 */
+
+	if (aggnode->aggstrategy == AGG_CHAINED)
+		node->chain_head->chain_rescan++;
+
+	/*
+	 * If we're a chain head, we reset the tuplestore if parameters changed,
+	 * and let subplans repopulate it.
+	 *
+	 * If we're a chain head and the subplan parameters did NOT change, then
+	 * whether we need to reset the tuplestore depends on whether anything
+	 * (specifically the Sort nodes) protects the child ChainAggs from rescan.
+	 * Since this is hard to know in advance, we have the ChainAggs signal us
+	 * as to whether the reset is needed. (We assume that either all children
+	 * in the chain are protected or none are, since all Sort nodes in the
+	 * chain should have the same flags. If this changes, it would probably be
+	 * necessary to add a signalling param to force child rescan.)
 	 */
-	if (node->ss.ps.lefttree->chgParam == NULL)
+	if (aggnode->chain_head)
+	{
+		if (node->ss.ps.lefttree->chgParam)
+			tuplestore_clear(node->chain_tuplestore);
+		else
+		{
+			node->chain_rescan = 0;
+
+			ExecReScan(node->ss.ps.lefttree);
+
+			if (node->chain_rescan == node->chain_depth)
+				tuplestore_clear(node->chain_tuplestore);
+			else if (node->chain_rescan == 0)
+				tuplestore_rescan(node->chain_tuplestore);
+			else
+				elog(ERROR, "chained aggregate rescan depth error");
+		}
+		node->chain_done = false;
+	}
+	else if (node->ss.ps.lefttree->chgParam == NULL)
 		ExecReScan(node->ss.ps.lefttree);
 }
 
 
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 8ce6411..612d611 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -772,6 +772,7 @@ _copyAgg(const Agg *from)
 	CopyPlanFields((const Plan *) from, (Plan *) newnode);
 
 	COPY_SCALAR_FIELD(aggstrategy);
+	COPY_SCALAR_FIELD(chain_head);
 	COPY_SCALAR_FIELD(numCols);
 	if (from->numCols > 0)
 	{
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 6e4efb4..279d8b9 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -632,6 +632,7 @@ _outAgg(StringInfo str, const Agg *node)
 	_outPlanInfo(str, (const Plan *) node);
 
 	WRITE_ENUM_FIELD(aggstrategy, AggStrategy);
+	WRITE_BOOL_FIELD(chain_head);
 	WRITE_INT_FIELD(numCols);
 
 	appendStringInfoString(str, " :grpColIdx");
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index 1a47f0f..96ea58f 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -1016,6 +1016,7 @@ create_unique_plan(PlannerInfo *root, UniquePath *best_path)
 								 groupColIdx,
 								 groupOperators,
 								 NIL,
+								 false,
 								 numGroups,
 								 subplan);
 	}
@@ -4266,7 +4267,7 @@ Agg *
 make_agg(PlannerInfo *root, List *tlist, List *qual,
 		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
 		 int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators,
-		 List *groupingSets,
+		 List *groupingSets, bool chain_head,
 		 long numGroups,
 		 Plan *lefttree)
 {
@@ -4276,6 +4277,7 @@ make_agg(PlannerInfo *root, List *tlist, List *qual,
 	QualCost	qual_cost;
 
 	node->aggstrategy = aggstrategy;
+	node->chain_head = chain_head;
 	node->numCols = numGroupCols;
 	node->grpColIdx = grpColIdx;
 	node->grpOperators = grpOperators;
@@ -4320,8 +4322,21 @@ make_agg(PlannerInfo *root, List *tlist, List *qual,
 	}
 	add_tlist_costs_to_plan(root, plan, tlist);
 
-	plan->qual = qual;
-	plan->targetlist = tlist;
+	if (aggstrategy == AGG_CHAINED)
+	{
+		Assert(!chain_head);
+		plan->plan_rows = lefttree->plan_rows;
+		plan->plan_width = lefttree->plan_width;
+
+		/* supplied tlist is ignored, this is dummy */
+		plan->targetlist = lefttree->targetlist;
+		plan->qual = NULL;
+	}
+	else
+	{
+		plan->qual = qual;
+		plan->targetlist = tlist;
+	}
 	plan->lefttree = lefttree;
 	plan->righttree = NULL;
 
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index f53cc0a..2fca072 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -67,6 +67,7 @@ typedef struct
 {
 	List	   *tlist;			/* preprocessed query targetlist */
 	List	   *activeWindows;	/* active windows, if any */
+	List	   *groupClause;	/* overrides parse->groupClause */
 } standard_qp_extra;
 
 /* Local functions */
@@ -1180,11 +1181,6 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		List	   *sub_tlist;
 		AttrNumber *groupColIdx = NULL;
 		bool		need_tlist_eval = true;
-		standard_qp_extra qp_extra;
-		RelOptInfo *final_rel;
-		Path	   *cheapest_path;
-		Path	   *sorted_path;
-		Path	   *best_path;
 		long		numGroups = 0;
 		AggClauseCosts agg_costs;
 		int			numGroupCols;
@@ -1194,7 +1190,14 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		WindowFuncLists *wflists = NULL;
 		List	   *activeWindows = NIL;
 		int			maxref = 0;
-		int		   *refmap = NULL;
+		List	   *refmaps = NIL;
+		List	   *rollup_lists = NIL;
+		List	   *rollup_groupclauses = NIL;
+		standard_qp_extra qp_extra;
+		RelOptInfo *final_rel;
+		Path	   *cheapest_path;
+		Path	   *sorted_path;
+		Path	   *best_path;
 
 		MemSet(&agg_costs, 0, sizeof(AggClauseCosts));
 
@@ -1205,33 +1208,9 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		if (parse->groupingSets)
 			parse->groupingSets = expand_grouping_sets(parse->groupingSets, -1);
 
-		if (parse->groupingSets)
+		if (parse->groupClause)
 		{
 			ListCell   *lc;
-			ListCell   *lc2;
-			int			ref = 0;
-			List	   *remaining_sets = NIL;
-			List	   *usable_sets = extract_rollup_sets(parse->groupingSets,
-														  parse->sortClause,
-														  &remaining_sets);
-
-			/*
-			 * TODO - if the grouping set list can't be handled as one rollup...
-			 */
-
-			if (remaining_sets != NIL)
-				elog(ERROR, "not implemented yet");
-
-			parse->groupingSets = usable_sets;
-
-			if (parse->groupClause)
-				preprocess_groupclause(root, linitial(parse->groupingSets));
-
-			/*
-			 * Now that we've pinned down an order for the groupClause for this
-			 * list of grouping sets, remap the entries in the grouping sets
-			 * from sortgrouprefs to plain indices into the groupClause.
-			 */
 
 			foreach(lc, parse->groupClause)
 			{
@@ -1239,29 +1218,61 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 				if (gc->tleSortGroupRef > maxref)
 					maxref = gc->tleSortGroupRef;
 			}
+		}
 
-			refmap = palloc0(sizeof(int) * (maxref + 1));
+		if (parse->groupingSets)
+		{
+			ListCell   *lc;
+			ListCell   *lc2;
+			List	   *sets = parse->groupingSets;
 
-			foreach(lc, parse->groupClause)
+			do
 			{
-				SortGroupClause *gc = lfirst(lc);
-				refmap[gc->tleSortGroupRef] = ++ref;
-			}
+				List   *remaining_sets = NIL;
+				List   *usable_sets = extract_rollup_sets(sets,
+														  parse->sortClause,
+														  &remaining_sets);
+				List   *groupclause = preprocess_groupclause(root, linitial(usable_sets));
+				int		ref = 0;
+				int	   *refmap;
 
-			foreach(lc, usable_sets)
-			{
-				foreach(lc2, (List *) lfirst(lc))
+				/*
+				 * Now that we've pinned down an order for the groupClause for this
+				 * list of grouping sets, remap the entries in the grouping sets
+				 * from sortgrouprefs to plain indices into the groupClause.
+				 */
+
+				refmap = palloc0(sizeof(int) * (maxref + 1));
+
+				foreach(lc, groupclause)
 				{
-					Assert(refmap[lfirst_int(lc2)] > 0);
-					lfirst_int(lc2) = refmap[lfirst_int(lc2)] - 1;
+					SortGroupClause *gc = lfirst(lc);
+					refmap[gc->tleSortGroupRef] = ++ref;
 				}
+
+				foreach(lc, usable_sets)
+				{
+					foreach(lc2, (List *) lfirst(lc))
+					{
+						Assert(refmap[lfirst_int(lc2)] > 0);
+						lfirst_int(lc2) = refmap[lfirst_int(lc2)] - 1;
+					}
+				}
+
+				rollup_lists = lcons(usable_sets, rollup_lists);
+				rollup_groupclauses = lcons(groupclause, rollup_groupclauses);
+				refmaps = lcons(refmap, refmaps);
+
+				sets = remaining_sets;
 			}
+			while (sets);
 		}
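As an aside for readers of the patch: the do/while loop above repeatedly peels one sortable "rollup" off the remaining grouping sets, so that each chain can be computed in a single pass over sorted input. A rough Python sketch of the idea (illustrative only — not the actual extract_rollup_sets logic, which also considers the sort clause):

```python
def extract_rollups(sets):
    """Partition grouping sets into chains ("rollups"): within each
    chain every set is a subset of the previous, larger one, so the
    whole chain can be computed in one pass over sorted input.
    Rough illustrative sketch only."""
    remaining = sorted(sets, key=len, reverse=True)
    rollups = []
    while remaining:
        chain = [remaining.pop(0)]
        rest = []
        for s in remaining:
            if set(s) <= set(chain[-1]):
                chain.append(s)      # nested in the chain so far
            else:
                rest.append(s)       # needs a separate pass
        remaining = rest
        rollups.append(chain)
    return rollups
```

For example, the sets (a,b), (a), (c), () need two passes: (a,b) ⊇ (a) ⊇ () form one rollup, and (c) is left over for a second.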
 		else
 		{
 			/* Preprocess GROUP BY clause, if any */
 			if (parse->groupClause)
-				preprocess_groupclause(root, NIL);
+				parse->groupClause = preprocess_groupclause(root, NIL);
+			rollup_groupclauses = list_make1(parse->groupClause);
 		}
 
 		numGroupCols = list_length(parse->groupClause);
@@ -1325,9 +1336,6 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 			preprocess_minmax_aggregates(root, tlist);
 		}
 
-		if (refmap)
-			pfree(refmap);
-
 		/* Make tuple_fraction accessible to lower-level routines */
 		root->tuple_fraction = tuple_fraction;
 
@@ -1350,6 +1358,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		/* Set up data needed by standard_qp_callback */
 		qp_extra.tlist = tlist;
 		qp_extra.activeWindows = activeWindows;
+		qp_extra.groupClause = linitial(rollup_groupclauses);
 
 		/*
 		 * Generate the best unsorted and presorted paths for this Query (but
@@ -1376,6 +1385,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		 * to describe the fraction of the underlying un-aggregated tuples
 		 * that will be fetched.
 		 */
+
 		dNumGroups = 1;			/* in case not grouping */
 
 		if (parse->groupClause)
@@ -1411,6 +1421,9 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 			if (tuple_fraction >= 1.0)
 				tuple_fraction /= dNumGroups;
 
+			if (list_length(rollup_lists) > 1)
+				tuple_fraction = 0.0;
+
 			/*
 			 * If both GROUP BY and ORDER BY are specified, we will need two
 			 * levels of sort --- and, therefore, certainly need to read all
@@ -1434,6 +1447,8 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 			 * set to 1).
 			 */
 			tuple_fraction = 0.0;
+			if (parse->groupingSets)
+				dNumGroups = list_length(parse->groupingSets);
 		}
 		else if (parse->distinctClause)
 		{
@@ -1614,7 +1629,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 
 			/* Detect if we'll need an explicit sort for grouping */
 			if (parse->groupClause && !use_hashed_grouping &&
-			  !pathkeys_contained_in(root->group_pathkeys, current_pathkeys))
+				!pathkeys_contained_in(root->group_pathkeys, current_pathkeys))
 			{
 				need_sort_for_grouping = true;
 
@@ -1689,8 +1704,9 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 												&agg_costs,
 												numGroupCols,
 												groupColIdx,
-									extract_grouping_ops(parse->groupClause),
+												extract_grouping_ops(parse->groupClause),
 												NIL,
+												false,
 												numGroups,
 												result_plan);
 				/* Hashed aggregation produces randomly-ordered results */
@@ -1698,45 +1714,94 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 			}
 			else if (parse->hasAggs || (parse->groupingSets && parse->groupClause))
 			{
-				/* Plain aggregate plan --- sort if needed */
-				AggStrategy aggstrategy;
+				bool		is_chained = false;
+
+				/*
+				 * If we need multiple grouping nodes, start stacking them up;
+				 * all except the last are chained.
+				 */
 
-				if (parse->groupClause)
+				do
 				{
-					if (need_sort_for_grouping)
+					List	   *groupClause = linitial(rollup_groupclauses);
+					List	   *gsets = rollup_lists ? linitial(rollup_lists) : NIL;
+					int		   *refmap = refmaps ? linitial(refmaps) : NULL;
+					AttrNumber *new_grpColIdx = groupColIdx;
+					ListCell   *lc;
+					int			i;
+					AggStrategy aggstrategy = AGG_CHAINED;
+
+					if (groupClause)
+					{
+						/* need to remap groupColIdx */
+
+						if (gsets)
+						{
+							Assert(refmap);
+
+							new_grpColIdx = palloc0(sizeof(AttrNumber) * list_length(linitial(gsets)));
+
+							i = 0;
+							foreach(lc, parse->groupClause)
+							{
+								int j = refmap[((SortGroupClause *)lfirst(lc))->tleSortGroupRef];
+								if (j > 0)
+									new_grpColIdx[j - 1] = groupColIdx[i];
+								++i;
+							}
+						}
+
+						if (need_sort_for_grouping)
+						{
+							result_plan = (Plan *)
+								make_sort_from_groupcols(root,
+														 groupClause,
+														 new_grpColIdx,
+														 result_plan);
+						}
+						else
+							need_sort_for_grouping = true;
+
+						if (list_length(rollup_groupclauses) == 1)
+						{
+							aggstrategy = AGG_SORTED;
+							if (!is_chained)
+								current_pathkeys = root->group_pathkeys;
+						}
+						else
+							current_pathkeys = NIL;
+					}
+					else
 					{
-						result_plan = (Plan *)
-							make_sort_from_groupcols(root,
-													 parse->groupClause,
-													 groupColIdx,
-													 result_plan);
-						current_pathkeys = root->group_pathkeys;
+						aggstrategy = AGG_PLAIN;
+						current_pathkeys = NIL;
 					}
-					aggstrategy = AGG_SORTED;
 
-					/*
-					 * The AGG node will not change the sort ordering of its
-					 * groups, so current_pathkeys describes the result too.
-					 */
+					result_plan = (Plan *) make_agg(root,
+													tlist,
+													(List *) parse->havingQual,
+													aggstrategy,
+													&agg_costs,
+													gsets ? list_length(linitial(gsets)) : numGroupCols,
+													new_grpColIdx,
+													extract_grouping_ops(groupClause),
+													gsets,
+													is_chained && (aggstrategy != AGG_CHAINED),
+													numGroups,
+													result_plan);
+
+					is_chained = true;
+
+					if (refmap)
+						pfree(refmap);
+					if (rollup_lists)
+						rollup_lists = list_delete_first(rollup_lists);
+					if (refmaps)
+						refmaps = list_delete_first(refmaps);
+
+					rollup_groupclauses = list_delete_first(rollup_groupclauses);
 				}
-				else
-				{
-					aggstrategy = AGG_PLAIN;
-					/* Result will have no sort order */
-					current_pathkeys = NIL;
-				}
-
-				result_plan = (Plan *) make_agg(root,
-												tlist,
-												(List *) parse->havingQual,
-												aggstrategy,
-												&agg_costs,
-												numGroupCols,
-												groupColIdx,
-									extract_grouping_ops(parse->groupClause),
-												parse->groupingSets,
-												numGroups,
-												result_plan);
+				while (rollup_groupclauses);
 			}
 			else if (parse->groupClause)
 			{
@@ -2031,6 +2096,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 													result_plan->targetlist),
 								 extract_grouping_ops(parse->distinctClause),
 											NIL,
+											false,
 											numDistinctRows,
 											result_plan);
 			/* Hashed aggregation produces randomly-ordered results */
@@ -2864,11 +2930,11 @@ standard_qp_callback(PlannerInfo *root, void *extra)
 	 * sortClause is certainly sort-able, but GROUP BY and DISTINCT might not
 	 * be, in which case we just leave their pathkeys empty.
 	 */
-	if (parse->groupClause &&
-		grouping_is_sortable(parse->groupClause))
+	if (qp_extra->groupClause &&
+		grouping_is_sortable(qp_extra->groupClause))
 		root->group_pathkeys =
 			make_pathkeys_for_sortclauses(root,
-										  parse->groupClause,
+										  qp_extra->groupClause,
 										  tlist);
 	else
 		root->group_pathkeys = NIL;
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index 346c84d..2be5f29 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -655,8 +655,16 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
 			}
 			break;
 		case T_Agg:
-			set_upper_references(root, plan, rtoffset);
-			set_group_vars(root, (Agg *) plan);
+			if (((Agg *) plan)->aggstrategy == AGG_CHAINED)
+			{
+				/* chained agg does not evaluate tlist */
+				set_dummy_tlist_references(plan, rtoffset);
+			}
+			else
+			{
+				set_upper_references(root, plan, rtoffset);
+				set_group_vars(root, (Agg *) plan);
+			}
 			break;
 		case T_Group:
 			set_upper_references(root, plan, rtoffset);
@@ -1288,21 +1296,30 @@ fix_scan_expr_walker(Node *node, fix_scan_expr_context *context)
  *    Modify any Var references in the target list of a non-trivial
  *    (i.e. contains grouping sets) Agg node to use GroupedVar instead,
  *    which will conditionally replace them with nulls at runtime.
+ *    Also fill in the cols list of any GROUPING() node.
  */
 static void
 set_group_vars(PlannerInfo *root, Agg *agg)
 {
 	set_group_vars_context context;
-	int i;
-	Bitmapset *cols = NULL;
+	AttrNumber *groupColIdx = root->groupColIdx;
+	int			numCols = list_length(root->parse->groupClause);
+	int 		i;
+	Bitmapset  *cols = NULL;
 
 	if (!agg->groupingSets)
 		return;
 
+	if (!groupColIdx)
+	{
+		Assert(numCols == agg->numCols);
+		groupColIdx = agg->grpColIdx;
+	}
+
 	context.root = root;
 
-	for (i = 0; i < agg->numCols; ++i)
-		cols = bms_add_member(cols, agg->grpColIdx[i]);
+	for (i = 0; i < numCols; ++i)
+		cols = bms_add_member(cols, groupColIdx[i]);
 
 	context.groupedcols = cols;
 
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index e0a2ca7..e5befe3 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -79,7 +79,8 @@ static Node *process_sublinks_mutator(Node *node,
 static Bitmapset *finalize_plan(PlannerInfo *root,
 			  Plan *plan,
 			  Bitmapset *valid_params,
-			  Bitmapset *scan_params);
+			  Bitmapset *scan_params,
+			  Agg *agg_chain_head);
 static bool finalize_primnode(Node *node, finalize_primnode_context *context);
 
 
@@ -2091,7 +2092,7 @@ SS_finalize_plan(PlannerInfo *root, Plan *plan, bool attach_initplans)
 	/*
 	 * Now recurse through plan tree.
 	 */
-	(void) finalize_plan(root, plan, valid_params, NULL);
+	(void) finalize_plan(root, plan, valid_params, NULL, NULL);
 
 	bms_free(valid_params);
 
@@ -2142,7 +2143,7 @@ SS_finalize_plan(PlannerInfo *root, Plan *plan, bool attach_initplans)
  */
 static Bitmapset *
 finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
-			  Bitmapset *scan_params)
+			  Bitmapset *scan_params, Agg *agg_chain_head)
 {
 	finalize_primnode_context context;
 	int			locally_added_param;
@@ -2351,7 +2352,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2367,7 +2369,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2383,7 +2386,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2399,7 +2403,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2415,7 +2420,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2482,8 +2488,30 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 							  &context);
 			break;
 
-		case T_Hash:
 		case T_Agg:
+			{
+				Agg	   *agg = (Agg *) plan;
+
+				if (agg->aggstrategy == AGG_CHAINED)
+				{
+					Assert(agg_chain_head);
+
+					/*
+					 * our real tlist and qual are the ones in the chain head,
+					 * not the local ones which are dummy for passthrough.
+					 * Fortunately we can call finalize_primnode more than
+					 * once.
+					 */
+
+					finalize_primnode((Node *) agg_chain_head->plan.targetlist, &context);
+					finalize_primnode((Node *) agg_chain_head->plan.qual, &context);
+				}
+				else if (agg->chain_head)
+					agg_chain_head = agg;
+			}
+			break;
+
+		case T_Hash:
 		case T_Material:
 		case T_Sort:
 		case T_Unique:
@@ -2500,7 +2528,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 	child_params = finalize_plan(root,
 								 plan->lefttree,
 								 valid_params,
-								 scan_params);
+								 scan_params,
+								 agg_chain_head);
 	context.paramids = bms_add_members(context.paramids, child_params);
 
 	if (nestloop_params)
@@ -2509,7 +2538,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 		child_params = finalize_plan(root,
 									 plan->righttree,
 									 bms_union(nestloop_params, valid_params),
-									 scan_params);
+									 scan_params,
+									 agg_chain_head);
 		/* ... and they don't count as parameters used at my level */
 		child_params = bms_difference(child_params, nestloop_params);
 		bms_free(nestloop_params);
@@ -2520,7 +2550,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 		child_params = finalize_plan(root,
 									 plan->righttree,
 									 valid_params,
-									 scan_params);
+									 scan_params,
+									 agg_chain_head);
 	}
 	context.paramids = bms_add_members(context.paramids, child_params);
 
diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c
index 3c71d7f..ce35226 100644
--- a/src/backend/optimizer/prep/prepunion.c
+++ b/src/backend/optimizer/prep/prepunion.c
@@ -774,6 +774,7 @@ make_union_unique(SetOperationStmt *op, Plan *plan,
 													   plan->targetlist),
 								 extract_grouping_ops(groupList),
 								 NIL,
+								 false,
 								 numGroups,
 								 plan);
 		/* Hashed aggregation produces randomly-ordered results */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index ee1fe74..cbc7b0c 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -409,6 +409,11 @@ typedef struct EState
 	HeapTuple  *es_epqTuple;	/* array of EPQ substitute tuples */
 	bool	   *es_epqTupleSet; /* true if EPQ tuple is provided */
 	bool	   *es_epqScanDone; /* true if EPQ tuple has been fetched */
+
+	/*
+	 * This is for linking chained aggregate nodes
+	 */
+	struct AggState	   *agg_chain_head;
 } EState;
 
 
@@ -1729,6 +1734,7 @@ typedef struct AggState
 	AggStatePerAgg curperagg;	/* identifies currently active aggregate */
 	bool        input_done;     /* indicates end of input */
 	bool		agg_done;		/* indicates completion of Agg scan */
+	bool		chain_done;		/* indicates completion of chained fetch */
 	int			projected_set;	/* The last projected grouping set */
 	int			current_set;	/* The current grouping set being evaluated */
 	Bitmapset **grouped_cols;   /* column groupings for rollup */
@@ -1742,6 +1748,10 @@ typedef struct AggState
 	List	   *hash_needed;	/* list of columns needed in hash table */
 	bool		table_filled;	/* hash table filled yet? */
 	TupleHashIterator hashiter; /* for iterating through hash table */
+	int			chain_depth;	/* number of chained child nodes */
+	int			chain_rescan;	/* rescan indicator */
+	struct AggState	*chain_head;	/* head node of the aggregate chain */
+	Tuplestorestate *chain_tuplestore;	/* chained output collected at head */
 } AggState;
 
 /* ----------------
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index 077ae9f..d558ff8 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -620,6 +620,7 @@ typedef enum AggStrategy
 {
 	AGG_PLAIN,					/* simple agg across all input rows */
 	AGG_SORTED,					/* grouped agg, input must be sorted */
+	AGG_CHAINED,				/* chained agg, input must be sorted */
 	AGG_HASHED					/* grouped agg, use internal hashtable */
 } AggStrategy;
 
@@ -627,6 +628,7 @@ typedef struct Agg
 {
 	Plan		plan;
 	AggStrategy aggstrategy;
+	bool		chain_head;
 	int			numCols;		/* number of grouping columns */
 	AttrNumber *grpColIdx;		/* their indexes in the target list */
 	Oid		   *grpOperators;	/* equality operators to compare with */
diff --git a/src/include/optimizer/planmain.h b/src/include/optimizer/planmain.h
index 64f3aa3..20b7493 100644
--- a/src/include/optimizer/planmain.h
+++ b/src/include/optimizer/planmain.h
@@ -59,6 +59,7 @@ extern Agg *make_agg(PlannerInfo *root, List *tlist, List *qual,
 		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
 		 int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators,
 		 List *groupingSets,
+		 bool chain_head,
 		 long numGroups,
 		 Plan *lefttree);
 extern WindowAgg *make_windowagg(PlannerInfo *root, List *tlist,
diff --git a/src/test/regress/expected/groupingsets.out b/src/test/regress/expected/groupingsets.out
index 2d121c7..d426018 100644
--- a/src/test/regress/expected/groupingsets.out
+++ b/src/test/regress/expected/groupingsets.out
@@ -281,6 +281,29 @@ select(select (select grouping(c) from (values (1)) v2(c) GROUP BY c) from (valu
 (3 rows)
 
 -- Combinations of operations
+select a, b, c, d from gstest2 group by rollup(a,b),grouping sets(c,d);
+ a | b | c | d 
+---+---+---+---
+ 1 | 1 | 1 |  
+ 1 |   | 1 |  
+   |   | 1 |  
+ 1 | 1 | 2 |  
+ 1 | 2 | 2 |  
+ 1 |   | 2 |  
+ 2 | 2 | 2 |  
+ 2 |   | 2 |  
+   |   | 2 |  
+ 1 | 1 |   | 1
+ 1 |   |   | 1
+   |   |   | 1
+ 1 | 1 |   | 2
+ 1 | 2 |   | 2
+ 1 |   |   | 2
+ 2 | 2 |   | 2
+ 2 |   |   | 2
+   |   |   | 2
+(18 rows)
+
 select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
  a | b 
 ---+---
@@ -288,6 +311,99 @@ select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
  2 | 3
 (2 rows)
 
+-- Tests for chained aggregates
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by grouping sets ((a,b),(a+1,b+1),(a+2,b+2));
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 | 1 |        0 |  21 |     2 |  11
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+ 4 | 1 |        0 |  37 |     2 |  19
+   |   |        3 |  21 |     2 |  11
+   |   |        3 |  25 |     2 |  13
+   |   |        3 |  14 |     1 |  14
+   |   |        3 |  15 |     1 |  15
+   |   |        3 |  16 |     1 |  16
+   |   |        3 |  17 |     1 |  17
+   |   |        3 |  37 |     2 |  19
+   |   |        3 |  21 |     2 |  11
+   |   |        3 |  25 |     2 |  13
+   |   |        3 |  14 |     1 |  14
+   |   |        3 |  15 |     1 |  15
+   |   |        3 |  16 |     1 |  16
+   |   |        3 |  17 |     1 |  17
+   |   |        3 |  37 |     2 |  19
+(21 rows)
+
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP((e+1),(f+1));
+ grouping 
+----------
+        0
+        0
+        0
+(3 rows)
+
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY CUBE((e+1));
+ grouping 
+----------
+        0
+        0
+(2 rows)
+
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by cube (a,b) order by rsum, a, b;
+ a | b | sum | rsum 
+---+---+-----+------
+ 1 | 1 |   8 |    8
+ 1 | 2 |   2 |   10
+ 1 |   |  10 |   20
+ 2 | 2 |   2 |   22
+ 2 |   |   2 |   24
+   | 1 |   8 |   32
+   | 2 |   4 |   36
+   |   |  12 |   48
+(8 rows)
+
+select a, b, sum(c) from (values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),(2,3,15),(3,3,16),(3,4,17),(4,1,18),(4,1,19)) v(a,b,c) group by rollup (a,b);
+ a | b | sum 
+---+---+-----
+ 1 | 1 |  21
+ 1 | 2 |  25
+ 1 | 3 |  14
+ 1 |   |  60
+ 2 | 3 |  15
+ 2 |   |  15
+ 3 | 3 |  16
+ 3 | 4 |  17
+ 3 |   |  33
+ 4 | 1 |  37
+ 4 |   |  37
+   |   | 145
+(12 rows)
+
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by cube (a,b);
+ a | b | sum 
+---+---+-----
+ 1 | 1 |   1
+ 1 | 2 |   1
+ 1 | 3 |   1
+ 1 |   |   3
+ 2 | 1 |   2
+ 2 | 2 |   2
+ 2 | 3 |   2
+ 2 |   |   6
+   |   |   9
+   | 1 |   3
+   | 2 |   3
+   | 3 |   3
+(12 rows)
+
 -- Agg level check. This query should error out.
 select (select grouping(a,b) from gstest2) from gstest2 group by a,b;
 ERROR:  Arguments to GROUPING must be grouping expressions of the associated query level
@@ -358,4 +474,87 @@ group by rollup(ten);
      |    
 (11 rows)
 
+-- More rescan tests
+select * from (values (1),(2)) v(a) left join lateral (select v.a, four, ten, count(*) from onek group by cube(four,ten)) s on true;
+ a | a | four | ten | count 
+---+---+------+-----+-------
+ 1 | 1 |    0 |   0 |    50
+ 1 | 1 |    0 |   2 |    50
+ 1 | 1 |    0 |   4 |    50
+ 1 | 1 |    0 |   6 |    50
+ 1 | 1 |    0 |   8 |    50
+ 1 | 1 |    0 |     |   250
+ 1 | 1 |    1 |   1 |    50
+ 1 | 1 |    1 |   3 |    50
+ 1 | 1 |    1 |   5 |    50
+ 1 | 1 |    1 |   7 |    50
+ 1 | 1 |    1 |   9 |    50
+ 1 | 1 |    1 |     |   250
+ 1 | 1 |    2 |   0 |    50
+ 1 | 1 |    2 |   2 |    50
+ 1 | 1 |    2 |   4 |    50
+ 1 | 1 |    2 |   6 |    50
+ 1 | 1 |    2 |   8 |    50
+ 1 | 1 |    2 |     |   250
+ 1 | 1 |    3 |   1 |    50
+ 1 | 1 |    3 |   3 |    50
+ 1 | 1 |    3 |   5 |    50
+ 1 | 1 |    3 |   7 |    50
+ 1 | 1 |    3 |   9 |    50
+ 1 | 1 |    3 |     |   250
+ 1 | 1 |      |     |  1000
+ 1 | 1 |      |   0 |   100
+ 1 | 1 |      |   1 |   100
+ 1 | 1 |      |   2 |   100
+ 1 | 1 |      |   3 |   100
+ 1 | 1 |      |   4 |   100
+ 1 | 1 |      |   5 |   100
+ 1 | 1 |      |   6 |   100
+ 1 | 1 |      |   7 |   100
+ 1 | 1 |      |   8 |   100
+ 1 | 1 |      |   9 |   100
+ 2 | 2 |    0 |   0 |    50
+ 2 | 2 |    0 |   2 |    50
+ 2 | 2 |    0 |   4 |    50
+ 2 | 2 |    0 |   6 |    50
+ 2 | 2 |    0 |   8 |    50
+ 2 | 2 |    0 |     |   250
+ 2 | 2 |    1 |   1 |    50
+ 2 | 2 |    1 |   3 |    50
+ 2 | 2 |    1 |   5 |    50
+ 2 | 2 |    1 |   7 |    50
+ 2 | 2 |    1 |   9 |    50
+ 2 | 2 |    1 |     |   250
+ 2 | 2 |    2 |   0 |    50
+ 2 | 2 |    2 |   2 |    50
+ 2 | 2 |    2 |   4 |    50
+ 2 | 2 |    2 |   6 |    50
+ 2 | 2 |    2 |   8 |    50
+ 2 | 2 |    2 |     |   250
+ 2 | 2 |    3 |   1 |    50
+ 2 | 2 |    3 |   3 |    50
+ 2 | 2 |    3 |   5 |    50
+ 2 | 2 |    3 |   7 |    50
+ 2 | 2 |    3 |   9 |    50
+ 2 | 2 |    3 |     |   250
+ 2 | 2 |      |     |  1000
+ 2 | 2 |      |   0 |   100
+ 2 | 2 |      |   1 |   100
+ 2 | 2 |      |   2 |   100
+ 2 | 2 |      |   3 |   100
+ 2 | 2 |      |   4 |   100
+ 2 | 2 |      |   5 |   100
+ 2 | 2 |      |   6 |   100
+ 2 | 2 |      |   7 |   100
+ 2 | 2 |      |   8 |   100
+ 2 | 2 |      |   9 |   100
+(70 rows)
+
+select array(select row(v.a,s1.*) from (select two,four, count(*) from onek group by cube(two,four)) s1) from (values (1),(2)) v(a);
+                                                                        array                                                                         
+------------------------------------------------------------------------------------------------------------------------------------------------------
+ {"(1,0,0,250)","(1,0,2,250)","(1,0,,500)","(1,1,1,250)","(1,1,3,250)","(1,1,,500)","(1,,,1000)","(1,,0,250)","(1,,1,250)","(1,,2,250)","(1,,3,250)"}
+ {"(2,0,0,250)","(2,0,2,250)","(2,0,,500)","(2,1,1,250)","(2,1,3,250)","(2,1,,500)","(2,,,1000)","(2,,0,250)","(2,,1,250)","(2,,2,250)","(2,,3,250)"}
+(2 rows)
+
 -- end
diff --git a/src/test/regress/sql/groupingsets.sql b/src/test/regress/sql/groupingsets.sql
index bc571ff..5404cb6 100644
--- a/src/test/regress/sql/groupingsets.sql
+++ b/src/test/regress/sql/groupingsets.sql
@@ -108,8 +108,22 @@ select(select (select grouping(e,f) from (values (1)) v2(c)) from (values (1,2))
 select(select (select grouping(c) from (values (1)) v2(c) GROUP BY c) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
 
 -- Combinations of operations
+select a, b, c, d from gstest2 group by rollup(a,b),grouping sets(c,d);
 select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
 
+-- Tests for chained aggregates
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by grouping sets ((a,b),(a+1,b+1),(a+2,b+2));
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP((e+1),(f+1));
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY CUBE((e+1));
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by cube (a,b) order by rsum, a, b;
+select a, b, sum(c) from (values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),(2,3,15),(3,3,16),(3,4,17),(4,1,18),(4,1,19)) v(a,b,c) group by rollup (a,b);
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by cube (a,b);
+
+
 -- Agg level check. This query should error out.
 select (select grouping(a,b) from gstest2) from gstest2 group by a,b;
 
@@ -125,4 +139,8 @@ having exists (select 1 from onek b where sum(distinct a.four) = b.four);
 select ten, sum(distinct four) filter (where four::text ~ '123') from onek a
 group by rollup(ten);
 
+-- More rescan tests
+select * from (values (1),(2)) v(a) left join lateral (select v.a, four, ten, count(*) from onek group by cube(four,ten)) s on true;
+select array(select row(v.a,s1.*) from (select two,four, count(*) from onek group by cube(two,four)) s1) from (values (1),(2)) v(a);
+
 -- end
Attachment: gsp-doc.patch (text/x-patch)
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index daa56e9..aab5055 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -11989,7 +11989,9 @@ NULL baz</literallayout>(3 rows)</entry>
    <xref linkend="functions-aggregate-statistics-table">.
    The built-in ordered-set aggregate functions
    are listed in <xref linkend="functions-orderedset-table"> and
-   <xref linkend="functions-hypothetical-table">.
+   <xref linkend="functions-hypothetical-table">.  Grouping operations,
+   which are closely related to aggregate functions, are listed in
+   <xref linkend="functions-grouping-table">.
    The special syntax considerations for aggregate
    functions are explained in <xref linkend="syntax-aggregates">.
    Consult <xref linkend="tutorial-agg"> for additional introductory
@@ -13036,6 +13038,72 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab;
    to the rule specified in the <literal>ORDER BY</> clause.
   </para>
 
+  <table id="functions-grouping-table">
+   <title>Grouping Operations</title>
+
+   <tgroup cols="3">
+    <thead>
+     <row>
+      <entry>Function</entry>
+      <entry>Return Type</entry>
+      <entry>Description</entry>
+     </row>
+    </thead>
+
+    <tbody>
+
+     <row>
+      <entry>
+       <indexterm>
+        <primary>GROUPING</primary>
+       </indexterm>
+       <function>GROUPING(<replaceable class="parameter">args...</replaceable>)</function>
+      </entry>
+      <entry>
+       <type>integer</type>
+      </entry>
+      <entry>
+       Integer bitmask indicating which arguments are not being included in the current
+       grouping set
+      </entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+
+   <para>
+    Grouping operations are used in conjunction with grouping sets (see
+    <xref linkend="queries-grouping-sets">) to distinguish result rows.  The
+    arguments to the <literal>GROUPING</> operation are not actually evaluated,
+    but they must exactly match expressions given in the <literal>GROUP BY</>
+    clause of the current query level.  Bits are assigned with the rightmost
+    argument being the least-significant bit; each bit is 0 if the corresponding
+    expression is included in the grouping criteria of the grouping set generating
+    the result row, and 1 if it is not.  For example:
+<screen>
+<prompt>=&gt;</> <userinput>SELECT * FROM items_sold;</>
+ make  | model | sales
+-------+-------+-------
+ Foo   | GT    |  10
+ Foo   | Tour  |  20
+ Bar   | City  |  15
+ Bar   | Sport |  5
+(4 rows)
+
+<prompt>=&gt;</> <userinput>SELECT make, model, GROUPING(make,model), sum(sales) FROM items_sold GROUP BY ROLLUP(make,model);</>
+ make  | model | grouping | sum
+-------+-------+----------+-----
+ Foo   | GT    |        0 | 10
+ Foo   | Tour  |        0 | 20
+ Bar   | City  |        0 | 15
+ Bar   | Sport |        0 | 5
+ Foo   |       |        1 | 30
+ Bar   |       |        1 | 20
+       |       |        3 | 50
+(7 rows)
+</screen>
+   </para>
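The bit assignment described in the documentation text above can be sketched in a few lines of Python (an illustrative sketch only, not part of the patch): the rightmost GROUPING argument maps to the least-significant bit, and a bit is set when that argument is absent from the current grouping set.

```python
def grouping_value(args, grouping_set):
    """Compute the GROUPING() bitmask: the rightmost argument is the
    least-significant bit; a bit is 1 when the corresponding
    expression is NOT part of the current grouping set."""
    value = 0
    for expr in args:  # leftmost argument ends up most significant
        value = (value << 1) | (0 if expr in grouping_set else 1)
    return value

# ROLLUP(make, model) generates the sets (make, model), (make), ():
assert grouping_value(["make", "model"], {"make", "model"}) == 0
assert grouping_value(["make", "model"], {"make"}) == 1
assert grouping_value(["make", "model"], set()) == 3
```

This matches the sample output in the docs: fully grouped rows report 0, the (make)-only rows report 1, and the grand total reports 3.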
+
  </sect1>
 
  <sect1 id="functions-window">
diff --git a/doc/src/sgml/queries.sgml b/doc/src/sgml/queries.sgml
index 9bf3136..1ff920f 100644
--- a/doc/src/sgml/queries.sgml
+++ b/doc/src/sgml/queries.sgml
@@ -1141,6 +1141,184 @@ SELECT product_id, p.name, (sum(s.units) * (p.price - p.cost)) AS profit
    </para>
   </sect2>
 
+  <sect2 id="queries-grouping-sets">
+   <title><literal>GROUPING SETS</>, <literal>CUBE</>, and <literal>ROLLUP</></title>
+
+   <indexterm zone="queries-grouping-sets">
+    <primary>GROUPING SETS</primary>
+   </indexterm>
+   <indexterm zone="queries-grouping-sets">
+    <primary>CUBE</primary>
+   </indexterm>
+   <indexterm zone="queries-grouping-sets">
+    <primary>ROLLUP</primary>
+   </indexterm>
+   <indexterm zone="queries-grouping-sets">
+    <primary>grouping sets</primary>
+   </indexterm>
+
+   <para>
+    More complex grouping operations than those described above are possible
+    using the concept of <firstterm>grouping sets</>.  The data selected by
+    the <literal>FROM</> and <literal>WHERE</> clauses is grouped separately
+    by each specified grouping set, aggregates computed for each group just as
+    for simple <literal>GROUP BY</> clauses, and then the results returned.
+    For example:
+<screen>
+<prompt>=&gt;</> <userinput>SELECT * FROM items_sold;</>
+ brand | size | sales
+-------+------+-------
+ Foo   | L    |  10
+ Foo   | M    |  20
+ Bar   | M    |  15
+ Bar   | L    |  5
+(4 rows)
+
+<prompt>=&gt;</> <userinput>SELECT brand, size, sum(sales) FROM items_sold GROUP BY GROUPING SETS ((brand), (size), ());</>
+ brand | size | sum
+-------+------+-----
+ Foo   |      |  30
+ Bar   |      |  20
+       | L    |  15
+       | M    |  35
+       |      |  50
+(5 rows)
+</screen>
+   </para>
+
+   <para>
+    Each sublist of <literal>GROUPING SETS</> may specify zero or more columns
+    or expressions and is interpreted the same way as though it were directly
+    in the <literal>GROUP BY</> clause.  An empty grouping set means that all
+    rows are aggregated down to a single group (which is output even if no
+    input rows were present), as described above for the case of aggregate
+    functions with no <literal>GROUP BY</> clause.
+   </para>
+
+   <para>
+    References to the grouping columns or expressions are replaced
+    by <literal>NULL</> values in result rows for grouping sets in which those
+    columns do not appear.  To distinguish which grouping a particular output
+    row resulted from, see <xref linkend="functions-grouping-table">.
+   </para>
+
+   <para>
+    A shorthand notation is provided for specifying two common types of grouping set.
+    A clause of the form
+<programlisting>
+ROLLUP ( <replaceable>e1</>, <replaceable>e2</>, <replaceable>e3</>, ... )
+</programlisting>
+    represents the given list of expressions and all prefixes of the list including
+    the empty list; thus it is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( <replaceable>e1</>, <replaceable>e2</>, <replaceable>e3</>, ... ),
+    ...
+    ( <replaceable>e1</>, <replaceable>e2</> ),
+    ( <replaceable>e1</> ),
+    ( )
+)
+</programlisting>
+    This is commonly used for analysis over hierarchical data; e.g. total
+    salary by department, division, and company-wide total.
+   </para>
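The expansion rule above is just "all prefixes of the list, longest first". A one-line Python sketch of it:

```python
def rollup(exprs):
    """All prefixes of the expression list, longest first, down to ()."""
    return [tuple(exprs[:n]) for n in range(len(exprs), -1, -1)]

# ROLLUP ( e1, e2, e3 ) expands to these grouping sets:
sets = rollup(["e1", "e2", "e3"])
# → [('e1', 'e2', 'e3'), ('e1', 'e2'), ('e1',), ()]
```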
+
+   <para>
+    A clause of the form
+<programlisting>
+CUBE ( <replaceable>e1</>, <replaceable>e2</>, ... )
+</programlisting>
+    represents the given list and all of its possible subsets (i.e. the power
+    set).  Thus
+<programlisting>
+CUBE ( a, b, c )
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( a, b, c ),
+    ( a, b    ),
+    ( a,    c ),
+    ( a       ),
+    (    b, c ),
+    (    b    ),
+    (       c ),
+    (         ),
+)
+</programlisting>
+   </para>
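Likewise, the CUBE expansion is the power set of the expression list, which can be sketched with the standard library (the order in which the sets are listed is not significant):

```python
from itertools import combinations

def cube(exprs):
    """The power set of the expression list: every subset, including ()."""
    return [subset
            for n in range(len(exprs), -1, -1)
            for subset in combinations(exprs, n)]

# CUBE ( a, b, c ) expands to 2^3 = 8 grouping sets.
sets = cube(["a", "b", "c"])
```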
+
+   <para>
+    The individual elements of a <literal>CUBE</> or <literal>ROLLUP</>
+    clause may be either individual expressions, or sub-lists of elements in
+    parentheses.  In the latter case, the sub-lists are treated as single
+    units for the purposes of generating the individual grouping sets.
+    For example:
+<programlisting>
+CUBE ( (a,b), (c,d) )
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( a, b, c, d ),
+    ( a, b       ),
+    (       c, d ),
+    (            )
+)
+</programlisting>
+    and
+<programlisting>
+ROLLUP ( a, (b,c), d )
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( a, b, c, d ),
+    ( a, b, c    ),
+    ( a          ),
+    (            )
+)
+</programlisting>
+   </para>
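The "sub-lists as single units" rule means the expansion runs over the units and only then flattens each resulting set. A small sketch of the CUBE case:

```python
from itertools import combinations

def flatten(units):
    return tuple(e for unit in units for e in unit)

def cube_units(units):
    """CUBE over sub-lists: each parenthesized sub-list is one unit."""
    return [flatten(subset)
            for n in range(len(units), -1, -1)
            for subset in combinations(units, n)]

# CUBE ( (a,b), (c,d) ) — two units, so 2^2 = 4 grouping sets:
sets = cube_units([("a", "b"), ("c", "d")])
# → [('a', 'b', 'c', 'd'), ('a', 'b'), ('c', 'd'), ()]
```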
+
+   <para>
+    The <literal>CUBE</> and <literal>ROLLUP</> constructs can be used either
+    directly in the <literal>GROUP BY</> clause, or nested inside a
+    <literal>GROUPING SETS</> clause.  If one <literal>GROUPING SETS</> clause
+    is nested inside another, the effect is the same as if all the elements of
+    the inner clause had been written directly in the outer clause.
+   </para>
+
+   <para>
+    If multiple grouping items are specified in a single <literal>GROUP BY</>
+    clause, then the final list of grouping sets is the cross product of the
+    individual items.  For example:
+<programlisting>
+GROUP BY a, CUBE(b,c), GROUPING SETS ((d), (e))
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUP BY GROUPING SETS (
+  (a,b,c,d), (a,b,c,e),
+  (a,b,d),   (a,b,e),
+  (a,c,d),   (a,c,e),
+  (a,d),     (a,e)
+)
+</programlisting>
+   </para>
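The cross-product rule can be checked mechanically: treat each grouping item as a list of grouping sets (a plain expression is an item with one one-element set), take one set from each item in every combination, and concatenate. A sketch:

```python
from itertools import product

def cross(*grouping_items):
    """One grouping set from each item, concatenated, in every combination."""
    return [tuple(e for gset in combo for e in gset)
            for combo in product(*grouping_items)]

plain_a = [("a",)]                          # the plain expression a
cube_bc = [("b", "c"), ("b",), ("c",), ()]  # CUBE(b,c)
sets_de = [("d",), ("e",)]                  # GROUPING SETS ((d), (e))

# GROUP BY a, CUBE(b,c), GROUPING SETS ((d), (e)) → 1 * 4 * 2 = 8 sets
sets = cross(plain_a, cube_bc, sets_de)
```

The result matches the eight grouping sets listed above, from `(a,b,c,d)` down to `(a,e)`.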
+
+  <note>
+   <para>
+    The construct <literal>(a,b)</> is normally recognized in expressions as
+    a <link linkend="sql-syntax-row-constructors">row constructor</link>.
+    Within the <literal>GROUP BY</> clause, this does not apply at the top
+    levels of expressions, and <literal>(a,b)</> is parsed as a list of
+    expressions as described above.  If for some reason you <emphasis>need</>
+    a row constructor in a grouping expression, use <literal>ROW(a,b)</>.
+   </para>
+  </note>
+  </sect2>
+
   <sect2 id="queries-window">
    <title>Window Function Processing</title>
 
diff --git a/doc/src/sgml/ref/select.sgml b/doc/src/sgml/ref/select.sgml
index b69b634..c9bc2bb 100644
--- a/doc/src/sgml/ref/select.sgml
+++ b/doc/src/sgml/ref/select.sgml
@@ -37,7 +37,7 @@ SELECT [ ALL | DISTINCT [ ON ( <replaceable class="parameter">expression</replac
     [ * | <replaceable class="parameter">expression</replaceable> [ [ AS ] <replaceable class="parameter">output_name</replaceable> ] [, ...] ]
     [ FROM <replaceable class="parameter">from_item</replaceable> [, ...] ]
     [ WHERE <replaceable class="parameter">condition</replaceable> ]
-    [ GROUP BY <replaceable class="parameter">expression</replaceable> [, ...] ]
+    [ GROUP BY <replaceable class="parameter">grouping_element</replaceable> [, ...] ]
     [ HAVING <replaceable class="parameter">condition</replaceable> [, ...] ]
     [ WINDOW <replaceable class="parameter">window_name</replaceable> AS ( <replaceable class="parameter">window_definition</replaceable> ) [, ...] ]
     [ { UNION | INTERSECT | EXCEPT } [ ALL | DISTINCT ] <replaceable class="parameter">select</replaceable> ]
@@ -60,6 +60,15 @@ SELECT [ ALL | DISTINCT [ ON ( <replaceable class="parameter">expression</replac
                 [ WITH ORDINALITY ] [ [ AS ] <replaceable class="parameter">alias</replaceable> [ ( <replaceable class="parameter">column_alias</replaceable> [, ...] ) ] ]
     <replaceable class="parameter">from_item</replaceable> [ NATURAL ] <replaceable class="parameter">join_type</replaceable> <replaceable class="parameter">from_item</replaceable> [ ON <replaceable class="parameter">join_condition</replaceable> | USING ( <replaceable class="parameter">join_column</replaceable> [, ...] ) ]
 
+<phrase>and <replaceable class="parameter">grouping_element</replaceable> can be one of:</phrase>
+
+    ( )
+    <replaceable class="parameter">expression</replaceable>
+    ( <replaceable class="parameter">expression</replaceable> [, ...] )
+    ROLLUP ( { <replaceable class="parameter">expression</replaceable> | ( <replaceable class="parameter">expression</replaceable> [, ...] ) } [, ...] )
+    CUBE ( { <replaceable class="parameter">expression</replaceable> | ( <replaceable class="parameter">expression</replaceable> [, ...] ) } [, ...] )
+    GROUPING SETS ( <replaceable class="parameter">grouping_element</replaceable> [, ...] )
+
 <phrase>and <replaceable class="parameter">with_query</replaceable> is:</phrase>
 
     <replaceable class="parameter">with_query_name</replaceable> [ ( <replaceable class="parameter">column_name</replaceable> [, ...] ) ] AS ( <replaceable class="parameter">select</replaceable> | <replaceable class="parameter">values</replaceable> | <replaceable class="parameter">insert</replaceable> | <replaceable class="parameter">update</replaceable> | <replaceable class="parameter">delete</replaceable> )
@@ -619,23 +628,35 @@ WHERE <replaceable class="parameter">condition</replaceable>
    <para>
     The optional <literal>GROUP BY</literal> clause has the general form
 <synopsis>
-GROUP BY <replaceable class="parameter">expression</replaceable> [, ...]
+GROUP BY <replaceable class="parameter">grouping_element</replaceable> [, ...]
 </synopsis>
    </para>
 
    <para>
     <literal>GROUP BY</literal> will condense into a single row all
     selected rows that share the same values for the grouped
-    expressions.  <replaceable
-    class="parameter">expression</replaceable> can be an input column
-    name, or the name or ordinal number of an output column
-    (<command>SELECT</command> list item), or an arbitrary
+    expressions.  An <replaceable
+    class="parameter">expression</replaceable> used inside a
+    <replaceable class="parameter">grouping_element</replaceable>
+    can be an input column name, or the name or ordinal number of an
+    output column (<command>SELECT</command> list item), or an arbitrary
     expression formed from input-column values.  In case of ambiguity,
     a <literal>GROUP BY</literal> name will be interpreted as an
     input-column name rather than an output column name.
    </para>
 
    <para>
+    If any of <literal>GROUPING SETS</>, <literal>ROLLUP</> or
+    <literal>CUBE</> are present as grouping elements, then the
+    <literal>GROUP BY</> clause as a whole defines some number of
+    independent <replaceable>grouping sets</>.  The effect of this is
+    equivalent to constructing a <literal>UNION ALL</> between
+    subqueries with the individual grouping sets as their
+    <literal>GROUP BY</> clauses.  For further details on the handling
+    of grouping sets see <xref linkend="queries-grouping-sets">.
+   </para>
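The UNION ALL equivalence stated above can be sketched in Python (a model of the documented semantics, not the planner's implementation): group once per grouping set and concatenate the per-set results, with key columns absent from a set reading as NULL in the combined output. Using the `items_sold` data from the queries chapter:

```python
from collections import defaultdict

rows = [("Foo", "L", 10), ("Foo", "M", 20), ("Bar", "M", 15), ("Bar", "L", 5)]
cols = {"brand": 0, "size": 1}

def group_by(rows, keys):
    """Plain GROUP BY over the named key columns, summing sales."""
    sums = defaultdict(int)
    for r in rows:
        sums[tuple(r[cols[k]] for k in keys)] += r[2]
    return sorted(sums.items())

# GROUP BY GROUPING SETS ((brand), (size)) behaves like the UNION ALL of
# one grouped subquery per set:
union_all = group_by(rows, ["brand"]) + group_by(rows, ["size"])
```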
+
+   <para>
     Aggregate functions, if any are used, are computed across all rows
     making up each group, producing a separate value for each group
     (whereas without <literal>GROUP BY</literal>, an aggregate
gsp-contrib.patchtext/x-patchDownload
diff --git a/contrib/cube/cube--1.0.sql b/contrib/cube/cube--1.0.sql
index 0307811..1b563cc 100644
--- a/contrib/cube/cube--1.0.sql
+++ b/contrib/cube/cube--1.0.sql
@@ -1,36 +1,36 @@
 /* contrib/cube/cube--1.0.sql */
 
 -- complain if script is sourced in psql, rather than via CREATE EXTENSION
-\echo Use "CREATE EXTENSION cube" to load this file. \quit
+\echo Use 'CREATE EXTENSION "cube"' to load this file. \quit
 
 -- Create the user-defined type for N-dimensional boxes
 
 CREATE FUNCTION cube_in(cstring)
-RETURNS cube
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(float8[], float8[]) RETURNS cube
+CREATE FUNCTION "cube"(float8[], float8[]) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_a_f8_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(float8[]) RETURNS cube
+CREATE FUNCTION "cube"(float8[]) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_a_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_out(cube)
+CREATE FUNCTION cube_out("cube")
 RETURNS cstring
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE TYPE cube (
+CREATE TYPE "cube" (
 	INTERNALLENGTH = variable,
 	INPUT = cube_in,
 	OUTPUT = cube_out,
 	ALIGNMENT = double
 );
 
-COMMENT ON TYPE cube IS 'multi-dimensional cube ''(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)''';
+COMMENT ON TYPE "cube" IS 'multi-dimensional cube ''(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)''';
 
 --
 -- External C-functions for R-tree methods
@@ -38,89 +38,89 @@ COMMENT ON TYPE cube IS 'multi-dimensional cube ''(FLOAT-1, FLOAT-2, ..., FLOAT-
 
 -- Comparison methods
 
-CREATE FUNCTION cube_eq(cube, cube)
+CREATE FUNCTION cube_eq("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_eq(cube, cube) IS 'same as';
+COMMENT ON FUNCTION cube_eq("cube", "cube") IS 'same as';
 
-CREATE FUNCTION cube_ne(cube, cube)
+CREATE FUNCTION cube_ne("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_ne(cube, cube) IS 'different';
+COMMENT ON FUNCTION cube_ne("cube", "cube") IS 'different';
 
-CREATE FUNCTION cube_lt(cube, cube)
+CREATE FUNCTION cube_lt("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_lt(cube, cube) IS 'lower than';
+COMMENT ON FUNCTION cube_lt("cube", "cube") IS 'lower than';
 
-CREATE FUNCTION cube_gt(cube, cube)
+CREATE FUNCTION cube_gt("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_gt(cube, cube) IS 'greater than';
+COMMENT ON FUNCTION cube_gt("cube", "cube") IS 'greater than';
 
-CREATE FUNCTION cube_le(cube, cube)
+CREATE FUNCTION cube_le("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_le(cube, cube) IS 'lower than or equal to';
+COMMENT ON FUNCTION cube_le("cube", "cube") IS 'lower than or equal to';
 
-CREATE FUNCTION cube_ge(cube, cube)
+CREATE FUNCTION cube_ge("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_ge(cube, cube) IS 'greater than or equal to';
+COMMENT ON FUNCTION cube_ge("cube", "cube") IS 'greater than or equal to';
 
-CREATE FUNCTION cube_cmp(cube, cube)
+CREATE FUNCTION cube_cmp("cube", "cube")
 RETURNS int4
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_cmp(cube, cube) IS 'btree comparison function';
+COMMENT ON FUNCTION cube_cmp("cube", "cube") IS 'btree comparison function';
 
-CREATE FUNCTION cube_contains(cube, cube)
+CREATE FUNCTION cube_contains("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_contains(cube, cube) IS 'contains';
+COMMENT ON FUNCTION cube_contains("cube", "cube") IS 'contains';
 
-CREATE FUNCTION cube_contained(cube, cube)
+CREATE FUNCTION cube_contained("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_contained(cube, cube) IS 'contained in';
+COMMENT ON FUNCTION cube_contained("cube", "cube") IS 'contained in';
 
-CREATE FUNCTION cube_overlap(cube, cube)
+CREATE FUNCTION cube_overlap("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_overlap(cube, cube) IS 'overlaps';
+COMMENT ON FUNCTION cube_overlap("cube", "cube") IS 'overlaps';
 
 -- support routines for indexing
 
-CREATE FUNCTION cube_union(cube, cube)
-RETURNS cube
+CREATE FUNCTION cube_union("cube", "cube")
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_inter(cube, cube)
-RETURNS cube
+CREATE FUNCTION cube_inter("cube", "cube")
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_size(cube)
+CREATE FUNCTION cube_size("cube")
 RETURNS float8
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
@@ -128,62 +128,62 @@ LANGUAGE C IMMUTABLE STRICT;
 
 -- Misc N-dimensional functions
 
-CREATE FUNCTION cube_subset(cube, int4[])
-RETURNS cube
+CREATE FUNCTION cube_subset("cube", int4[])
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
 -- proximity routines
 
-CREATE FUNCTION cube_distance(cube, cube)
+CREATE FUNCTION cube_distance("cube", "cube")
 RETURNS float8
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
 -- Extracting elements functions
 
-CREATE FUNCTION cube_dim(cube)
+CREATE FUNCTION cube_dim("cube")
 RETURNS int4
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_ll_coord(cube, int4)
+CREATE FUNCTION cube_ll_coord("cube", int4)
 RETURNS float8
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_ur_coord(cube, int4)
+CREATE FUNCTION cube_ur_coord("cube", int4)
 RETURNS float8
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(float8) RETURNS cube
+CREATE FUNCTION "cube"(float8) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(float8, float8) RETURNS cube
+CREATE FUNCTION "cube"(float8, float8) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_f8_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(cube, float8) RETURNS cube
+CREATE FUNCTION "cube"("cube", float8) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_c_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(cube, float8, float8) RETURNS cube
+CREATE FUNCTION "cube"("cube", float8, float8) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_c_f8_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
--- Test if cube is also a point
+-- Test if "cube" is also a point
 
-CREATE FUNCTION cube_is_point(cube)
+CREATE FUNCTION cube_is_point("cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
--- Increasing the size of a cube by a radius in at least n dimensions
+-- Increasing the size of a "cube" by a radius in at least n dimensions
 
-CREATE FUNCTION cube_enlarge(cube, float8, int4)
-RETURNS cube
+CREATE FUNCTION cube_enlarge("cube", float8, int4)
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
@@ -192,76 +192,76 @@ LANGUAGE C IMMUTABLE STRICT;
 --
 
 CREATE OPERATOR < (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_lt,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_lt,
 	COMMUTATOR = '>', NEGATOR = '>=',
 	RESTRICT = scalarltsel, JOIN = scalarltjoinsel
 );
 
 CREATE OPERATOR > (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_gt,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_gt,
 	COMMUTATOR = '<', NEGATOR = '<=',
 	RESTRICT = scalargtsel, JOIN = scalargtjoinsel
 );
 
 CREATE OPERATOR <= (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_le,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_le,
 	COMMUTATOR = '>=', NEGATOR = '>',
 	RESTRICT = scalarltsel, JOIN = scalarltjoinsel
 );
 
 CREATE OPERATOR >= (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_ge,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_ge,
 	COMMUTATOR = '<=', NEGATOR = '<',
 	RESTRICT = scalargtsel, JOIN = scalargtjoinsel
 );
 
 CREATE OPERATOR && (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_overlap,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_overlap,
 	COMMUTATOR = '&&',
 	RESTRICT = areasel, JOIN = areajoinsel
 );
 
 CREATE OPERATOR = (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_eq,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_eq,
 	COMMUTATOR = '=', NEGATOR = '<>',
 	RESTRICT = eqsel, JOIN = eqjoinsel,
 	MERGES
 );
 
 CREATE OPERATOR <> (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_ne,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_ne,
 	COMMUTATOR = '<>', NEGATOR = '=',
 	RESTRICT = neqsel, JOIN = neqjoinsel
 );
 
 CREATE OPERATOR @> (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_contains,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_contains,
 	COMMUTATOR = '<@',
 	RESTRICT = contsel, JOIN = contjoinsel
 );
 
 CREATE OPERATOR <@ (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_contained,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_contained,
 	COMMUTATOR = '@>',
 	RESTRICT = contsel, JOIN = contjoinsel
 );
 
 -- these are obsolete/deprecated:
 CREATE OPERATOR @ (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_contains,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_contains,
 	COMMUTATOR = '~',
 	RESTRICT = contsel, JOIN = contjoinsel
 );
 
 CREATE OPERATOR ~ (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_contained,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_contained,
 	COMMUTATOR = '@',
 	RESTRICT = contsel, JOIN = contjoinsel
 );
 
 
 -- define the GiST support methods
-CREATE FUNCTION g_cube_consistent(internal,cube,int,oid,internal)
+CREATE FUNCTION g_cube_consistent(internal,"cube",int,oid,internal)
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
@@ -287,11 +287,11 @@ AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
 CREATE FUNCTION g_cube_union(internal, internal)
-RETURNS cube
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION g_cube_same(cube, cube, internal)
+CREATE FUNCTION g_cube_same("cube", "cube", internal)
 RETURNS internal
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
@@ -300,26 +300,26 @@ LANGUAGE C IMMUTABLE STRICT;
 -- Create the operator classes for indexing
 
 CREATE OPERATOR CLASS cube_ops
-    DEFAULT FOR TYPE cube USING btree AS
+    DEFAULT FOR TYPE "cube" USING btree AS
         OPERATOR        1       < ,
         OPERATOR        2       <= ,
         OPERATOR        3       = ,
         OPERATOR        4       >= ,
         OPERATOR        5       > ,
-        FUNCTION        1       cube_cmp(cube, cube);
+        FUNCTION        1       cube_cmp("cube", "cube");
 
 CREATE OPERATOR CLASS gist_cube_ops
-    DEFAULT FOR TYPE cube USING gist AS
+    DEFAULT FOR TYPE "cube" USING gist AS
 	OPERATOR	3	&& ,
 	OPERATOR	6	= ,
 	OPERATOR	7	@> ,
 	OPERATOR	8	<@ ,
 	OPERATOR	13	@ ,
 	OPERATOR	14	~ ,
-	FUNCTION	1	g_cube_consistent (internal, cube, int, oid, internal),
+	FUNCTION	1	g_cube_consistent (internal, "cube", int, oid, internal),
 	FUNCTION	2	g_cube_union (internal, internal),
 	FUNCTION	3	g_cube_compress (internal),
 	FUNCTION	4	g_cube_decompress (internal),
 	FUNCTION	5	g_cube_penalty (internal, internal, internal),
 	FUNCTION	6	g_cube_picksplit (internal, internal),
-	FUNCTION	7	g_cube_same (cube, cube, internal);
+	FUNCTION	7	g_cube_same ("cube", "cube", internal);
diff --git a/contrib/cube/cube--unpackaged--1.0.sql b/contrib/cube/cube--unpackaged--1.0.sql
index 1065512..acacb61 100644
--- a/contrib/cube/cube--unpackaged--1.0.sql
+++ b/contrib/cube/cube--unpackaged--1.0.sql
@@ -1,56 +1,56 @@
-/* contrib/cube/cube--unpackaged--1.0.sql */
+/* contrib/cube/cube--unpackaged--1.0.sql */
 
 -- complain if script is sourced in psql, rather than via CREATE EXTENSION
-\echo Use "CREATE EXTENSION cube FROM unpackaged" to load this file. \quit
+\echo Use 'CREATE EXTENSION "cube" FROM unpackaged' to load this file. \quit
 
-ALTER EXTENSION cube ADD type cube;
-ALTER EXTENSION cube ADD function cube_in(cstring);
-ALTER EXTENSION cube ADD function cube(double precision[],double precision[]);
-ALTER EXTENSION cube ADD function cube(double precision[]);
-ALTER EXTENSION cube ADD function cube_out(cube);
-ALTER EXTENSION cube ADD function cube_eq(cube,cube);
-ALTER EXTENSION cube ADD function cube_ne(cube,cube);
-ALTER EXTENSION cube ADD function cube_lt(cube,cube);
-ALTER EXTENSION cube ADD function cube_gt(cube,cube);
-ALTER EXTENSION cube ADD function cube_le(cube,cube);
-ALTER EXTENSION cube ADD function cube_ge(cube,cube);
-ALTER EXTENSION cube ADD function cube_cmp(cube,cube);
-ALTER EXTENSION cube ADD function cube_contains(cube,cube);
-ALTER EXTENSION cube ADD function cube_contained(cube,cube);
-ALTER EXTENSION cube ADD function cube_overlap(cube,cube);
-ALTER EXTENSION cube ADD function cube_union(cube,cube);
-ALTER EXTENSION cube ADD function cube_inter(cube,cube);
-ALTER EXTENSION cube ADD function cube_size(cube);
-ALTER EXTENSION cube ADD function cube_subset(cube,integer[]);
-ALTER EXTENSION cube ADD function cube_distance(cube,cube);
-ALTER EXTENSION cube ADD function cube_dim(cube);
-ALTER EXTENSION cube ADD function cube_ll_coord(cube,integer);
-ALTER EXTENSION cube ADD function cube_ur_coord(cube,integer);
-ALTER EXTENSION cube ADD function cube(double precision);
-ALTER EXTENSION cube ADD function cube(double precision,double precision);
-ALTER EXTENSION cube ADD function cube(cube,double precision);
-ALTER EXTENSION cube ADD function cube(cube,double precision,double precision);
-ALTER EXTENSION cube ADD function cube_is_point(cube);
-ALTER EXTENSION cube ADD function cube_enlarge(cube,double precision,integer);
-ALTER EXTENSION cube ADD operator >(cube,cube);
-ALTER EXTENSION cube ADD operator >=(cube,cube);
-ALTER EXTENSION cube ADD operator <(cube,cube);
-ALTER EXTENSION cube ADD operator <=(cube,cube);
-ALTER EXTENSION cube ADD operator &&(cube,cube);
-ALTER EXTENSION cube ADD operator <>(cube,cube);
-ALTER EXTENSION cube ADD operator =(cube,cube);
-ALTER EXTENSION cube ADD operator <@(cube,cube);
-ALTER EXTENSION cube ADD operator @>(cube,cube);
-ALTER EXTENSION cube ADD operator ~(cube,cube);
-ALTER EXTENSION cube ADD operator @(cube,cube);
-ALTER EXTENSION cube ADD function g_cube_consistent(internal,cube,integer,oid,internal);
-ALTER EXTENSION cube ADD function g_cube_compress(internal);
-ALTER EXTENSION cube ADD function g_cube_decompress(internal);
-ALTER EXTENSION cube ADD function g_cube_penalty(internal,internal,internal);
-ALTER EXTENSION cube ADD function g_cube_picksplit(internal,internal);
-ALTER EXTENSION cube ADD function g_cube_union(internal,internal);
-ALTER EXTENSION cube ADD function g_cube_same(cube,cube,internal);
-ALTER EXTENSION cube ADD operator family cube_ops using btree;
-ALTER EXTENSION cube ADD operator class cube_ops using btree;
-ALTER EXTENSION cube ADD operator family gist_cube_ops using gist;
-ALTER EXTENSION cube ADD operator class gist_cube_ops using gist;
+ALTER EXTENSION "cube" ADD type "cube";
+ALTER EXTENSION "cube" ADD function cube_in(cstring);
+ALTER EXTENSION "cube" ADD function "cube"(double precision[],double precision[]);
+ALTER EXTENSION "cube" ADD function "cube"(double precision[]);
+ALTER EXTENSION "cube" ADD function cube_out("cube");
+ALTER EXTENSION "cube" ADD function cube_eq("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_ne("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_lt("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_gt("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_le("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_ge("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_cmp("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_contains("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_contained("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_overlap("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_union("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_inter("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_size("cube");
+ALTER EXTENSION "cube" ADD function cube_subset("cube",integer[]);
+ALTER EXTENSION "cube" ADD function cube_distance("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_dim("cube");
+ALTER EXTENSION "cube" ADD function cube_ll_coord("cube",integer);
+ALTER EXTENSION "cube" ADD function cube_ur_coord("cube",integer);
+ALTER EXTENSION "cube" ADD function "cube"(double precision);
+ALTER EXTENSION "cube" ADD function "cube"(double precision,double precision);
+ALTER EXTENSION "cube" ADD function "cube"("cube",double precision);
+ALTER EXTENSION "cube" ADD function "cube"("cube",double precision,double precision);
+ALTER EXTENSION "cube" ADD function cube_is_point("cube");
+ALTER EXTENSION "cube" ADD function cube_enlarge("cube",double precision,integer);
+ALTER EXTENSION "cube" ADD operator >("cube","cube");
+ALTER EXTENSION "cube" ADD operator >=("cube","cube");
+ALTER EXTENSION "cube" ADD operator <("cube","cube");
+ALTER EXTENSION "cube" ADD operator <=("cube","cube");
+ALTER EXTENSION "cube" ADD operator &&("cube","cube");
+ALTER EXTENSION "cube" ADD operator <>("cube","cube");
+ALTER EXTENSION "cube" ADD operator =("cube","cube");
+ALTER EXTENSION "cube" ADD operator <@("cube","cube");
+ALTER EXTENSION "cube" ADD operator @>("cube","cube");
+ALTER EXTENSION "cube" ADD operator ~("cube","cube");
+ALTER EXTENSION "cube" ADD operator @("cube","cube");
+ALTER EXTENSION "cube" ADD function g_cube_consistent(internal,"cube",integer,oid,internal);
+ALTER EXTENSION "cube" ADD function g_cube_compress(internal);
+ALTER EXTENSION "cube" ADD function g_cube_decompress(internal);
+ALTER EXTENSION "cube" ADD function g_cube_penalty(internal,internal,internal);
+ALTER EXTENSION "cube" ADD function g_cube_picksplit(internal,internal);
+ALTER EXTENSION "cube" ADD function g_cube_union(internal,internal);
+ALTER EXTENSION "cube" ADD function g_cube_same("cube","cube",internal);
+ALTER EXTENSION "cube" ADD operator family cube_ops using btree;
+ALTER EXTENSION "cube" ADD operator class cube_ops using btree;
+ALTER EXTENSION "cube" ADD operator family gist_cube_ops using gist;
+ALTER EXTENSION "cube" ADD operator class gist_cube_ops using gist;
diff --git a/contrib/cube/expected/cube.out b/contrib/cube/expected/cube.out
index ca9555e..9422218 100644
--- a/contrib/cube/expected/cube.out
+++ b/contrib/cube/expected/cube.out
@@ -1,552 +1,552 @@
 --
 --  Test cube datatype
 --
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 --
 -- testing the input and output functions
 --
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1'::cube AS cube;
+SELECT '-1'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1.'::cube AS cube;
+SELECT '1.'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.'::cube AS cube;
+SELECT '-1.'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '.1'::cube AS cube;
+SELECT '.1'::"cube" AS "cube";
  cube  
 -------
  (0.1)
 (1 row)
 
-SELECT '-.1'::cube AS cube;
+SELECT '-.1'::"cube" AS "cube";
   cube  
 --------
  (-0.1)
 (1 row)
 
-SELECT '1.0'::cube AS cube;
+SELECT '1.0'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.0'::cube AS cube;
+SELECT '-1.0'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1e27'::cube AS cube;
+SELECT '1e27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1e27'::cube AS cube;
+SELECT '-1e27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1.0e27'::cube AS cube;
+SELECT '1.0e27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1.0e27'::cube AS cube;
+SELECT '-1.0e27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1e+27'::cube AS cube;
+SELECT '1e+27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1e+27'::cube AS cube;
+SELECT '-1e+27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1.0e+27'::cube AS cube;
+SELECT '1.0e+27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1.0e+27'::cube AS cube;
+SELECT '-1.0e+27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1e-7'::cube AS cube;
+SELECT '1e-7'::"cube" AS "cube";
   cube   
 ---------
  (1e-07)
 (1 row)
 
-SELECT '-1e-7'::cube AS cube;
+SELECT '-1e-7'::"cube" AS "cube";
    cube   
 ----------
  (-1e-07)
 (1 row)
 
-SELECT '1.0e-7'::cube AS cube;
+SELECT '1.0e-7'::"cube" AS "cube";
   cube   
 ---------
  (1e-07)
 (1 row)
 
-SELECT '-1.0e-7'::cube AS cube;
+SELECT '-1.0e-7'::"cube" AS "cube";
    cube   
 ----------
  (-1e-07)
 (1 row)
 
-SELECT '1e-700'::cube AS cube;
+SELECT '1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '-1e-700'::cube AS cube;
+SELECT '-1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '1234567890123456'::cube AS cube;
+SELECT '1234567890123456'::"cube" AS "cube";
           cube          
 ------------------------
  (1.23456789012346e+15)
 (1 row)
 
-SELECT '+1234567890123456'::cube AS cube;
+SELECT '+1234567890123456'::"cube" AS "cube";
           cube          
 ------------------------
  (1.23456789012346e+15)
 (1 row)
 
-SELECT '-1234567890123456'::cube AS cube;
+SELECT '-1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (-1.23456789012346e+15)
 (1 row)
 
-SELECT '.1234567890123456'::cube AS cube;
+SELECT '.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '+.1234567890123456'::cube AS cube;
+SELECT '+.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '-.1234567890123456'::"cube" AS "cube";
          cube         
 ----------------------
  (-0.123456789012346)
 (1 row)
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '(1,2)'::cube AS cube;
+SELECT '(1,2)'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '1,2,3,4,5'::cube AS cube;
+SELECT '1,2,3,4,5'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '(0),(1)'::cube AS cube;
+SELECT '(0),(1)'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '[(0),(0)]'::cube AS cube;
+SELECT '[(0),(0)]'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '[(0),(1)]'::cube AS cube;
+SELECT '[(0),(1)]'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
+SELECT ''::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT ''::cube AS cube;
+LINE 1: SELECT ''::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT 'ABC'::cube AS cube;
+SELECT 'ABC'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT 'ABC'::cube AS cube;
+LINE 1: SELECT 'ABC'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "A"
-SELECT '()'::cube AS cube;
+SELECT '()'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '()'::cube AS cube;
+LINE 1: SELECT '()'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[]'::cube AS cube;
+SELECT '[]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[]'::cube AS cube;
+LINE 1: SELECT '[]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[()]'::cube AS cube;
+SELECT '[()]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[()]'::cube AS cube;
+LINE 1: SELECT '[()]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[(1)]'::cube AS cube;
+SELECT '[(1)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1)]'::cube AS cube;
+LINE 1: SELECT '[(1)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),]'::cube AS cube;
+SELECT '[(1),]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),]'::cube AS cube;
+LINE 1: SELECT '[(1),]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),2]'::cube AS cube;
+SELECT '[(1),2]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),2]'::cube AS cube;
+LINE 1: SELECT '[(1),2]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "2"
-SELECT '[(1),(2),(3)]'::cube AS cube;
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2),(3)]'::cube AS cube;
+LINE 1: SELECT '[(1),(2),(3)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '1,'::cube AS cube;
+SELECT '1,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,'::cube AS cube;
+LINE 1: SELECT '1,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,2,'::cube AS cube;
+SELECT '1,2,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2,'::cube AS cube;
+LINE 1: SELECT '1,2,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,,2'::cube AS cube;
+SELECT '1,,2'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,,2'::cube AS cube;
+LINE 1: SELECT '1,,2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,)'::cube AS cube;
+SELECT '(1,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,)'::cube AS cube;
+LINE 1: SELECT '(1,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,2,)'::cube AS cube;
+SELECT '(1,2,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,)'::cube AS cube;
+LINE 1: SELECT '(1,2,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,,2)'::cube AS cube;
+SELECT '(1,,2)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,,2)'::cube AS cube;
+LINE 1: SELECT '(1,,2)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2)],'::cube AS cube;
+LINE 1: SELECT '[(1),(2)],'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2,3),(2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2),(1,2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1),(2),'::cube AS cube; -- 2
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
 ERROR:  bad cube representation
-LINE 1: SELECT '(1),(2),'::cube AS cube;
+LINE 1: SELECT '(1),(2),'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3),(2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2,3),(2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2),(1,2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2),(1,2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)ab'::cube AS cube;
+LINE 1: SELECT '(1,2,3)ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2,3)a'::cube AS cube; -- 5
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)a'::cube AS cube;
+LINE 1: SELECT '(1,2,3)a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2)('::cube AS cube; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2)('::cube AS cube;
+LINE 1: SELECT '(1,2)('::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "("
-SELECT '1,2ab'::cube AS cube; -- 6
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2ab'::cube AS cube;
+LINE 1: SELECT '1,2ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1 e7'::cube AS cube; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1 e7'::cube AS cube;
+LINE 1: SELECT '1 e7'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "e"
-SELECT '1,2a'::cube AS cube; -- 7
+SELECT '1,2a'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2a'::cube AS cube;
+LINE 1: SELECT '1,2a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1..2'::cube AS cube;
+LINE 1: SELECT '1..2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ".2"
 --
 -- Testing building cubes from float8 values
 --
-SELECT cube(0::float8);
+SELECT "cube"(0::float8);
  cube 
 ------
  (0)
 (1 row)
 
-SELECT cube(1::float8);
+SELECT "cube"(1::float8);
  cube 
 ------
  (1)
 (1 row)
 
-SELECT cube(1,2);
+SELECT "cube"(1,2);
   cube   
 ---------
  (1),(2)
 (1 row)
 
-SELECT cube(cube(1,2),3);
+SELECT "cube"("cube"(1,2),3);
      cube      
 ---------------
  (1, 3),(2, 3)
 (1 row)
 
-SELECT cube(cube(1,2),3,4);
+SELECT "cube"("cube"(1,2),3,4);
      cube      
 ---------------
  (1, 3),(2, 4)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 5)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 6)
 (1 row)
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
  cube 
 ------
  (0)
 (1 row)
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
         cube         
 ---------------------
  (0, 1, 2),(3, 4, 5)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
 ERROR:  UR and LL arrays must be of same length
-SELECT cube(NULL::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
  cube 
 ------
  
 (1 row)
 
-SELECT cube('{0,1,2}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
         cube_subset        
 ---------------------------
  (5, 3, 1, 1),(8, 7, 6, 6)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
  cube_subset  
 --------------
  (5, 3, 1, 1)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
      cube     
 --------------
  (5, 6, 7, 8)
 (1 row)
 
-SELECT cube(1.37); -- cube_f8
+SELECT "cube"(1.37); -- cube_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(cube(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(1, 24)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 24)
@@ -555,12 +555,12 @@ SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
 DETAIL:  A cube cannot have more than 100 dimensions.
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
@@ -570,37 +570,37 @@ DETAIL:  A cube cannot have more than 100 dimensions.
 --
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -609,97 +609,97 @@ SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1'::cube   < '2'::cube AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1,1'::cube > '1,2'::cube AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,1'::cube < '1,2'::cube AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -707,235 +707,235 @@ SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contains" (the left operand is the cube that entirely encloses the
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
@@ -943,77 +943,77 @@ SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
  cube_distance 
 ---------------
              4
 (1 row)
 
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
  cube_distance 
 ---------------
            0.5
 (1 row)
 
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
  cube_distance 
 ---------------
              0
 (1 row)
 
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
  cube_distance 
 ---------------
            190
 (1 row)
 
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
   cube_distance   
 ------------------
  140.762210837994
 (1 row)
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
+SELECT "cube"('(1,1.2)'::text);
    cube   
 ----------
  (1, 1.2)
 (1 row)
 
-SELECT cube(NULL);
+SELECT "cube"(NULL);
  cube 
 ------
  
 (1 row)
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
+SELECT cube_dim('(0)'::"cube");
  cube_dim 
 ----------
         1
 (1 row)
 
-SELECT cube_dim('(0,0)'::cube);
+SELECT cube_dim('(0,0)'::"cube");
  cube_dim 
 ----------
         2
 (1 row)
 
-SELECT cube_dim('(0,0,0)'::cube);
+SELECT cube_dim('(0,0,0)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
  cube_dim 
 ----------
         5
@@ -1021,55 +1021,55 @@ SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
 
 -- Test of cube_ll_coord function (retrieves LL coodinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ll_coord 
 ---------------
             -1
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ll_coord 
 ---------------
             -2
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
  cube_ll_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
  cube_ll_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
  cube_ll_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
  cube_ll_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
@@ -1077,55 +1077,55 @@ SELECT cube_ll_coord('(42,137)'::cube, 3);
 
 -- Test of cube_ur_coord function (retrieves UR coodinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
  cube_ur_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
  cube_ur_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
@@ -1133,37 +1133,37 @@ SELECT cube_ur_coord('(42,137)'::cube, 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
+SELECT cube_is_point('(0)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
  cube_is_point 
 ---------------
  f
@@ -1171,121 +1171,121 @@ SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 2);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
  cube_enlarge 
 --------------
  (-2),(2)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 2);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-1, -1),(1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
          cube_enlarge          
 -------------------------------
  (-3, -1, -1, -1),(3, 1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 2);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-4, -3),(3, 8)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
    cube_enlarge   
 ------------------
  (-6, -5),(5, 10)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
   cube_enlarge   
 -----------------
  (-2, -1),(1, 6)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
     cube_enlarge     
 ---------------------
  (-0.5, 1),(-0.5, 4)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
@@ -1293,31 +1293,31 @@ SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
       cube_union      
 ----------------------
  (1, 2, 0),(8, 9, 10)
 (1 row)
 
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
         cube_union         
 ---------------------------
  (1, 2, 0, 0),(4, 2, 0, 0)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
   cube_union   
 ---------------
  (1, 2),(4, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
  cube_union 
 ------------
  (1, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
  cube_union 
 ------------
  (1, 2, 0)
@@ -1325,43 +1325,43 @@ SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
    cube_inter    
 -----------------
  (3, 4),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
   cube_inter   
 ---------------
  (3, 4),(6, 5)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
     cube_inter     
 -------------------
  (13, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
     cube_inter    
 ------------------
  (3, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
  cube_inter 
 ------------
  (10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
  cube_inter 
 ------------
  (1, 2, 3)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
      cube_inter      
 ---------------------
  (5, 6, 3),(1, 2, 3)
@@ -1369,13 +1369,13 @@ SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
  cube_size 
 -----------
         88
 (1 row)
 
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(42,137)'::"cube");
  cube_size 
 -----------
          0
@@ -1383,7 +1383,7 @@ SELECT cube_size('(42,137)'::cube);
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 \copy test_cube from 'data/test_cube.data'
 CREATE INDEX test_cube_ix ON test_cube USING gist (c);
 SELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' ORDER BY c;
diff --git a/contrib/cube/expected/cube_1.out b/contrib/cube/expected/cube_1.out
index c07d61d..4f47c54 100644
--- a/contrib/cube/expected/cube_1.out
+++ b/contrib/cube/expected/cube_1.out
@@ -1,552 +1,552 @@
 --
 --  Test cube datatype
 --
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 --
 -- testing the input and output functions
 --
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1'::cube AS cube;
+SELECT '-1'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1.'::cube AS cube;
+SELECT '1.'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.'::cube AS cube;
+SELECT '-1.'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '.1'::cube AS cube;
+SELECT '.1'::"cube" AS "cube";
  cube  
 -------
  (0.1)
 (1 row)
 
-SELECT '-.1'::cube AS cube;
+SELECT '-.1'::"cube" AS "cube";
   cube  
 --------
  (-0.1)
 (1 row)
 
-SELECT '1.0'::cube AS cube;
+SELECT '1.0'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.0'::cube AS cube;
+SELECT '-1.0'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1e27'::cube AS cube;
+SELECT '1e27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1e27'::cube AS cube;
+SELECT '-1e27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1.0e27'::cube AS cube;
+SELECT '1.0e27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1.0e27'::cube AS cube;
+SELECT '-1.0e27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1e+27'::cube AS cube;
+SELECT '1e+27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1e+27'::cube AS cube;
+SELECT '-1e+27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1.0e+27'::cube AS cube;
+SELECT '1.0e+27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1.0e+27'::cube AS cube;
+SELECT '-1.0e+27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1e-7'::cube AS cube;
+SELECT '1e-7'::"cube" AS "cube";
   cube   
 ---------
  (1e-07)
 (1 row)
 
-SELECT '-1e-7'::cube AS cube;
+SELECT '-1e-7'::"cube" AS "cube";
    cube   
 ----------
  (-1e-07)
 (1 row)
 
-SELECT '1.0e-7'::cube AS cube;
+SELECT '1.0e-7'::"cube" AS "cube";
   cube   
 ---------
  (1e-07)
 (1 row)
 
-SELECT '-1.0e-7'::cube AS cube;
+SELECT '-1.0e-7'::"cube" AS "cube";
    cube   
 ----------
  (-1e-07)
 (1 row)
 
-SELECT '1e-700'::cube AS cube;
+SELECT '1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '-1e-700'::cube AS cube;
+SELECT '-1e-700'::"cube" AS "cube";
  cube 
 ------
  (-0)
 (1 row)
 
-SELECT '1234567890123456'::cube AS cube;
+SELECT '1234567890123456'::"cube" AS "cube";
           cube          
 ------------------------
  (1.23456789012346e+15)
 (1 row)
 
-SELECT '+1234567890123456'::cube AS cube;
+SELECT '+1234567890123456'::"cube" AS "cube";
           cube          
 ------------------------
  (1.23456789012346e+15)
 (1 row)
 
-SELECT '-1234567890123456'::cube AS cube;
+SELECT '-1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (-1.23456789012346e+15)
 (1 row)
 
-SELECT '.1234567890123456'::cube AS cube;
+SELECT '.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '+.1234567890123456'::cube AS cube;
+SELECT '+.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '-.1234567890123456'::"cube" AS "cube";
          cube         
 ----------------------
  (-0.123456789012346)
 (1 row)
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '(1,2)'::cube AS cube;
+SELECT '(1,2)'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '1,2,3,4,5'::cube AS cube;
+SELECT '1,2,3,4,5'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '(0),(1)'::cube AS cube;
+SELECT '(0),(1)'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '[(0),(0)]'::cube AS cube;
+SELECT '[(0),(0)]'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '[(0),(1)]'::cube AS cube;
+SELECT '[(0),(1)]'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
+SELECT ''::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT ''::cube AS cube;
+LINE 1: SELECT ''::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT 'ABC'::cube AS cube;
+SELECT 'ABC'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT 'ABC'::cube AS cube;
+LINE 1: SELECT 'ABC'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "A"
-SELECT '()'::cube AS cube;
+SELECT '()'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '()'::cube AS cube;
+LINE 1: SELECT '()'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[]'::cube AS cube;
+SELECT '[]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[]'::cube AS cube;
+LINE 1: SELECT '[]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[()]'::cube AS cube;
+SELECT '[()]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[()]'::cube AS cube;
+LINE 1: SELECT '[()]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[(1)]'::cube AS cube;
+SELECT '[(1)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1)]'::cube AS cube;
+LINE 1: SELECT '[(1)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),]'::cube AS cube;
+SELECT '[(1),]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),]'::cube AS cube;
+LINE 1: SELECT '[(1),]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),2]'::cube AS cube;
+SELECT '[(1),2]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),2]'::cube AS cube;
+LINE 1: SELECT '[(1),2]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "2"
-SELECT '[(1),(2),(3)]'::cube AS cube;
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2),(3)]'::cube AS cube;
+LINE 1: SELECT '[(1),(2),(3)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '1,'::cube AS cube;
+SELECT '1,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,'::cube AS cube;
+LINE 1: SELECT '1,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,2,'::cube AS cube;
+SELECT '1,2,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2,'::cube AS cube;
+LINE 1: SELECT '1,2,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,,2'::cube AS cube;
+SELECT '1,,2'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,,2'::cube AS cube;
+LINE 1: SELECT '1,,2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,)'::cube AS cube;
+SELECT '(1,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,)'::cube AS cube;
+LINE 1: SELECT '(1,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,2,)'::cube AS cube;
+SELECT '(1,2,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,)'::cube AS cube;
+LINE 1: SELECT '(1,2,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,,2)'::cube AS cube;
+SELECT '(1,,2)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,,2)'::cube AS cube;
+LINE 1: SELECT '(1,,2)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2)],'::cube AS cube;
+LINE 1: SELECT '[(1),(2)],'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2,3),(2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2),(1,2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1),(2),'::cube AS cube; -- 2
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
 ERROR:  bad cube representation
-LINE 1: SELECT '(1),(2),'::cube AS cube;
+LINE 1: SELECT '(1),(2),'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3),(2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2,3),(2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2),(1,2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2),(1,2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)ab'::cube AS cube;
+LINE 1: SELECT '(1,2,3)ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2,3)a'::cube AS cube; -- 5
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)a'::cube AS cube;
+LINE 1: SELECT '(1,2,3)a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2)('::cube AS cube; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2)('::cube AS cube;
+LINE 1: SELECT '(1,2)('::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "("
-SELECT '1,2ab'::cube AS cube; -- 6
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2ab'::cube AS cube;
+LINE 1: SELECT '1,2ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1 e7'::cube AS cube; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1 e7'::cube AS cube;
+LINE 1: SELECT '1 e7'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "e"
-SELECT '1,2a'::cube AS cube; -- 7
+SELECT '1,2a'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2a'::cube AS cube;
+LINE 1: SELECT '1,2a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1..2'::cube AS cube;
+LINE 1: SELECT '1..2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ".2"
 --
 -- Testing building cubes from float8 values
 --
-SELECT cube(0::float8);
+SELECT "cube"(0::float8);
  cube 
 ------
  (0)
 (1 row)
 
-SELECT cube(1::float8);
+SELECT "cube"(1::float8);
  cube 
 ------
  (1)
 (1 row)
 
-SELECT cube(1,2);
+SELECT "cube"(1,2);
   cube   
 ---------
  (1),(2)
 (1 row)
 
-SELECT cube(cube(1,2),3);
+SELECT "cube"("cube"(1,2),3);
      cube      
 ---------------
  (1, 3),(2, 3)
 (1 row)
 
-SELECT cube(cube(1,2),3,4);
+SELECT "cube"("cube"(1,2),3,4);
      cube      
 ---------------
  (1, 3),(2, 4)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 5)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 6)
 (1 row)
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
  cube 
 ------
  (0)
 (1 row)
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
         cube         
 ---------------------
  (0, 1, 2),(3, 4, 5)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
 ERROR:  UR and LL arrays must be of same length
-SELECT cube(NULL::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
  cube 
 ------
  
 (1 row)
 
-SELECT cube('{0,1,2}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
         cube_subset        
 ---------------------------
  (5, 3, 1, 1),(8, 7, 6, 6)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
  cube_subset  
 --------------
  (5, 3, 1, 1)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
      cube     
 --------------
  (5, 6, 7, 8)
 (1 row)
 
-SELECT cube(1.37); -- cube_f8
+SELECT "cube"(1.37); -- cube_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(cube(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(1, 24)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 24)
@@ -555,12 +555,12 @@ SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
 DETAIL:  A cube cannot have more than 100 dimensions.
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
@@ -570,37 +570,37 @@ DETAIL:  A cube cannot have more than 100 dimensions.
 --
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -609,97 +609,97 @@ SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1'::cube   < '2'::cube AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1,1'::cube > '1,2'::cube AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,1'::cube < '1,2'::cube AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -707,235 +707,235 @@ SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contains" (the left operand is the cube that entirely encloses the
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
@@ -943,77 +943,77 @@ SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
  cube_distance 
 ---------------
              4
 (1 row)
 
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
  cube_distance 
 ---------------
            0.5
 (1 row)
 
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
  cube_distance 
 ---------------
              0
 (1 row)
 
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
  cube_distance 
 ---------------
            190
 (1 row)
 
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
   cube_distance   
 ------------------
  140.762210837994
 (1 row)
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
+SELECT "cube"('(1,1.2)'::text);
    cube   
 ----------
  (1, 1.2)
 (1 row)
 
-SELECT cube(NULL);
+SELECT "cube"(NULL);
  cube 
 ------
  
 (1 row)
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
+SELECT cube_dim('(0)'::"cube");
  cube_dim 
 ----------
         1
 (1 row)
 
-SELECT cube_dim('(0,0)'::cube);
+SELECT cube_dim('(0,0)'::"cube");
  cube_dim 
 ----------
         2
 (1 row)
 
-SELECT cube_dim('(0,0,0)'::cube);
+SELECT cube_dim('(0,0,0)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
  cube_dim 
 ----------
         5
@@ -1021,55 +1021,55 @@ SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
 
 -- Test of cube_ll_coord function (retrieves LL coodinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ll_coord 
 ---------------
             -1
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ll_coord 
 ---------------
             -2
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
  cube_ll_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
  cube_ll_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
  cube_ll_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
  cube_ll_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
@@ -1077,55 +1077,55 @@ SELECT cube_ll_coord('(42,137)'::cube, 3);
 
 -- Test of cube_ur_coord function (retrieves UR coodinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
  cube_ur_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
  cube_ur_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
@@ -1133,37 +1133,37 @@ SELECT cube_ur_coord('(42,137)'::cube, 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
+SELECT cube_is_point('(0)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
  cube_is_point 
 ---------------
  f
@@ -1171,121 +1171,121 @@ SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 2);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
  cube_enlarge 
 --------------
  (-2),(2)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 2);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-1, -1),(1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
          cube_enlarge          
 -------------------------------
  (-3, -1, -1, -1),(3, 1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 2);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-4, -3),(3, 8)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
    cube_enlarge   
 ------------------
  (-6, -5),(5, 10)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
   cube_enlarge   
 -----------------
  (-2, -1),(1, 6)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
     cube_enlarge     
 ---------------------
  (-0.5, 1),(-0.5, 4)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
@@ -1293,31 +1293,31 @@ SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
       cube_union      
 ----------------------
  (1, 2, 0),(8, 9, 10)
 (1 row)
 
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
         cube_union         
 ---------------------------
  (1, 2, 0, 0),(4, 2, 0, 0)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
   cube_union   
 ---------------
  (1, 2),(4, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
  cube_union 
 ------------
  (1, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
  cube_union 
 ------------
  (1, 2, 0)
@@ -1325,43 +1325,43 @@ SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
    cube_inter    
 -----------------
  (3, 4),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
   cube_inter   
 ---------------
  (3, 4),(6, 5)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
     cube_inter     
 -------------------
  (13, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
     cube_inter    
 ------------------
  (3, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
  cube_inter 
 ------------
  (10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
  cube_inter 
 ------------
  (1, 2, 3)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
      cube_inter      
 ---------------------
  (5, 6, 3),(1, 2, 3)
@@ -1369,13 +1369,13 @@ SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
  cube_size 
 -----------
         88
 (1 row)
 
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(42,137)'::"cube");
  cube_size 
 -----------
          0
@@ -1383,7 +1383,7 @@ SELECT cube_size('(42,137)'::cube);
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 \copy test_cube from 'data/test_cube.data'
 CREATE INDEX test_cube_ix ON test_cube USING gist (c);
 SELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' ORDER BY c;
diff --git a/contrib/cube/expected/cube_2.out b/contrib/cube/expected/cube_2.out
index 3767d0e..747e9ba 100644
--- a/contrib/cube/expected/cube_2.out
+++ b/contrib/cube/expected/cube_2.out
@@ -1,552 +1,552 @@
 --
 --  Test cube datatype
 --
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 --
 -- testing the input and output functions
 --
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1'::cube AS cube;
+SELECT '-1'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1.'::cube AS cube;
+SELECT '1.'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.'::cube AS cube;
+SELECT '-1.'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '.1'::cube AS cube;
+SELECT '.1'::"cube" AS "cube";
  cube  
 -------
  (0.1)
 (1 row)
 
-SELECT '-.1'::cube AS cube;
+SELECT '-.1'::"cube" AS "cube";
   cube  
 --------
  (-0.1)
 (1 row)
 
-SELECT '1.0'::cube AS cube;
+SELECT '1.0'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.0'::cube AS cube;
+SELECT '-1.0'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1e27'::cube AS cube;
+SELECT '1e27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1e27'::cube AS cube;
+SELECT '-1e27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1.0e27'::cube AS cube;
+SELECT '1.0e27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1.0e27'::cube AS cube;
+SELECT '-1.0e27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1e+27'::cube AS cube;
+SELECT '1e+27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1e+27'::cube AS cube;
+SELECT '-1e+27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1.0e+27'::cube AS cube;
+SELECT '1.0e+27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1.0e+27'::cube AS cube;
+SELECT '-1.0e+27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1e-7'::cube AS cube;
+SELECT '1e-7'::"cube" AS "cube";
    cube   
 ----------
  (1e-007)
 (1 row)
 
-SELECT '-1e-7'::cube AS cube;
+SELECT '-1e-7'::"cube" AS "cube";
    cube    
 -----------
  (-1e-007)
 (1 row)
 
-SELECT '1.0e-7'::cube AS cube;
+SELECT '1.0e-7'::"cube" AS "cube";
    cube   
 ----------
  (1e-007)
 (1 row)
 
-SELECT '-1.0e-7'::cube AS cube;
+SELECT '-1.0e-7'::"cube" AS "cube";
    cube    
 -----------
  (-1e-007)
 (1 row)
 
-SELECT '1e-700'::cube AS cube;
+SELECT '1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '-1e-700'::cube AS cube;
+SELECT '-1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '1234567890123456'::cube AS cube;
+SELECT '1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (1.23456789012346e+015)
 (1 row)
 
-SELECT '+1234567890123456'::cube AS cube;
+SELECT '+1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (1.23456789012346e+015)
 (1 row)
 
-SELECT '-1234567890123456'::cube AS cube;
+SELECT '-1234567890123456'::"cube" AS "cube";
            cube           
 --------------------------
  (-1.23456789012346e+015)
 (1 row)
 
-SELECT '.1234567890123456'::cube AS cube;
+SELECT '.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '+.1234567890123456'::cube AS cube;
+SELECT '+.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '-.1234567890123456'::"cube" AS "cube";
          cube         
 ----------------------
  (-0.123456789012346)
 (1 row)
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '(1,2)'::cube AS cube;
+SELECT '(1,2)'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '1,2,3,4,5'::cube AS cube;
+SELECT '1,2,3,4,5'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '(0),(1)'::cube AS cube;
+SELECT '(0),(1)'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '[(0),(0)]'::cube AS cube;
+SELECT '[(0),(0)]'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '[(0),(1)]'::cube AS cube;
+SELECT '[(0),(1)]'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
+SELECT ''::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT ''::cube AS cube;
+LINE 1: SELECT ''::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT 'ABC'::cube AS cube;
+SELECT 'ABC'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT 'ABC'::cube AS cube;
+LINE 1: SELECT 'ABC'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "A"
-SELECT '()'::cube AS cube;
+SELECT '()'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '()'::cube AS cube;
+LINE 1: SELECT '()'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[]'::cube AS cube;
+SELECT '[]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[]'::cube AS cube;
+LINE 1: SELECT '[]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[()]'::cube AS cube;
+SELECT '[()]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[()]'::cube AS cube;
+LINE 1: SELECT '[()]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[(1)]'::cube AS cube;
+SELECT '[(1)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1)]'::cube AS cube;
+LINE 1: SELECT '[(1)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),]'::cube AS cube;
+SELECT '[(1),]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),]'::cube AS cube;
+LINE 1: SELECT '[(1),]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),2]'::cube AS cube;
+SELECT '[(1),2]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),2]'::cube AS cube;
+LINE 1: SELECT '[(1),2]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "2"
-SELECT '[(1),(2),(3)]'::cube AS cube;
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2),(3)]'::cube AS cube;
+LINE 1: SELECT '[(1),(2),(3)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '1,'::cube AS cube;
+SELECT '1,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,'::cube AS cube;
+LINE 1: SELECT '1,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,2,'::cube AS cube;
+SELECT '1,2,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2,'::cube AS cube;
+LINE 1: SELECT '1,2,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,,2'::cube AS cube;
+SELECT '1,,2'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,,2'::cube AS cube;
+LINE 1: SELECT '1,,2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,)'::cube AS cube;
+SELECT '(1,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,)'::cube AS cube;
+LINE 1: SELECT '(1,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,2,)'::cube AS cube;
+SELECT '(1,2,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,)'::cube AS cube;
+LINE 1: SELECT '(1,2,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,,2)'::cube AS cube;
+SELECT '(1,,2)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,,2)'::cube AS cube;
+LINE 1: SELECT '(1,,2)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2)],'::cube AS cube;
+LINE 1: SELECT '[(1),(2)],'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2,3),(2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2),(1,2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1),(2),'::cube AS cube; -- 2
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
 ERROR:  bad cube representation
-LINE 1: SELECT '(1),(2),'::cube AS cube;
+LINE 1: SELECT '(1),(2),'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3),(2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2,3),(2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2),(1,2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2),(1,2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)ab'::cube AS cube;
+LINE 1: SELECT '(1,2,3)ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2,3)a'::cube AS cube; -- 5
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)a'::cube AS cube;
+LINE 1: SELECT '(1,2,3)a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2)('::cube AS cube; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2)('::cube AS cube;
+LINE 1: SELECT '(1,2)('::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "("
-SELECT '1,2ab'::cube AS cube; -- 6
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2ab'::cube AS cube;
+LINE 1: SELECT '1,2ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1 e7'::cube AS cube; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1 e7'::cube AS cube;
+LINE 1: SELECT '1 e7'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "e"
-SELECT '1,2a'::cube AS cube; -- 7
+SELECT '1,2a'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2a'::cube AS cube;
+LINE 1: SELECT '1,2a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1..2'::cube AS cube;
+LINE 1: SELECT '1..2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ".2"
 --
 -- Testing building cubes from float8 values
 --
-SELECT cube(0::float8);
+SELECT "cube"(0::float8);
  cube 
 ------
  (0)
 (1 row)
 
-SELECT cube(1::float8);
+SELECT "cube"(1::float8);
  cube 
 ------
  (1)
 (1 row)
 
-SELECT cube(1,2);
+SELECT "cube"(1,2);
   cube   
 ---------
  (1),(2)
 (1 row)
 
-SELECT cube(cube(1,2),3);
+SELECT "cube"("cube"(1,2),3);
      cube      
 ---------------
  (1, 3),(2, 3)
 (1 row)
 
-SELECT cube(cube(1,2),3,4);
+SELECT "cube"("cube"(1,2),3,4);
      cube      
 ---------------
  (1, 3),(2, 4)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 5)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 6)
 (1 row)
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
  cube 
 ------
  (0)
 (1 row)
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
         cube         
 ---------------------
  (0, 1, 2),(3, 4, 5)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
 ERROR:  UR and LL arrays must be of same length
-SELECT cube(NULL::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
  cube 
 ------
  
 (1 row)
 
-SELECT cube('{0,1,2}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
         cube_subset        
 ---------------------------
  (5, 3, 1, 1),(8, 7, 6, 6)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
  cube_subset  
 --------------
  (5, 3, 1, 1)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
      cube     
 --------------
  (5, 6, 7, 8)
 (1 row)
 
-SELECT cube(1.37); -- cube_f8
+SELECT "cube"(1.37); -- cube_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(cube(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(1, 24)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 24)
@@ -555,12 +555,12 @@ SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
 DETAIL:  A cube cannot have more than 100 dimensions.
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
@@ -570,37 +570,37 @@ DETAIL:  A cube cannot have more than 100 dimensions.
 --
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -609,97 +609,97 @@ SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1'::cube   < '2'::cube AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1,1'::cube > '1,2'::cube AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,1'::cube < '1,2'::cube AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -707,235 +707,235 @@ SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contains" (the left operand is the cube that entirely encloses the
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
@@ -943,77 +943,77 @@ SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
  cube_distance 
 ---------------
              4
 (1 row)
 
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
  cube_distance 
 ---------------
            0.5
 (1 row)
 
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
  cube_distance 
 ---------------
              0
 (1 row)
 
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
  cube_distance 
 ---------------
            190
 (1 row)
 
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
   cube_distance   
 ------------------
  140.762210837994
 (1 row)
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
+SELECT "cube"('(1,1.2)'::text);
    cube   
 ----------
  (1, 1.2)
 (1 row)
 
-SELECT cube(NULL);
+SELECT "cube"(NULL);
  cube 
 ------
  
 (1 row)
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
+SELECT cube_dim('(0)'::"cube");
  cube_dim 
 ----------
         1
 (1 row)
 
-SELECT cube_dim('(0,0)'::cube);
+SELECT cube_dim('(0,0)'::"cube");
  cube_dim 
 ----------
         2
 (1 row)
 
-SELECT cube_dim('(0,0,0)'::cube);
+SELECT cube_dim('(0,0,0)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
  cube_dim 
 ----------
         5
@@ -1021,55 +1021,55 @@ SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
 
 -- Test of cube_ll_coord function (retrieves LL coodinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ll_coord 
 ---------------
             -1
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ll_coord 
 ---------------
             -2
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
  cube_ll_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
  cube_ll_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
  cube_ll_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
  cube_ll_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
@@ -1077,55 +1077,55 @@ SELECT cube_ll_coord('(42,137)'::cube, 3);
 
 -- Test of cube_ur_coord function (retrieves UR coodinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
  cube_ur_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
  cube_ur_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
@@ -1133,37 +1133,37 @@ SELECT cube_ur_coord('(42,137)'::cube, 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
+SELECT cube_is_point('(0)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
  cube_is_point 
 ---------------
  f
@@ -1171,121 +1171,121 @@ SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 2);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
  cube_enlarge 
 --------------
  (-2),(2)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 2);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-1, -1),(1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
          cube_enlarge          
 -------------------------------
  (-3, -1, -1, -1),(3, 1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 2);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-4, -3),(3, 8)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
    cube_enlarge   
 ------------------
  (-6, -5),(5, 10)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
   cube_enlarge   
 -----------------
  (-2, -1),(1, 6)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
     cube_enlarge     
 ---------------------
  (-0.5, 1),(-0.5, 4)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
@@ -1293,31 +1293,31 @@ SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
       cube_union      
 ----------------------
  (1, 2, 0),(8, 9, 10)
 (1 row)
 
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
         cube_union         
 ---------------------------
  (1, 2, 0, 0),(4, 2, 0, 0)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
   cube_union   
 ---------------
  (1, 2),(4, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
  cube_union 
 ------------
  (1, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
  cube_union 
 ------------
  (1, 2, 0)
@@ -1325,43 +1325,43 @@ SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
    cube_inter    
 -----------------
  (3, 4),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
   cube_inter   
 ---------------
  (3, 4),(6, 5)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
     cube_inter     
 -------------------
  (13, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
     cube_inter    
 ------------------
  (3, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
  cube_inter 
 ------------
  (10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
  cube_inter 
 ------------
  (1, 2, 3)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
      cube_inter      
 ---------------------
  (5, 6, 3),(1, 2, 3)
@@ -1369,13 +1369,13 @@ SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
  cube_size 
 -----------
         88
 (1 row)
 
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(42,137)'::"cube");
  cube_size 
 -----------
          0
@@ -1383,7 +1383,7 @@ SELECT cube_size('(42,137)'::cube);
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 \copy test_cube from 'data/test_cube.data'
 CREATE INDEX test_cube_ix ON test_cube USING gist (c);
 SELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' ORDER BY c;
diff --git a/contrib/cube/expected/cube_3.out b/contrib/cube/expected/cube_3.out
index 2aa42be..33baec1 100644
--- a/contrib/cube/expected/cube_3.out
+++ b/contrib/cube/expected/cube_3.out
@@ -1,552 +1,552 @@
 --
 --  Test cube datatype
 --
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 --
 -- testing the input and output functions
 --
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1'::cube AS cube;
+SELECT '-1'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1.'::cube AS cube;
+SELECT '1.'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.'::cube AS cube;
+SELECT '-1.'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '.1'::cube AS cube;
+SELECT '.1'::"cube" AS "cube";
  cube  
 -------
  (0.1)
 (1 row)
 
-SELECT '-.1'::cube AS cube;
+SELECT '-.1'::"cube" AS "cube";
   cube  
 --------
  (-0.1)
 (1 row)
 
-SELECT '1.0'::cube AS cube;
+SELECT '1.0'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.0'::cube AS cube;
+SELECT '-1.0'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1e27'::cube AS cube;
+SELECT '1e27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1e27'::cube AS cube;
+SELECT '-1e27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1.0e27'::cube AS cube;
+SELECT '1.0e27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1.0e27'::cube AS cube;
+SELECT '-1.0e27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1e+27'::cube AS cube;
+SELECT '1e+27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1e+27'::cube AS cube;
+SELECT '-1e+27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1.0e+27'::cube AS cube;
+SELECT '1.0e+27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1.0e+27'::cube AS cube;
+SELECT '-1.0e+27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1e-7'::cube AS cube;
+SELECT '1e-7'::"cube" AS "cube";
    cube   
 ----------
  (1e-007)
 (1 row)
 
-SELECT '-1e-7'::cube AS cube;
+SELECT '-1e-7'::"cube" AS "cube";
    cube    
 -----------
  (-1e-007)
 (1 row)
 
-SELECT '1.0e-7'::cube AS cube;
+SELECT '1.0e-7'::"cube" AS "cube";
    cube   
 ----------
  (1e-007)
 (1 row)
 
-SELECT '-1.0e-7'::cube AS cube;
+SELECT '-1.0e-7'::"cube" AS "cube";
    cube    
 -----------
  (-1e-007)
 (1 row)
 
-SELECT '1e-700'::cube AS cube;
+SELECT '1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '-1e-700'::cube AS cube;
+SELECT '-1e-700'::"cube" AS "cube";
  cube 
 ------
  (-0)
 (1 row)
 
-SELECT '1234567890123456'::cube AS cube;
+SELECT '1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (1.23456789012346e+015)
 (1 row)
 
-SELECT '+1234567890123456'::cube AS cube;
+SELECT '+1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (1.23456789012346e+015)
 (1 row)
 
-SELECT '-1234567890123456'::cube AS cube;
+SELECT '-1234567890123456'::"cube" AS "cube";
            cube           
 --------------------------
  (-1.23456789012346e+015)
 (1 row)
 
-SELECT '.1234567890123456'::cube AS cube;
+SELECT '.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '+.1234567890123456'::cube AS cube;
+SELECT '+.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '-.1234567890123456'::"cube" AS "cube";
          cube         
 ----------------------
  (-0.123456789012346)
 (1 row)
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '(1,2)'::cube AS cube;
+SELECT '(1,2)'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '1,2,3,4,5'::cube AS cube;
+SELECT '1,2,3,4,5'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '(0),(1)'::cube AS cube;
+SELECT '(0),(1)'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '[(0),(0)]'::cube AS cube;
+SELECT '[(0),(0)]'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '[(0),(1)]'::cube AS cube;
+SELECT '[(0),(1)]'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
+SELECT ''::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT ''::cube AS cube;
+LINE 1: SELECT ''::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT 'ABC'::cube AS cube;
+SELECT 'ABC'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT 'ABC'::cube AS cube;
+LINE 1: SELECT 'ABC'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "A"
-SELECT '()'::cube AS cube;
+SELECT '()'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '()'::cube AS cube;
+LINE 1: SELECT '()'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[]'::cube AS cube;
+SELECT '[]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[]'::cube AS cube;
+LINE 1: SELECT '[]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[()]'::cube AS cube;
+SELECT '[()]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[()]'::cube AS cube;
+LINE 1: SELECT '[()]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[(1)]'::cube AS cube;
+SELECT '[(1)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1)]'::cube AS cube;
+LINE 1: SELECT '[(1)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),]'::cube AS cube;
+SELECT '[(1),]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),]'::cube AS cube;
+LINE 1: SELECT '[(1),]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),2]'::cube AS cube;
+SELECT '[(1),2]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),2]'::cube AS cube;
+LINE 1: SELECT '[(1),2]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "2"
-SELECT '[(1),(2),(3)]'::cube AS cube;
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2),(3)]'::cube AS cube;
+LINE 1: SELECT '[(1),(2),(3)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '1,'::cube AS cube;
+SELECT '1,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,'::cube AS cube;
+LINE 1: SELECT '1,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,2,'::cube AS cube;
+SELECT '1,2,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2,'::cube AS cube;
+LINE 1: SELECT '1,2,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,,2'::cube AS cube;
+SELECT '1,,2'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,,2'::cube AS cube;
+LINE 1: SELECT '1,,2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,)'::cube AS cube;
+SELECT '(1,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,)'::cube AS cube;
+LINE 1: SELECT '(1,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,2,)'::cube AS cube;
+SELECT '(1,2,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,)'::cube AS cube;
+LINE 1: SELECT '(1,2,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,,2)'::cube AS cube;
+SELECT '(1,,2)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,,2)'::cube AS cube;
+LINE 1: SELECT '(1,,2)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2)],'::cube AS cube;
+LINE 1: SELECT '[(1),(2)],'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2,3),(2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2),(1,2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1),(2),'::cube AS cube; -- 2
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
 ERROR:  bad cube representation
-LINE 1: SELECT '(1),(2),'::cube AS cube;
+LINE 1: SELECT '(1),(2),'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3),(2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2,3),(2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2),(1,2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2),(1,2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)ab'::cube AS cube;
+LINE 1: SELECT '(1,2,3)ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2,3)a'::cube AS cube; -- 5
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)a'::cube AS cube;
+LINE 1: SELECT '(1,2,3)a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2)('::cube AS cube; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2)('::cube AS cube;
+LINE 1: SELECT '(1,2)('::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "("
-SELECT '1,2ab'::cube AS cube; -- 6
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2ab'::cube AS cube;
+LINE 1: SELECT '1,2ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1 e7'::cube AS cube; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1 e7'::cube AS cube;
+LINE 1: SELECT '1 e7'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "e"
-SELECT '1,2a'::cube AS cube; -- 7
+SELECT '1,2a'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2a'::cube AS cube;
+LINE 1: SELECT '1,2a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1..2'::cube AS cube;
+LINE 1: SELECT '1..2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ".2"
 --
 -- Testing building cubes from float8 values
 --
-SELECT cube(0::float8);
+SELECT "cube"(0::float8);
  cube 
 ------
  (0)
 (1 row)
 
-SELECT cube(1::float8);
+SELECT "cube"(1::float8);
  cube 
 ------
  (1)
 (1 row)
 
-SELECT cube(1,2);
+SELECT "cube"(1,2);
   cube   
 ---------
  (1),(2)
 (1 row)
 
-SELECT cube(cube(1,2),3);
+SELECT "cube"("cube"(1,2),3);
      cube      
 ---------------
  (1, 3),(2, 3)
 (1 row)
 
-SELECT cube(cube(1,2),3,4);
+SELECT "cube"("cube"(1,2),3,4);
      cube      
 ---------------
  (1, 3),(2, 4)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 5)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 6)
 (1 row)
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
  cube 
 ------
  (0)
 (1 row)
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
         cube         
 ---------------------
  (0, 1, 2),(3, 4, 5)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
 ERROR:  UR and LL arrays must be of same length
-SELECT cube(NULL::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
  cube 
 ------
  
 (1 row)
 
-SELECT cube('{0,1,2}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
         cube_subset        
 ---------------------------
  (5, 3, 1, 1),(8, 7, 6, 6)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
  cube_subset  
 --------------
  (5, 3, 1, 1)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
      cube     
 --------------
  (5, 6, 7, 8)
 (1 row)
 
-SELECT cube(1.37); -- cube_f8
+SELECT "cube"(1.37); -- cube_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(cube(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(1, 24)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 24)
@@ -555,12 +555,12 @@ SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
 DETAIL:  A cube cannot have more than 100 dimensions.
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
@@ -570,37 +570,37 @@ DETAIL:  A cube cannot have more than 100 dimensions.
 --
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -609,97 +609,97 @@ SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1'::cube   < '2'::cube AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1,1'::cube > '1,2'::cube AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,1'::cube < '1,2'::cube AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -707,235 +707,235 @@ SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contains" (the left operand is the cube that entirely encloses the
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
@@ -943,77 +943,77 @@ SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
  cube_distance 
 ---------------
              4
 (1 row)
 
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
  cube_distance 
 ---------------
            0.5
 (1 row)
 
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
  cube_distance 
 ---------------
              0
 (1 row)
 
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
  cube_distance 
 ---------------
            190
 (1 row)
 
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
   cube_distance   
 ------------------
  140.762210837994
 (1 row)
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
+SELECT "cube"('(1,1.2)'::text);
    cube   
 ----------
  (1, 1.2)
 (1 row)
 
-SELECT cube(NULL);
+SELECT "cube"(NULL);
  cube 
 ------
  
 (1 row)
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
+SELECT cube_dim('(0)'::"cube");
  cube_dim 
 ----------
         1
 (1 row)
 
-SELECT cube_dim('(0,0)'::cube);
+SELECT cube_dim('(0,0)'::"cube");
  cube_dim 
 ----------
         2
 (1 row)
 
-SELECT cube_dim('(0,0,0)'::cube);
+SELECT cube_dim('(0,0,0)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
  cube_dim 
 ----------
         5
@@ -1021,55 +1021,55 @@ SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
 
 -- Test of cube_ll_coord function (retrieves LL coodinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ll_coord 
 ---------------
             -1
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ll_coord 
 ---------------
             -2
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
  cube_ll_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
  cube_ll_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
  cube_ll_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
  cube_ll_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
@@ -1077,55 +1077,55 @@ SELECT cube_ll_coord('(42,137)'::cube, 3);
 
 -- Test of cube_ur_coord function (retrieves UR coodinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
  cube_ur_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
  cube_ur_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
@@ -1133,37 +1133,37 @@ SELECT cube_ur_coord('(42,137)'::cube, 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
+SELECT cube_is_point('(0)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
  cube_is_point 
 ---------------
  f
@@ -1171,121 +1171,121 @@ SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 2);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
  cube_enlarge 
 --------------
  (-2),(2)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 2);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-1, -1),(1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
          cube_enlarge          
 -------------------------------
  (-3, -1, -1, -1),(3, 1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 2);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-4, -3),(3, 8)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
    cube_enlarge   
 ------------------
  (-6, -5),(5, 10)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
   cube_enlarge   
 -----------------
  (-2, -1),(1, 6)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
     cube_enlarge     
 ---------------------
  (-0.5, 1),(-0.5, 4)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
@@ -1293,31 +1293,31 @@ SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
       cube_union      
 ----------------------
  (1, 2, 0),(8, 9, 10)
 (1 row)
 
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
         cube_union         
 ---------------------------
  (1, 2, 0, 0),(4, 2, 0, 0)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
   cube_union   
 ---------------
  (1, 2),(4, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
  cube_union 
 ------------
  (1, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
  cube_union 
 ------------
  (1, 2, 0)
@@ -1325,43 +1325,43 @@ SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
    cube_inter    
 -----------------
  (3, 4),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
   cube_inter   
 ---------------
  (3, 4),(6, 5)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
     cube_inter     
 -------------------
  (13, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
     cube_inter    
 ------------------
  (3, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
  cube_inter 
 ------------
  (10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
  cube_inter 
 ------------
  (1, 2, 3)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
      cube_inter      
 ---------------------
  (5, 6, 3),(1, 2, 3)
@@ -1369,13 +1369,13 @@ SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
  cube_size 
 -----------
         88
 (1 row)
 
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(42,137)'::"cube");
  cube_size 
 -----------
          0
@@ -1383,7 +1383,7 @@ SELECT cube_size('(42,137)'::cube);
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 \copy test_cube from 'data/test_cube.data'
 CREATE INDEX test_cube_ix ON test_cube USING gist (c);
 SELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' ORDER BY c;
diff --git a/contrib/cube/sql/cube.sql b/contrib/cube/sql/cube.sql
index d58974c..da80472 100644
--- a/contrib/cube/sql/cube.sql
+++ b/contrib/cube/sql/cube.sql
@@ -2,141 +2,141 @@
 --  Test cube datatype
 --
 
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 
 --
 -- testing the input and output functions
 --
 
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
-SELECT '-1'::cube AS cube;
-SELECT '1.'::cube AS cube;
-SELECT '-1.'::cube AS cube;
-SELECT '.1'::cube AS cube;
-SELECT '-.1'::cube AS cube;
-SELECT '1.0'::cube AS cube;
-SELECT '-1.0'::cube AS cube;
-SELECT '1e27'::cube AS cube;
-SELECT '-1e27'::cube AS cube;
-SELECT '1.0e27'::cube AS cube;
-SELECT '-1.0e27'::cube AS cube;
-SELECT '1e+27'::cube AS cube;
-SELECT '-1e+27'::cube AS cube;
-SELECT '1.0e+27'::cube AS cube;
-SELECT '-1.0e+27'::cube AS cube;
-SELECT '1e-7'::cube AS cube;
-SELECT '-1e-7'::cube AS cube;
-SELECT '1.0e-7'::cube AS cube;
-SELECT '-1.0e-7'::cube AS cube;
-SELECT '1e-700'::cube AS cube;
-SELECT '-1e-700'::cube AS cube;
-SELECT '1234567890123456'::cube AS cube;
-SELECT '+1234567890123456'::cube AS cube;
-SELECT '-1234567890123456'::cube AS cube;
-SELECT '.1234567890123456'::cube AS cube;
-SELECT '+.1234567890123456'::cube AS cube;
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
+SELECT '-1'::"cube" AS "cube";
+SELECT '1.'::"cube" AS "cube";
+SELECT '-1.'::"cube" AS "cube";
+SELECT '.1'::"cube" AS "cube";
+SELECT '-.1'::"cube" AS "cube";
+SELECT '1.0'::"cube" AS "cube";
+SELECT '-1.0'::"cube" AS "cube";
+SELECT '1e27'::"cube" AS "cube";
+SELECT '-1e27'::"cube" AS "cube";
+SELECT '1.0e27'::"cube" AS "cube";
+SELECT '-1.0e27'::"cube" AS "cube";
+SELECT '1e+27'::"cube" AS "cube";
+SELECT '-1e+27'::"cube" AS "cube";
+SELECT '1.0e+27'::"cube" AS "cube";
+SELECT '-1.0e+27'::"cube" AS "cube";
+SELECT '1e-7'::"cube" AS "cube";
+SELECT '-1e-7'::"cube" AS "cube";
+SELECT '1.0e-7'::"cube" AS "cube";
+SELECT '-1.0e-7'::"cube" AS "cube";
+SELECT '1e-700'::"cube" AS "cube";
+SELECT '-1e-700'::"cube" AS "cube";
+SELECT '1234567890123456'::"cube" AS "cube";
+SELECT '+1234567890123456'::"cube" AS "cube";
+SELECT '-1234567890123456'::"cube" AS "cube";
+SELECT '.1234567890123456'::"cube" AS "cube";
+SELECT '+.1234567890123456'::"cube" AS "cube";
+SELECT '-.1234567890123456'::"cube" AS "cube";
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
-SELECT '(1,2)'::cube AS cube;
-SELECT '1,2,3,4,5'::cube AS cube;
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
+SELECT '(1,2)'::"cube" AS "cube";
+SELECT '1,2,3,4,5'::"cube" AS "cube";
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
-SELECT '(0),(1)'::cube AS cube;
-SELECT '[(0),(0)]'::cube AS cube;
-SELECT '[(0),(1)]'::cube AS cube;
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
+SELECT '(0),(1)'::"cube" AS "cube";
+SELECT '[(0),(0)]'::"cube" AS "cube";
+SELECT '[(0),(1)]'::"cube" AS "cube";
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
-SELECT 'ABC'::cube AS cube;
-SELECT '()'::cube AS cube;
-SELECT '[]'::cube AS cube;
-SELECT '[()]'::cube AS cube;
-SELECT '[(1)]'::cube AS cube;
-SELECT '[(1),]'::cube AS cube;
-SELECT '[(1),2]'::cube AS cube;
-SELECT '[(1),(2),(3)]'::cube AS cube;
-SELECT '1,'::cube AS cube;
-SELECT '1,2,'::cube AS cube;
-SELECT '1,,2'::cube AS cube;
-SELECT '(1,)'::cube AS cube;
-SELECT '(1,2,)'::cube AS cube;
-SELECT '(1,,2)'::cube AS cube;
+SELECT ''::"cube" AS "cube";
+SELECT 'ABC'::"cube" AS "cube";
+SELECT '()'::"cube" AS "cube";
+SELECT '[]'::"cube" AS "cube";
+SELECT '[()]'::"cube" AS "cube";
+SELECT '[(1)]'::"cube" AS "cube";
+SELECT '[(1),]'::"cube" AS "cube";
+SELECT '[(1),2]'::"cube" AS "cube";
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
+SELECT '1,'::"cube" AS "cube";
+SELECT '1,2,'::"cube" AS "cube";
+SELECT '1,,2'::"cube" AS "cube";
+SELECT '(1,)'::"cube" AS "cube";
+SELECT '(1,2,)'::"cube" AS "cube";
+SELECT '(1,,2)'::"cube" AS "cube";
 
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
-SELECT '(1),(2),'::cube AS cube; -- 2
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
-SELECT '(1,2,3)a'::cube AS cube; -- 5
-SELECT '(1,2)('::cube AS cube; -- 5
-SELECT '1,2ab'::cube AS cube; -- 6
-SELECT '1 e7'::cube AS cube; -- 6
-SELECT '1,2a'::cube AS cube; -- 7
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
+SELECT '1,2a'::"cube" AS "cube"; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 
 --
 -- Testing building cubes from float8 values
 --
 
-SELECT cube(0::float8);
-SELECT cube(1::float8);
-SELECT cube(1,2);
-SELECT cube(cube(1,2),3);
-SELECT cube(cube(1,2),3,4);
-SELECT cube(cube(cube(1,2),3,4),5);
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"(0::float8);
+SELECT "cube"(1::float8);
+SELECT "cube"(1,2);
+SELECT "cube"("cube"(1,2),3);
+SELECT "cube"("cube"(1,2),3,4);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
 
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
-SELECT cube(NULL::float[], '{3}'::float[]);
-SELECT cube('{0,1,2}'::float[]);
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
-SELECT cube(1.37); -- cube_f8
-SELECT cube(1.37, 1.37); -- cube_f8_f8
-SELECT cube(cube(1,1), 42); -- cube_c_f8
-SELECT cube(cube(1,2), 42); -- cube_c_f8
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"(1.37); -- cube_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
 
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
 
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 
 --
 -- testing the  operators
@@ -144,190 +144,190 @@ select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
 
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
 
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
-SELECT '1'::cube   < '2'::cube AS bool;
-SELECT '1,1'::cube > '1,2'::cube AS bool;
-SELECT '1,1'::cube < '1,2'::cube AS bool;
-
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
+
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
 
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
 
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
-
-
--- "contains" (the left operand is the cube that entirely encloses the
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
+
+
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
-SELECT cube(NULL);
+SELECT "cube"('(1,1.2)'::text);
+SELECT "cube"(NULL);
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
-SELECT cube_dim('(0,0)'::cube);
-SELECT cube_dim('(0,0,0)'::cube);
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(0)'::"cube");
+SELECT cube_dim('(0,0)'::"cube");
+SELECT cube_dim('(0,0,0)'::"cube");
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
 
 -- Test of cube_ll_coord function (retrieves LL coodinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
-SELECT cube_ll_coord('(42,137)'::cube, 1);
-SELECT cube_ll_coord('(42,137)'::cube, 2);
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
 
 -- Test of cube_ur_coord function (retrieves UR coodinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
-SELECT cube_ur_coord('(42,137)'::cube, 1);
-SELECT cube_ur_coord('(42,137)'::cube, 2);
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
-SELECT cube_is_point('(0,1,2)'::cube);
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0)'::"cube");
+SELECT cube_is_point('(0,1,2)'::"cube");
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
-SELECT cube_enlarge('(0)'::cube, 0, 1);
-SELECT cube_enlarge('(0)'::cube, 0, 2);
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
-SELECT cube_enlarge('(0)'::cube, 1, 0);
-SELECT cube_enlarge('(0)'::cube, 1, 1);
-SELECT cube_enlarge('(0)'::cube, 1, 2);
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
-SELECT cube_enlarge('(0)'::cube, -1, 0);
-SELECT cube_enlarge('(0)'::cube, -1, 1);
-SELECT cube_enlarge('(0)'::cube, -1, 2);
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
+SELECT cube_size('(42,137)'::"cube");
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 
 \copy test_cube from 'data/test_cube.data'
 
diff --git a/contrib/earthdistance/earthdistance--1.0.sql b/contrib/earthdistance/earthdistance--1.0.sql
index 4af9062..ad22f65 100644
--- a/contrib/earthdistance/earthdistance--1.0.sql
+++ b/contrib/earthdistance/earthdistance--1.0.sql
@@ -27,10 +27,10 @@ AS 'SELECT ''6378168''::float8';
 -- and that the point must be very near the surface of the sphere
 -- centered about the origin with the radius of the earth.
 
-CREATE DOMAIN earth AS cube
+CREATE DOMAIN earth AS "cube"
   CONSTRAINT not_point check(cube_is_point(value))
   CONSTRAINT not_3d check(cube_dim(value) <= 3)
-  CONSTRAINT on_surface check(abs(cube_distance(value, '(0)'::cube) /
+  CONSTRAINT on_surface check(abs(cube_distance(value, '(0)'::"cube") /
   earth() - 1) < '10e-7'::float8);
 
 CREATE FUNCTION sec_to_gc(float8)
@@ -49,7 +49,7 @@ CREATE FUNCTION ll_to_earth(float8, float8)
 RETURNS earth
 LANGUAGE SQL
 IMMUTABLE STRICT
-AS 'SELECT cube(cube(cube(earth()*cos(radians($1))*cos(radians($2))),earth()*cos(radians($1))*sin(radians($2))),earth()*sin(radians($1)))::earth';
+AS 'SELECT "cube"("cube"("cube"(earth()*cos(radians($1))*cos(radians($2))),earth()*cos(radians($1))*sin(radians($2))),earth()*sin(radians($1)))::earth';
 
 CREATE FUNCTION latitude(earth)
 RETURNS float8
@@ -70,7 +70,7 @@ IMMUTABLE STRICT
 AS 'SELECT sec_to_gc(cube_distance($1, $2))';
 
 CREATE FUNCTION earth_box(earth, float8)
-RETURNS cube
+RETURNS "cube"
 LANGUAGE SQL
 IMMUTABLE STRICT
 AS 'SELECT cube_enlarge($1, gc_to_sec($2), 3)';
diff --git a/contrib/earthdistance/expected/earthdistance.out b/contrib/earthdistance/expected/earthdistance.out
index 9bd556f..f99276f 100644
--- a/contrib/earthdistance/expected/earthdistance.out
+++ b/contrib/earthdistance/expected/earthdistance.out
@@ -9,7 +9,7 @@
 --
 CREATE EXTENSION earthdistance;  -- fail, must install cube first
 ERROR:  required extension "cube" is not installed
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 CREATE EXTENSION earthdistance;
 --
 -- The radius of the Earth we are using.
@@ -892,7 +892,7 @@ SELECT cube_dim(ll_to_earth(0,0)) <= 3;
  t
 (1 row)
 
-SELECT abs(cube_distance(ll_to_earth(0,0), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(0,0), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
  ?column? 
 ----------
@@ -910,7 +910,7 @@ SELECT cube_dim(ll_to_earth(30,60)) <= 3;
  t
 (1 row)
 
-SELECT abs(cube_distance(ll_to_earth(30,60), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(30,60), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
  ?column? 
 ----------
@@ -928,7 +928,7 @@ SELECT cube_dim(ll_to_earth(60,90)) <= 3;
  t
 (1 row)
 
-SELECT abs(cube_distance(ll_to_earth(60,90), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(60,90), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
  ?column? 
 ----------
@@ -946,7 +946,7 @@ SELECT cube_dim(ll_to_earth(-30,-90)) <= 3;
  t
 (1 row)
 
-SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
  ?column? 
 ----------
@@ -959,35 +959,35 @@ SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::cube) / earth() - 1) <
 -- list what's installed
 \dT
                                               List of data types
- Schema | Name  |                                         Description                                         
---------+-------+---------------------------------------------------------------------------------------------
- public | cube  | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
- public | earth | 
+ Schema |  Name  |                                         Description                                         
+--------+--------+---------------------------------------------------------------------------------------------
+ public | "cube" | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
+ public | earth  | 
 (2 rows)
 
-drop extension cube;  -- fail, earthdistance requires it
+drop extension "cube";  -- fail, earthdistance requires it
 ERROR:  cannot drop extension cube because other objects depend on it
 DETAIL:  extension earthdistance depends on extension cube
 HINT:  Use DROP ... CASCADE to drop the dependent objects too.
 drop extension earthdistance;
-drop type cube;  -- fail, extension cube requires it
-ERROR:  cannot drop type cube because extension cube requires it
+drop type "cube";  -- fail, extension cube requires it
+ERROR:  cannot drop type "cube" because extension cube requires it
 HINT:  You can drop extension cube instead.
 -- list what's installed
 \dT
-                                             List of data types
- Schema | Name |                                         Description                                         
---------+------+---------------------------------------------------------------------------------------------
- public | cube | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
+                                              List of data types
+ Schema |  Name  |                                         Description                                         
+--------+--------+---------------------------------------------------------------------------------------------
+ public | "cube" | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
 (1 row)
 
-create table foo (f1 cube, f2 int);
-drop extension cube;  -- fail, foo.f1 requires it
+create table foo (f1 "cube", f2 int);
+drop extension "cube";  -- fail, foo.f1 requires it
 ERROR:  cannot drop extension cube because other objects depend on it
-DETAIL:  table foo column f1 depends on type cube
+DETAIL:  table foo column f1 depends on type "cube"
 HINT:  Use DROP ... CASCADE to drop the dependent objects too.
 drop table foo;
-drop extension cube;
+drop extension "cube";
 -- list what's installed
 \dT
      List of data types
@@ -1008,7 +1008,7 @@ drop extension cube;
 (0 rows)
 
 create schema c;
-create extension cube with schema c;
+create extension "cube" with schema c;
 -- list what's installed
 \dT public.*
      List of data types
@@ -1029,23 +1029,23 @@ create extension cube with schema c;
 (0 rows)
 
 \dT c.*
-                                              List of data types
- Schema |  Name  |                                         Description                                         
---------+--------+---------------------------------------------------------------------------------------------
- c      | c.cube | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
+                                               List of data types
+ Schema |   Name   |                                         Description                                         
+--------+----------+---------------------------------------------------------------------------------------------
+ c      | c."cube" | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
 (1 row)
 
-create table foo (f1 c.cube, f2 int);
-drop extension cube;  -- fail, foo.f1 requires it
+create table foo (f1 c."cube", f2 int);
+drop extension "cube";  -- fail, foo.f1 requires it
 ERROR:  cannot drop extension cube because other objects depend on it
-DETAIL:  table foo column f1 depends on type c.cube
+DETAIL:  table foo column f1 depends on type c."cube"
 HINT:  Use DROP ... CASCADE to drop the dependent objects too.
 drop schema c;  -- fail, cube requires it
 ERROR:  cannot drop schema c because other objects depend on it
 DETAIL:  extension cube depends on schema c
-table foo column f1 depends on type c.cube
+table foo column f1 depends on type c."cube"
 HINT:  Use DROP ... CASCADE to drop the dependent objects too.
-drop extension cube cascade;
+drop extension "cube" cascade;
 NOTICE:  drop cascades to table foo column f1
 \d foo
       Table "public.foo"
diff --git a/contrib/earthdistance/sql/earthdistance.sql b/contrib/earthdistance/sql/earthdistance.sql
index 8604502..35dd9b8 100644
--- a/contrib/earthdistance/sql/earthdistance.sql
+++ b/contrib/earthdistance/sql/earthdistance.sql
@@ -9,7 +9,7 @@
 --
 
 CREATE EXTENSION earthdistance;  -- fail, must install cube first
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 CREATE EXTENSION earthdistance;
 
 --
@@ -284,19 +284,19 @@ SELECT earth_box(ll_to_earth(90,180),
 
 SELECT is_point(ll_to_earth(0,0));
 SELECT cube_dim(ll_to_earth(0,0)) <= 3;
-SELECT abs(cube_distance(ll_to_earth(0,0), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(0,0), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
 SELECT is_point(ll_to_earth(30,60));
 SELECT cube_dim(ll_to_earth(30,60)) <= 3;
-SELECT abs(cube_distance(ll_to_earth(30,60), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(30,60), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
 SELECT is_point(ll_to_earth(60,90));
 SELECT cube_dim(ll_to_earth(60,90)) <= 3;
-SELECT abs(cube_distance(ll_to_earth(60,90), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(60,90), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
 SELECT is_point(ll_to_earth(-30,-90));
 SELECT cube_dim(ll_to_earth(-30,-90)) <= 3;
-SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
 
 --
@@ -306,22 +306,22 @@ SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::cube) / earth() - 1) <
 -- list what's installed
 \dT
 
-drop extension cube;  -- fail, earthdistance requires it
+drop extension "cube";  -- fail, earthdistance requires it
 
 drop extension earthdistance;
 
-drop type cube;  -- fail, extension cube requires it
+drop type "cube";  -- fail, extension cube requires it
 
 -- list what's installed
 \dT
 
-create table foo (f1 cube, f2 int);
+create table foo (f1 "cube", f2 int);
 
-drop extension cube;  -- fail, foo.f1 requires it
+drop extension "cube";  -- fail, foo.f1 requires it
 
 drop table foo;
 
-drop extension cube;
+drop extension "cube";
 
 -- list what's installed
 \dT
@@ -330,7 +330,7 @@ drop extension cube;
 
 create schema c;
 
-create extension cube with schema c;
+create extension "cube" with schema c;
 
 -- list what's installed
 \dT public.*
@@ -338,13 +338,13 @@ create extension cube with schema c;
 \do public.*
 \dT c.*
 
-create table foo (f1 c.cube, f2 int);
+create table foo (f1 c."cube", f2 int);
 
-drop extension cube;  -- fail, foo.f1 requires it
+drop extension "cube";  -- fail, foo.f1 requires it
 
 drop schema c;  -- fail, cube requires it
 
-drop extension cube cascade;
+drop extension "cube" cascade;
 
 \d foo
 
Attachment: gsp-u.patch (text/x-patch)
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 493c30f..b53dd9c 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -662,6 +662,11 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
  * and for NULL so that it can follow b_expr in ColQualList without creating
  * postfix-operator problems.
  *
+ * To support CUBE and ROLLUP in GROUP BY without reserving them, we give them
+ * an explicit priority lower than '(', so that a rule with CUBE '(' will shift
+ * rather than reducing a conflicting rule that takes CUBE as a function name.
+ * Using the same precedence as IDENT seems right for the reasons given above.
+ *
  * The frame_bound productions UNBOUNDED PRECEDING and UNBOUNDED FOLLOWING
  * are even messier: since UNBOUNDED is an unreserved keyword (per spec!),
  * there is no principled way to distinguish these from the productions
@@ -672,7 +677,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
  * blame any funny behavior of UNBOUNDED on the SQL standard, though.
  */
 %nonassoc	UNBOUNDED		/* ideally should have same precedence as IDENT */
-%nonassoc	IDENT NULL_P PARTITION RANGE ROWS PRECEDING FOLLOWING
+%nonassoc	IDENT NULL_P PARTITION RANGE ROWS PRECEDING FOLLOWING CUBE ROLLUP
 %left		Op OPERATOR		/* multi-character ops and user-defined operators */
 %nonassoc	NOTNULL
 %nonassoc	ISNULL
@@ -9876,6 +9881,12 @@ empty_grouping_set:
 				}
 		;
 
+/*
+ * These hacks rely on setting precedence of CUBE and ROLLUP below that of '(',
+ * so that they shift in these rules rather than reducing the conflicting
+ * unreserved_keyword rule.
+ */
+
 rollup_clause:
 			ROLLUP '(' expr_list ')'
 				{
@@ -12997,6 +13008,7 @@ unreserved_keyword:
 			| COPY
 			| COST
 			| CSV
+			| CUBE
 			| CURRENT_P
 			| CURSOR
 			| CYCLE
@@ -13143,6 +13155,7 @@ unreserved_keyword:
 			| REVOKE
 			| ROLE
 			| ROLLBACK
+			| ROLLUP
 			| ROWS
 			| RULE
 			| SAVEPOINT
@@ -13234,7 +13247,6 @@ col_name_keyword:
 			| CHAR_P
 			| CHARACTER
 			| COALESCE
-			| CUBE
 			| DEC
 			| DECIMAL_P
 			| EXISTS
@@ -13257,7 +13269,6 @@ col_name_keyword:
 			| POSITION
 			| PRECISION
 			| REAL
-			| ROLLUP
 			| ROW
 			| SETOF
 			| SMALLINT
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 5344736..e170964 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -4888,12 +4888,13 @@ get_rule_sortgroupclause(Index ref, List *tlist, bool force_colno,
 	expr = (Node *) tle->expr;
 
 	/*
-	 * Use column-number form if requested by caller.  Otherwise, if
-	 * expression is a constant, force it to be dumped with an explicit cast
-	 * as decoration --- this is because a simple integer constant is
-	 * ambiguous (and will be misinterpreted by findTargetlistEntry()) if we
-	 * dump it without any decoration.  Otherwise, just dump the expression
-	 * normally.
+	 * Use column-number form if requested by caller.  Otherwise, if expression
+	 * is a constant, force it to be dumped with an explicit cast as decoration
+	 * --- this is because a simple integer constant is ambiguous (and will be
+	 * misinterpreted by findTargetlistEntry()) if we dump it without any
+	 * decoration.  If it's anything more complex than a simple Var, then force
+	 * extra parens around it, to ensure it can't be misinterpreted as a cube()
+	 * or rollup() construct.
 	 */
 	if (force_colno)
 	{
@@ -4902,8 +4903,27 @@ get_rule_sortgroupclause(Index ref, List *tlist, bool force_colno,
 	}
 	else if (expr && IsA(expr, Const))
 		get_const_expr((Const *) expr, context, 1);
+	else if (!expr || IsA(expr, Var))
+		get_rule_expr(expr, context, true);
 	else
+	{
+		/*
+		 * We must force parens for function-like expressions even if
+		 * PRETTY_PAREN is off, since those are the ones in danger of
+		 * misparsing. For other expressions we need to force them
+		 * only if PRETTY_PAREN is on, since otherwise the expression
+		 * will output them itself. (We can't skip the parens.)
+		 */
+		bool	need_paren = (PRETTY_PAREN(context)
+							  || IsA(expr, FuncExpr)
+							  || IsA(expr, Aggref)
+							  || IsA(expr, WindowFunc));
+		if (need_paren)
+			appendStringInfoString(context->buf, "(");
 		get_rule_expr(expr, context, true);
+		if (need_paren)
+			appendStringInfoString(context->buf, ")");
+	}
 
 	return expr;
 }
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index e38b6bc..5ea1067 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -98,7 +98,7 @@ PG_KEYWORD("cost", COST, UNRESERVED_KEYWORD)
 PG_KEYWORD("create", CREATE, RESERVED_KEYWORD)
 PG_KEYWORD("cross", CROSS, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("csv", CSV, UNRESERVED_KEYWORD)
-PG_KEYWORD("cube", CUBE, COL_NAME_KEYWORD)
+PG_KEYWORD("cube", CUBE, UNRESERVED_KEYWORD)
 PG_KEYWORD("current", CURRENT_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("current_catalog", CURRENT_CATALOG, RESERVED_KEYWORD)
 PG_KEYWORD("current_date", CURRENT_DATE, RESERVED_KEYWORD)
@@ -324,7 +324,7 @@ PG_KEYWORD("revoke", REVOKE, UNRESERVED_KEYWORD)
 PG_KEYWORD("right", RIGHT, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("role", ROLE, UNRESERVED_KEYWORD)
 PG_KEYWORD("rollback", ROLLBACK, UNRESERVED_KEYWORD)
-PG_KEYWORD("rollup", ROLLUP, COL_NAME_KEYWORD)
+PG_KEYWORD("rollup", ROLLUP, UNRESERVED_KEYWORD)
 PG_KEYWORD("row", ROW, COL_NAME_KEYWORD)
 PG_KEYWORD("rows", ROWS, UNRESERVED_KEYWORD)
 PG_KEYWORD("rule", RULE, UNRESERVED_KEYWORD)
#51 Tomas Vondra
tv@fuzzy.cz
In reply to: Andrew Gierth (#50)
Re: Final Patch for GROUPING SETS - unrecognized node type: 347

On 31.8.2014 22:52, Andrew Gierth wrote:

> Recut patches:
>
> gsp1.patch - phase 1 code patch (full syntax, limited functionality)
> gsp2.patch - phase 2 code patch (adds full functionality using the
>              new chained aggregate mechanism)
> gsp-doc.patch - docs
> gsp-contrib.patch - quote "cube" in contrib/cube and contrib/earthdistance,
>                     intended primarily for testing pending a decision on
>                     renaming contrib/cube or unreserving keywords
> gsp-u.patch - proposed method to unreserve CUBE and ROLLUP
>
> (the contrib patch is not necessary if the -u patch is used; the
> contrib/pg_stat_statements fixes are in the phase1 patch)

Hi,

I looked at the patch today.

The good news is it seems to apply cleanly on HEAD (with some small
offsets, but no conflicts). The code generally seems OK to me, although
the patch is quite massive. I've also done a considerable amount of
testing and have been unable to cause failures.

I have significant doubts about the whole design, though. Especially the
decision not to use HashAggregate, and the whole chaining idea. I
haven't noticed any discussion about this (at least in this thread), and
the chaining idea was not mentioned until 21/8, so I'd appreciate some
reasoning behind this choice.

I assume the "no HashAggregate" decision was done because of fear of
underestimates, and the related OOM issues. I don't see how this is
different from the general HashAggregate, though. Or is there another
reason for this?

Now, the chaining only makes this worse, because it effectively forces a
separate sort of the whole table for each grouping set.

We're doing a lot of analytics on large tables, where large means tens
of GBs and hundreds of millions of rows. What we do now at the moment is
basically the usual ROLAP approach - create a cube with aggregated data,
which is usually much smaller than the source table, and then compute
the rollups for the interesting slices in a second step.

I was hoping that maybe we could eventually replace this with the GROUP
BY CUBE functionality provided by this patch, but these design decisions
make it pretty much impossible. I believe most other users processing
non-trivial amounts of data (pretty much everyone with just a few
million rows) will be in a similar position :-(

What I envisioned when considering hacking on this a few months back,
was extending the aggregate API with "merge state" function, doing the
aggregation just like today and merging the groups (for each cell) at
the end. Yeah, we don't have this infrastructure, but maybe it'd be a
better way than the current chaining approach. And it was repeatedly
mentioned as necessary for parallel aggregation (and even mentioned in
the memory-bounded hashagg batching discussion). I'm ready to spend some
time on this, if it makes the grouping sets useful for us.
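What this would look like in miniature (a Python sketch of the idea, not PostgreSQL code, and with all names invented here): each aggregate exposes an accumulate function, a merge function over partial states, and a finalizer, so coarser groups can be derived by merging per-group partial states instead of rescanning the input:

```python
# Sketch of a "mergeable" aggregate: avg keeps (sum, count) as its
# internal state, and two partial states merge componentwise.

def avg_accum(state, value):
    s, n = state
    return (s + value, n + 1)

def avg_merge(a, b):
    return (a[0] + b[0], a[1] + b[1])

def avg_final(state):
    s, n = state
    return s / n if n else None

# Aggregate the finest grouping once...
rows = [("x", 1), ("x", 3), ("y", 10)]
states = {}
for key, v in rows:
    states[key] = avg_accum(states.get(key, (0, 0)), v)

# ...then derive the coarser (grand-total) group by merging the
# partial states rather than rescanning the table.
total = (0, 0)
for st in states.values():
    total = avg_merge(total, st)

print(avg_final(total))  # 14/3
```

For avg the merge is trivial; the objection raised in the replies is that arbitrary non-trivial user-defined aggregates need not have such a merge operation.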

regards
Tomas

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#52 Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Tomas Vondra (#51)
Re: Final Patch for GROUPING SETS - unrecognized node type: 347

"Tomas" == Tomas Vondra <tv@fuzzy.cz> writes:

Tomas> I have significant doubts about the whole design,
Tomas> though. Especially the decision not to use HashAggregate,

There is no "decision not to use HashAggregate". There is simply no
support for HashAggregate yet.

Having it be able to work with GroupAggregate is essential, because
there are always cases where HashAggregate is simply not permitted
(e.g. when using distinct or sorted aggs; or unhashable types; or with
the current code, when the estimated memory usage exceeds work_mem).
HashAggregate may be a performance improvement, but it's something
that can be added afterwards rather than an essential part of the
feature.

Tomas> Now, the chaining only makes this worse, because it
Tomas> effectively forces a separate sort of the whole table for each
Tomas> grouping set.

It's not one sort per grouping set, it's the minimal number of sorts
needed to express the result as a union of ROLLUP clauses. The planner
code will (I believe) always find the smallest number of sorts needed.

Each aggregate node can process any number of grouping sets as long as
they represent a single rollup list (and therefore share a single sort
order).
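The claim above can be made concrete with a small sketch (illustrative Python, not the planner code): the eight grouping sets of CUBE(a,b,c) decompose into three chains of nested sets, and each chain is one ROLLUP-style pass over a single sort order.

```python
# The 8 grouping sets of CUBE(a,b,c) covered by 3 nested chains,
# so 3 sorts suffice; each chain is one pass over sorted input.
from itertools import chain, combinations

cols = ("a", "b", "c")
grouping_sets = {frozenset(c) for c in chain.from_iterable(
    combinations(cols, n) for n in range(len(cols) + 1))}

# One possible decomposition into chains of nested sets:
chains = [
    [{"a", "b", "c"}, {"a", "b"}, {"a"}, set()],  # sort by a, b, c
    [{"a", "c"}, {"c"}],                          # sort by c, a
    [{"b", "c"}, {"b"}],                          # sort by b, c
]

covered = set()
for ch in chains:
    for big, small in zip(ch, ch[1:]):
        assert small <= big          # nested => one sort order works
    covered |= {frozenset(s) for s in ch}

assert covered == grouping_sets
print(len(chains), "sorts for", len(grouping_sets), "grouping sets")
```

Three is also minimal here: the two-column sets {a,b}, {a,c}, {b,c} are pairwise non-nested, so no two of them can share a chain.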

Yes, this is slower than using one hashagg. But it solves the general
problem in a way that does not interfere with future optimization.

(HashAggregate can be added to the current implementation by first
adding executor support for hashagg with multiple grouping sets, then
in the planner, extracting as many hashable grouping sets as possible
from the list before looking for rollup lists. The chained aggregate
code can work just fine with a HashAggregate as the chain head.

We have not actually tackled this, since I'm not going to waste any
time adding optimizations before the basic idea is accepted.)

Tomas> What I envisioned when considering hacking on this a few
Tomas> months back, was extending the aggregate API with "merge
Tomas> state" function,

That's not really on the cards for arbitrary non-trivial aggregate
functions.

Yes, it can be done for simple ones, and if you want to use that as a
basis for adding optimizations that's fine. But a solution that ONLY
works in simple cases isn't sufficient, IMO.

--
Andrew (irc:RhodiumToad)


#53 Tomas Vondra
tv@fuzzy.cz
In reply to: Andrew Gierth (#52)
Re: Final Patch for GROUPING SETS - unrecognized node type: 347

On 6.9.2014 23:34, Andrew Gierth wrote:

"Tomas" == Tomas Vondra <tv@fuzzy.cz> writes:

Tomas> I have significant doubts about the whole design,
Tomas> though. Especially the decision not to use HashAggregate,

There is no "decision not to use HashAggregate". There is simply no
support for HashAggregate yet.

Having it be able to work with GroupAggregate is essential, because
there are always cases where HashAggregate is simply not permitted
(e.g. when using distinct or sorted aggs; or unhashable types; or with
the current code, when the estimated memory usage exceeds work_mem).
HashAggregate may be a performance improvement, but it's something
that can be added afterwards rather than an essential part of the
feature.

Ah, OK. I got confused by the "final patch" subject, and so the
possibility of additional optimization somehow didn't occur to me.

> Tomas> Now, the chaining only makes this worse, because it
> Tomas> effectively forces a separate sort of the whole table for each
> Tomas> grouping set.
>
> It's not one sort per grouping set, it's the minimal number of sorts
> needed to express the result as a union of ROLLUP clauses. The planner
> code will (I believe) always find the smallest number of sorts needed.

You're probably right. Although when doing GROUP BY CUBE(a,b,c,a) I
get one more ChainAggregate than with CUBE(a,b,c), and we seem to
compute all the aggregates twice. Not sure if we need to address this,
though, because it's mostly the user's fault.

> Each aggregate node can process any number of grouping sets as long as
> they represent a single rollup list (and therefore share a single sort
> order).
>
> Yes, this is slower than using one hashagg. But it solves the general
> problem in a way that does not interfere with future optimization.
>
> (HashAggregate can be added to the current implementation by first
> adding executor support for hashagg with multiple grouping sets, then
> in the planner, extracting as many hashable grouping sets as possible
> from the list before looking for rollup lists. The chained aggregate
> code can work just fine with a HashAggregate as the chain head.
>
> We have not actually tackled this, since I'm not going to waste any
> time adding optimizations before the basic idea is accepted.)

OK, understood.

> Tomas> What I envisioned when considering hacking on this a few
> Tomas> months back, was extending the aggregate API with "merge
> Tomas> state" function,
>
> That's not really on the cards for arbitrary non-trivial aggregate
> functions.
>
> Yes, it can be done for simple ones, and if you want to use that as a
> basis for adding optimizations that's fine. But a solution that ONLY
> works in simple cases isn't sufficient, IMO.

I believe it can be done for most aggregates, assuming you have access
to the internal state somehow (not just the final value). Adding it for
the in-core aggregates would not be difficult in most cases. But you're
right that we don't have this at all now.

regards
Tomas

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#54Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Tomas Vondra (#53)
Re: Final Patch for GROUPING SETS - unrecognized node type: 347

"Tomas" == Tomas Vondra <tv@fuzzy.cz> writes:

It's not one sort per grouping set, it's the minimal number of
sorts needed to express the result as a union of ROLLUP
clauses. The planner code will (I believe) always find the
smallest number of sorts needed.

Tomas> You're probably right. Although when doing GROUP BY CUBE
Tomas> (a,b,c,a) I get one more ChainAggregate than with
Tomas> CUBE(a,b,c), and we seem to compute all the aggregates
Tomas> twice. Not sure if we need to address this though, because
Tomas> it's mostly the user's fault.

Hm. Yeah, you're right that the number of sorts is not optimal there.
We can look into that.

As for computing it all twice, there's currently no attempt to
optimize multiple identical grouping sets into multiple projections of
a single grouping set result. CUBE(a,b,c,a) has twice as many grouping
sets as CUBE(a,b,c) does, even though all the extra ones are duplicates.
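
For concreteness, expanding CUBE positionally shows where the duplicates
come from (a small Python sketch; `cube_sets` is illustrative only, not
the patch code):

```python
from itertools import combinations

def cube_sets(cols):
    """Expand CUBE(cols) positionally into its grouping sets: every
    subset of the column list, so a repeated column yields repeated sets."""
    return [c for n in range(len(cols), -1, -1)
            for c in combinations(cols, n)]

print(len(cube_sets(["a", "b", "c"])))       # 8 grouping sets
print(len(cube_sets(["a", "b", "c", "a"])))  # 16: twice as many, extras duplicated
```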

--
Andrew (irc:RhodiumToad)


#55Tomas Vondra
tv@fuzzy.cz
In reply to: Andrew Gierth (#54)
Re: Final Patch for GROUPING SETS - unrecognized node type: 347

On 7.9.2014 15:11, Andrew Gierth wrote:

"Tomas" == Tomas Vondra <tv@fuzzy.cz> writes:

It's not one sort per grouping set, it's the minimal number of
sorts needed to express the result as a union of ROLLUP
clauses. The planner code will (I believe) always find the
smallest number of sorts needed.

Tomas> You're probably right. Although when doing GROUP BY CUBE
Tomas> (a,b,c,a) I get one more ChainAggregate than with
Tomas> CUBE(a,b,c), and we seem to compute all the aggregates
Tomas> twice. Not sure if we need to address this though, because
Tomas> it's mostly the user's fault.

Hm. Yeah, you're right that the number of sorts is not optimal
there. We can look into that.

I don't think it's very critical, though. I was worried about it because
of the sorts, but if that gets tackled in patches following this
commitfest it seems OK.

As for computing it all twice, there's currently no attempt to
optimize multiple identical grouping sets into multiple projections
of a single grouping set result. CUBE(a,b,c,a) has twice as many
grouping sets as CUBE(a,b,c) does, even though all the extra ones are
duplicates.

Shouldn't this be solved by eliminating the excess ChainAggregate?
Although it probably changes GROUPING(...), so it's not just about
removing the duplicate column(s) from the CUBE.

Maybe preventing this completely (i.e. raising an ERROR with "duplicate
columns in CUBE/ROLLUP/... clauses") would be appropriate. Does the
standard say anything about this?

But arguably this is a minor issue, happening only when the user uses
the same column/expression twice. Hopefully the users don't do that too
often.

regards
Tomas


#56Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Tomas Vondra (#55)
Re: Final Patch for GROUPING SETS - unrecognized node type: 347

"Tomas" == Tomas Vondra <tv@fuzzy.cz> writes:

As for computing it all twice, there's currently no attempt to
optimize multiple identical grouping sets into multiple
projections of a single grouping set result. CUBE(a,b,c,a) has
twice as many grouping sets as CUBE(a,b,c) does, even though all
the extra ones are duplicates.

Tomas> Shouldn't this be solved by eliminating the excessive
Tomas> ChainAggregate? Although it probably changes GROUPING(...),
Tomas> so it's not just about removing the duplicate column(s) from
Tomas> the CUBE.

Eliminating the excess ChainAggregate would not change the number of
grouping sets, only where they are computed.

Tomas> Maybe preventing this completely (i.e. raising an ERROR with
Tomas> "duplicate columns in CUBE/ROLLUP/... clauses") would be
Tomas> appropriate. Does the standard say anything about this?

The spec does not say anything explicitly about duplicates, so they
are allowed (and duplicate grouping _sets_ can't be removed, only
duplicate columns within a single GROUP BY clause after the grouping
sets have been eliminated by transformation). I have checked my
reading of the spec against Oracle 11 and MSSQL using sqlfiddle.

The way the spec handles grouping sets is to define a sequence of
syntactic transforms that result in a query which is a UNION ALL of
ordinary GROUP BY queries. (We haven't tried to implement the
additional optional feature of GROUP BY DISTINCT.) Since it's UNION
ALL, any duplicates must be preserved, so a query with GROUPING SETS
((a),(a)) reduces to:

SELECT ... GROUP BY a UNION ALL SELECT ... GROUP BY a;

and therefore has duplicates of all its result rows.
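
The spec's transform can be mimicked directly (a Python sketch of the
UNION ALL semantics described above; names are mine, not anything in the
patch):

```python
def grouping_sets_union_all(rows, sets):
    """Evaluate GROUP BY GROUPING SETS (...) as the spec defines it:
    one ordinary GROUP BY (here: count(*)) per grouping set, results
    concatenated as UNION ALL, so duplicate sets duplicate the rows."""
    result = []
    for gset in sets:                        # one GROUP BY query per set
        groups = {}
        for row in rows:
            key = tuple(row[k] for k in gset)
            groups[key] = groups.get(key, 0) + 1
        result.extend((gset, key, n) for key, n in groups.items())
    return result

rows = [{"a": 1}, {"a": 1}, {"a": 2}]
out = grouping_sets_union_all(rows, [("a",), ("a",)])
# GROUPING SETS ((a),(a)): every result row appears twice
```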

I'm quite prepared to concede that I may have read the spec wrong
(wouldn't be the first time), but in this case I require any such
claim to be backed up by an example from some other db showing an
actual difference in behavior.

--
Andrew (irc:RhodiumToad)


#57Tomas Vondra
tv@fuzzy.cz
In reply to: Andrew Gierth (#56)
Re: Final Patch for GROUPING SETS - unrecognized node type: 347

On 7.9.2014 18:52, Andrew Gierth wrote:

"Tomas" == Tomas Vondra <tv@fuzzy.cz> writes:

Tomas> Maybe preventing this completely (i.e. raising an ERROR with
Tomas> "duplicate columns in CUBE/ROLLUP/... clauses") would be
Tomas> appropriate. Does the standard say anything about this?

The spec does not say anything explicitly about duplicates, so they
are allowed (and duplicate grouping _sets_ can't be removed, only
duplicate columns within a single GROUP BY clause after the grouping
sets have been eliminated by transformation). I have checked my
reading of the spec against Oracle 11 and MSSQL using sqlfiddle.

The way the spec handles grouping sets is to define a sequence of
syntactic transforms that result in a query which is a UNION ALL of
ordinary GROUP BY queries. (We haven't tried to implement the
additional optional feature of GROUP BY DISTINCT.) Since it's UNION
ALL, any duplicates must be preserved, so a query with GROUPING SETS
((a),(a)) reduces to:

SELECT ... GROUP BY a UNION ALL SELECT ... GROUP BY a;

and therefore has duplicates of all its result rows.

I'm quite prepared to concede that I may have read the spec wrong
(wouldn't be the first time), but in this case I require any such
claim to be backed up by an example from some other db showing an
actual difference in behavior.

I think you read the spec right. Apparently duplicate grouping sets are
allowed, and it's supposed to output that grouping set twice.

The sections on ROLLUP/CUBE do not mention duplicates at all; they only
explain how to generate all the possible grouping sets, so if you have
duplicate columns there, you'll get duplicate sets (which is allowed).

If we can get rid of the excess ChainAggregate, that's certainly
enough for now.

Optimizing it could be simple, though - you don't need to keep the
duplicate groups; you only need to keep a counter "how many times to
output this group". But the more I think about it, the more I think we
can ignore that. There are far more important pieces to implement, and
if you write bad SQL there's no help anyway.
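
That counter idea amounts to something like this (a minimal sketch under
the stated assumption; `collapse_duplicate_sets` is a hypothetical name):

```python
from collections import Counter

def collapse_duplicate_sets(grouping_sets):
    """Keep each distinct grouping set once with a multiplicity, so it
    is computed once and its result rows emitted that many times."""
    return list(Counter(grouping_sets).items())

print(collapse_duplicate_sets([("a",), ("a",), ("a", "b")]))
```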

regards
Tomas


#58Robert Haas
robertmhaas@gmail.com
In reply to: Andrew Gierth (#8)
Re: WIP Patch for GROUPING SETS phase 1

On Thu, Aug 21, 2014 at 11:01 AM, Andrew Gierth
<andrew@tao11.riddles.org.uk> wrote:

"Heikki" == Heikki Linnakangas <hlinnakangas@vmware.com> writes:

Heikki> Uh, that's ugly. The EXPLAIN output I mean; as an implementation
Heikki> detail chaining the nodes might be reasonable. But the above
Heikki> gets unreadable if you have more than a few grouping sets.

It's good for highlighting performance issues in EXPLAIN, too.

Perhaps so, but that doesn't take away from Heikki's point: it's still
ugly. I don't understand why the sorts can't all be nested under the
GroupAggregate nodes. We have a number of nodes already (e.g. Append)
that support an arbitrary number of children, and I don't see why we
can't do the same thing here.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#59Pavel Stehule
pavel.stehule@gmail.com
In reply to: Robert Haas (#58)
Re: WIP Patch for GROUPING SETS phase 1

2014-09-09 16:01 GMT+02:00 Robert Haas <robertmhaas@gmail.com>:

On Thu, Aug 21, 2014 at 11:01 AM, Andrew Gierth
<andrew@tao11.riddles.org.uk> wrote:

"Heikki" == Heikki Linnakangas <hlinnakangas@vmware.com> writes:

Heikki> Uh, that's ugly. The EXPLAIN output I mean; as an implementation
Heikki> detail chaining the nodes might be reasonable. But the above
Heikki> gets unreadable if you have more than a few grouping sets.

It's good for highlighting performance issues in EXPLAIN, too.

Perhaps so, but that doesn't take away from Heikki's point: it's still
ugly. I don't understand why the sorts can't all be nested under the
GroupAggregate nodes. We have a number of nodes already (e.g. Append)
that support an arbitrary number of children, and I don't see why we
can't do the same thing here.

I don't think showing sort and aggregation is a bad idea. Both can have
different performance impacts.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#60Robert Haas
robertmhaas@gmail.com
In reply to: Pavel Stehule (#59)
Re: WIP Patch for GROUPING SETS phase 1

On Tue, Sep 9, 2014 at 11:19 AM, Pavel Stehule <pavel.stehule@gmail.com> wrote:

2014-09-09 16:01 GMT+02:00 Robert Haas <robertmhaas@gmail.com>:

On Thu, Aug 21, 2014 at 11:01 AM, Andrew Gierth
<andrew@tao11.riddles.org.uk> wrote:

"Heikki" == Heikki Linnakangas <hlinnakangas@vmware.com> writes:

Heikki> Uh, that's ugly. The EXPLAIN output I mean; as an implementation
Heikki> detail chaining the nodes might be reasonable. But the above
Heikki> gets unreadable if you have more than a few grouping sets.

It's good for highlighting performance issues in EXPLAIN, too.

Perhaps so, but that doesn't take away from Heikki's point: it's still
ugly. I don't understand why the sorts can't all be nested under the
GroupAggregate nodes. We have a number of nodes already (e.g. Append)
that support an arbitrary number of children, and I don't see why we
can't do the same thing here.

I don't think showing sort and aggregation is a bad idea. Both can have
different performance impacts.

Sure, showing the sort and aggregation steps is fine. But I don't see
what advantage we get out of showing them like this:

Aggregate
-> Sort
-> ChainAggregate
-> Sort
-> ChainAggregate
-> Sort

When we could show them like this:

Aggregate
-> Sort
-> Sort
-> Sort

From both a display perspective and an implementation-complexity
perspective, it seems appealing to have the Aggregate node feed the
data to one sort after another, rather than having it send the data down a
very deep pipe.

I might be missing something, of course. I don't want to presume that
I'm smarter than Andrew, because Andrew is pretty smart. :-) But it
seems odd to me.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#61Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Robert Haas (#60)
Re: WIP Patch for GROUPING SETS phase 1

"Robert" == Robert Haas <robertmhaas@gmail.com> writes:

Robert> Sure, showing the sort and aggregation steps is fine. But I
Robert> don't see what advantage we get out of showing them like
Robert> this:

Robert> Aggregate
Robert> -> Sort
Robert> -> ChainAggregate
Robert> -> Sort
Robert> -> ChainAggregate
Robert> -> Sort

The advantage is that this is how the plan tree is actually
structured.

Robert> When we could show them like this:

Robert> Aggregate
Robert> -> Sort
Robert> -> Sort
Robert> -> Sort

And we can't structure the plan tree like this, because then it
wouldn't be a _tree_ any more.

The Sort node expects to have a child node to fetch rows from, and it
expects all the usual plan tree mechanics (initialization, rescan,
etc.) to work on that child node. There's no way for the parent to
feed data to the child.

Robert> From both a display perspective and an
Robert> implementation-complexity perspective,

... says the person who has never tried implementing it.

Honestly, ChainAggregate is _trivial_ compared to trying to make the
GroupAggregate code deal with multiple inputs, or trying to make some
new sort of plumbing node to feed input to those sorts. (You'd think
that it should be possible to use the existing CTE mechanics to do it,
but noooo... the existing code is actively and ferociously hostile to
the idea of adding new CTEs from within the planner.)

--
Andrew (irc:RhodiumToad)


#62Robert Haas
robertmhaas@gmail.com
In reply to: Andrew Gierth (#61)
Re: WIP Patch for GROUPING SETS phase 1

On Tue, Sep 9, 2014 at 12:01 PM, Andrew Gierth
<andrew@tao11.riddles.org.uk> wrote:

"Robert" == Robert Haas <robertmhaas@gmail.com> writes:

Robert> Sure, showing the sort and aggregation steps is fine. But I
Robert> don't see what advantage we get out of showing them like
Robert> this:

Robert> Aggregate
Robert> -> Sort
Robert> -> ChainAggregate
Robert> -> Sort
Robert> -> ChainAggregate
Robert> -> Sort

The advantage is that this is how the plan tree is actually
structured.

I do understand that. I am questioning (as I believe Heikki was also)
whether it's structured correctly. Nobody is arguing for displaying
the plan tree in a way that doesn't mirror its actual structure, or
at least I am not.

The Sort node expects to have a child node to fetch rows from, and it
expects all the usual plan tree mechanics (initialization, rescan,
etc.) to work on that child node. There's no way for the parent to
feed data to the child.

OK, good point. So we do need something that can feed data from one
part of the plan tree to another, like a CTE does. I still think it
would be worth trying to see if there's a reasonable way to structure
the plan tree so that it's flatter.

Robert> From both a display perspective and an
Robert> implementation-complexity perspective,

... says the person who has never tried implementing it.

This comment to me reads rather sharply, and I don't feel that the
tone of my email is such as to merit a rebuke.

Honestly, ChainAggregate is _trivial_ compared to trying to make the
GroupAggregate code deal with multiple inputs, or trying to make some
new sort of plumbing node to feed input to those sorts. (You'd think
that it should be possible to use the existing CTE mechanics to do it,
but noooo... the existing code is actively and ferociously hostile to
the idea of adding new CTEs from within the planner.)

That's unfortunate.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#63Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#62)
Re: WIP Patch for GROUPING SETS phase 1

Robert Haas <robertmhaas@gmail.com> writes:

On Tue, Sep 9, 2014 at 12:01 PM, Andrew Gierth
<andrew@tao11.riddles.org.uk> wrote:

Honestly, ChainAggregate is _trivial_ compared to trying to make the
GroupAggregate code deal with multiple inputs, or trying to make some
new sort of plumbing node to feed input to those sorts. (You'd think
that it should be possible to use the existing CTE mechanics to do it,
but noooo... the existing code is actively and ferociously hostile to
the idea of adding new CTEs from within the planner.)

That's unfortunate.

I'm less than convinced that it's true ... I've been meaning to find
time to review this patch, but it sounds like it's getting to the point
where I need to.

regards, tom lane


#64Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Tom Lane (#63)
Re: WIP Patch for GROUPING SETS phase 1

"Tom" == Tom Lane <tgl@sss.pgh.pa.us> writes:

Honestly, ChainAggregate is _trivial_ compared to trying to make the
GroupAggregate code deal with multiple inputs, or trying to make some
new sort of plumbing node to feed input to those sorts. (You'd think
that it should be possible to use the existing CTE mechanics to do it,
but noooo... the existing code is actively and ferociously hostile to
the idea of adding new CTEs from within the planner.)

That's unfortunate.

Tom> I'm less than convinced that it's true ...

Maybe you can figure out how, but I certainly didn't see a reasonable way.

I would also question one aspect of the desirability - using the CTE
mechanism has the downside of needing an extra tuplestore holding the
full input data set, whereas the chain mechanism's tuplestore contains
only aggregated data, which should be much smaller.

--
Andrew (irc:RhodiumToad)


#65Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Tomas Vondra (#57)
Re: Final Patch for GROUPING SETS - unrecognized node type: 347

"Tomas" == Tomas Vondra <tv@fuzzy.cz> writes:

Tomas> If we can get rid of the excessive ChainAggregate, that's
Tomas> certainly enough for now.

I found an algorithm that should provably give the minimal number of sorts
(I was afraid that problem would turn out to be NP-hard, but not so - it's
solvable in P by reducing it to a problem of maximal matching in bipartite
graphs).
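
(For illustration only - a Python sketch of the classic reduction, not the
patch's planner code: model each grouping set as a node, with an edge when
one set is a strict subset of another, so the two can share a chain whose
sort puts the subset's columns first; the minimum chain cover is then
n minus a maximum bipartite matching, by Dilworth/Konig.)

```python
def min_sort_chains(sets):
    """Minimum number of rollup chains (hence sorts) covering the given
    grouping sets: minimum path cover of the strict-subset DAG, computed
    as n - maximum bipartite matching (Kuhn's augmenting paths)."""
    n = len(sets)
    adj = [[j for j in range(n) if sets[i] < sets[j]] for i in range(n)]
    match = [-1] * n                   # right vertex j -> matched left vertex

    def augment(i, seen):
        for j in adj[i]:
            if j not in seen:
                seen.add(j)
                if match[j] == -1 or augment(match[j], seen):
                    match[j] = i
                    return True
        return False

    matched = sum(augment(i, set()) for i in range(n))
    return n - matched

cube3 = [frozenset(s) for s in
         [(), ("a",), ("b",), ("c",), ("a", "b"), ("a", "c"),
          ("b", "c"), ("a", "b", "c")]]
print(min_sort_chains(cube3))   # CUBE(a,b,c): three sort orders suffice
```

The actual planner must also respect column order within a sort, but a
nested family of sets can always be linearized so each set is a prefix,
which is why subset-compatibility models "fits in one ChainAggregate".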

Updated patch should be forthcoming in a day or two.

--
Andrew (irc:RhodiumToad)


#66Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Andrew Gierth (#65)
5 attachment(s)
Re: Final Patch for GROUPING SETS

Changes since previous post:

gsp2.patch: code to generate sort chains updated to guarantee minimal
number of sort steps

Recut patches:

gsp1.patch - phase 1 code patch (full syntax, limited functionality)
gsp2.patch - phase 2 code patch (adds full functionality using the
new chained aggregate mechanism)
gsp-doc.patch - docs
gsp-contrib.patch - quote "cube" in contrib/cube and contrib/earthdistance,
intended primarily for testing pending a decision on
renaming contrib/cube or unreserving keywords
gsp-u.patch - proposed method to unreserve CUBE and ROLLUP

(the contrib patch is not necessary if the -u patch is used; the
contrib/pg_stat_statements fixes are in the phase1 patch)

--
Andrew (irc:RhodiumToad)

Attachments:

gsp1.patch (text/x-patch):
diff --git a/contrib/pg_stat_statements/pg_stat_statements.c b/contrib/pg_stat_statements/pg_stat_statements.c
index 799242b..9419656 100644
--- a/contrib/pg_stat_statements/pg_stat_statements.c
+++ b/contrib/pg_stat_statements/pg_stat_statements.c
@@ -2200,6 +2200,7 @@ JumbleQuery(pgssJumbleState *jstate, Query *query)
 	JumbleExpr(jstate, (Node *) query->targetList);
 	JumbleExpr(jstate, (Node *) query->returningList);
 	JumbleExpr(jstate, (Node *) query->groupClause);
+	JumbleExpr(jstate, (Node *) query->groupingSets);
 	JumbleExpr(jstate, query->havingQual);
 	JumbleExpr(jstate, (Node *) query->windowClause);
 	JumbleExpr(jstate, (Node *) query->distinctClause);
@@ -2655,6 +2656,28 @@ JumbleExpr(pgssJumbleState *jstate, Node *node)
 				JumbleExpr(jstate, rtfunc->funcexpr);
 			}
 			break;
+		case T_GroupingSet:
+			{
+				GroupingSet *gsnode = (GroupingSet *) node;
+
+				JumbleExpr(jstate, (Node *) gsnode->content);
+			}
+			break;
+		case T_Grouping:
+			{
+				Grouping *grpnode = (Grouping *) node;
+
+				JumbleExpr(jstate, (Node *) grpnode->refs);
+			}
+			break;
+		case T_IntList:
+			{
+				foreach(temp, (List *) node)
+				{
+					APP_JUMB(lfirst_int(temp));
+				}
+			}
+			break;
 		default:
 			/* Only a warning, since we can stumble along anyway */
 			elog(WARNING, "unrecognized node type: %d",
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 781a736..479ae7e 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -78,6 +78,9 @@ static void show_merge_append_keys(MergeAppendState *mstate, List *ancestors,
 					   ExplainState *es);
 static void show_agg_keys(AggState *astate, List *ancestors,
 			  ExplainState *es);
+static void show_grouping_set_keys(PlanState *planstate, const char *qlabel,
+				int nkeys, AttrNumber *keycols, List *gsets,
+				List *ancestors, ExplainState *es);
 static void show_group_keys(GroupState *gstate, List *ancestors,
 				ExplainState *es);
 static void show_sort_group_keys(PlanState *planstate, const char *qlabel,
@@ -1778,17 +1781,80 @@ show_agg_keys(AggState *astate, List *ancestors,
 {
 	Agg		   *plan = (Agg *) astate->ss.ps.plan;
 
-	if (plan->numCols > 0)
+	if (plan->numCols > 0 || plan->groupingSets)
 	{
 		/* The key columns refer to the tlist of the child plan */
 		ancestors = lcons(astate, ancestors);
-		show_sort_group_keys(outerPlanState(astate), "Group Key",
-							 plan->numCols, plan->grpColIdx,
-							 ancestors, es);
+		if (plan->groupingSets)
+			show_grouping_set_keys(outerPlanState(astate), "Grouping Sets",
+								   plan->numCols, plan->grpColIdx,
+								   plan->groupingSets,
+								   ancestors, es);
+		else
+			show_sort_group_keys(outerPlanState(astate), "Group Key",
+								 plan->numCols, plan->grpColIdx,
+								 ancestors, es);
 		ancestors = list_delete_first(ancestors);
 	}
 }
 
+static void
+show_grouping_set_keys(PlanState *planstate, const char *qlabel,
+					   int nkeys, AttrNumber *keycols, List *gsets,
+					   List *ancestors, ExplainState *es)
+{
+	Plan	   *plan = planstate->plan;
+	List	   *context;
+	List	   *result = NIL;
+	bool		useprefix;
+	char	   *exprstr;
+	StringInfoData buf;
+	ListCell   *lc;
+	ListCell   *lc2;
+
+	if (gsets == NIL)
+		return;
+
+	/* Set up deparsing context */
+	context = deparse_context_for_planstate((Node *) planstate,
+											ancestors,
+											es->rtable,
+											es->rtable_names);
+	useprefix = (list_length(es->rtable) > 1 || es->verbose);
+
+	foreach(lc, gsets)
+	{
+		char *sep = "";
+
+		initStringInfo(&buf);
+		appendStringInfoString(&buf, "(");
+
+		foreach(lc2, (List *) lfirst(lc))
+		{
+			Index		i = lfirst_int(lc2);
+			AttrNumber	keyresno = keycols[i];
+			TargetEntry *target = get_tle_by_resno(plan->targetlist,
+												   keyresno);
+
+			if (!target)
+				elog(ERROR, "no tlist entry for key %d", keyresno);
+			/* Deparse the expression, showing any top-level cast */
+			exprstr = deparse_expression((Node *) target->expr, context,
+										 useprefix, true);
+
+			appendStringInfoString(&buf, sep);
+			appendStringInfoString(&buf, exprstr);
+			sep = ", ";
+		}
+
+		appendStringInfoString(&buf, ")");
+
+		result = lappend(result, buf.data);
+	}
+
+	ExplainPropertyList(qlabel, result, es);
+}
+
 /*
  * Show the grouping keys for a Group node.
  */
diff --git a/src/backend/executor/execQual.c b/src/backend/executor/execQual.c
index 7cfa63f..5fb61b0 100644
--- a/src/backend/executor/execQual.c
+++ b/src/backend/executor/execQual.c
@@ -74,6 +74,8 @@ static Datum ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
 				  bool *isNull, ExprDoneCond *isDone);
 static Datum ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
 					  bool *isNull, ExprDoneCond *isDone);
+static Datum ExecEvalScalarGroupedVarFast(ExprState *exprstate, ExprContext *econtext,
+					  bool *isNull, ExprDoneCond *isDone);
 static Datum ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate,
 					ExprContext *econtext,
 					bool *isNull, ExprDoneCond *isDone);
@@ -181,6 +183,8 @@ static Datum ExecEvalArrayCoerceExpr(ArrayCoerceExprState *astate,
 						bool *isNull, ExprDoneCond *isDone);
 static Datum ExecEvalCurrentOfExpr(ExprState *exprstate, ExprContext *econtext,
 					  bool *isNull, ExprDoneCond *isDone);
+static Datum ExecEvalGroupingExpr(GroupingState *gstate, ExprContext *econtext,
+								  bool *isNull, ExprDoneCond *isDone);
 
 
 /* ----------------------------------------------------------------
@@ -568,6 +572,8 @@ ExecEvalWindowFunc(WindowFuncExprState *wfunc, ExprContext *econtext,
  * Note: ExecEvalScalarVar is executed only the first time through in a given
  * plan; it changes the ExprState's function pointer to pass control directly
  * to ExecEvalScalarVarFast after making one-time checks.
+ *
+ * We share this code with GroupedVar for simplicity.
  * ----------------------------------------------------------------
  */
 static Datum
@@ -645,8 +651,24 @@ ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
 		}
 	}
 
-	/* Skip the checking on future executions of node */
-	exprstate->evalfunc = ExecEvalScalarVarFast;
+	if (IsA(variable, GroupedVar))
+	{
+		Assert(variable->varno == OUTER_VAR);
+
+		/* Skip the checking on future executions of node */
+		exprstate->evalfunc = ExecEvalScalarGroupedVarFast;
+
+		if (!bms_is_member(attnum, econtext->grouped_cols))
+		{
+			*isNull = true;
+			return (Datum) 0;
+		}
+	}
+	else
+	{
+		/* Skip the checking on future executions of node */
+		exprstate->evalfunc = ExecEvalScalarVarFast;
+	}
 
 	/* Fetch the value from the slot */
 	return slot_getattr(slot, attnum, isNull);
@@ -694,6 +716,31 @@ ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
 	return slot_getattr(slot, attnum, isNull);
 }
 
+static Datum
+ExecEvalScalarGroupedVarFast(ExprState *exprstate, ExprContext *econtext,
+							 bool *isNull, ExprDoneCond *isDone)
+{
+	GroupedVar *variable = (GroupedVar *) exprstate->expr;
+	TupleTableSlot *slot;
+	AttrNumber	attnum;
+
+	if (isDone)
+		*isDone = ExprSingleResult;
+
+	slot = econtext->ecxt_outertuple;
+
+	attnum = variable->varattno;
+
+	if (!bms_is_member(attnum, econtext->grouped_cols))
+	{
+		*isNull = true;
+		return (Datum) 0;
+	}
+
+	/* Fetch the value from the slot */
+	return slot_getattr(slot, attnum, isNull);
+}
+
 /* ----------------------------------------------------------------
  *		ExecEvalWholeRowVar
  *
@@ -2987,6 +3034,40 @@ ExecEvalCaseTestExpr(ExprState *exprstate,
 	return econtext->caseValue_datum;
 }
 
+/*
+ * ExecEvalGroupingExpr
+ * Return a bitmask with a bit for each column.
+ * A bit is set if the column is not a part of grouping.
+ */
+
+static Datum
+ExecEvalGroupingExpr(GroupingState *gstate,
+					 ExprContext *econtext,
+					 bool *isNull,
+					 ExprDoneCond *isDone)
+{
+	int result = 0;
+	int current_val= 0;
+	ListCell *lc;
+
+	if (isDone)
+		*isDone = ExprSingleResult;
+
+	*isNull = false;
+
+	foreach(lc, (gstate->clauses))
+	{
+		current_val = lfirst_int(lc);
+
+		result = result << 1;
+
+		if (!bms_is_member(current_val, econtext->grouped_cols))
+			result = result | 1;
+	}
+
+	return (Datum) result;
+}
+
 /* ----------------------------------------------------------------
  *		ExecEvalArray - ARRAY[] expressions
  * ----------------------------------------------------------------
@@ -4385,6 +4466,32 @@ ExecInitExpr(Expr *node, PlanState *parent)
 				state->evalfunc = ExecEvalScalarVar;
 			}
 			break;
+		case T_GroupedVar:
+			Assert(((Var *) node)->varattno != InvalidAttrNumber);
+			state = (ExprState *) makeNode(ExprState);
+			state->evalfunc = ExecEvalScalarVar;
+			break;
+		case T_Grouping:
+			{
+				Grouping	   *grp_node = (Grouping *) node;
+				GroupingState  *grp_state = makeNode(GroupingState);
+				Agg			   *agg = NULL;
+
+				if (!parent
+					|| !IsA(parent->plan, Agg))
+					elog(ERROR, "Parent of GROUPING is not Agg node");
+
+				agg = (Agg *) (parent->plan);
+
+				if (agg->groupingSets)
+					grp_state->clauses = grp_node->cols;
+				else
+					grp_state->clauses = NIL;
+
+				state = (ExprState *) grp_state;
+				state->evalfunc = (ExprStateEvalFunc) ExecEvalGroupingExpr;
+			}
+			break;
 		case T_Const:
 			state = (ExprState *) makeNode(ExprState);
 			state->evalfunc = ExecEvalConst;
diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c
index d5e1273..ad8a3d0 100644
--- a/src/backend/executor/execUtils.c
+++ b/src/backend/executor/execUtils.c
@@ -653,7 +653,7 @@ get_last_attnums(Node *node, ProjectionInfo *projInfo)
 	 * because those do not represent expressions to be evaluated within the
 	 * overall targetlist's econtext.
 	 */
-	if (IsA(node, Aggref))
+	if (IsA(node, Aggref) || IsA(node, Grouping))
 		return false;
 	if (IsA(node, WindowFunc))
 		return false;
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index 510d1c5..beecd36 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -243,7 +243,7 @@ typedef struct AggStatePerAggData
 	 * rest.
 	 */
 
-	Tuplesortstate *sortstate;	/* sort object, if DISTINCT or ORDER BY */
+	Tuplesortstate **sortstate;	/* sort object, if DISTINCT or ORDER BY */
 
 	/*
 	 * This field is a pre-initialized FunctionCallInfo struct used for
@@ -304,7 +304,8 @@ typedef struct AggHashEntryData
 
 static void initialize_aggregates(AggState *aggstate,
 					  AggStatePerAgg peragg,
-					  AggStatePerGroup pergroup);
+					  AggStatePerGroup pergroup,
+					  int numReinitialize);
 static void advance_transition_function(AggState *aggstate,
 							AggStatePerAgg peraggstate,
 							AggStatePerGroup pergroupstate);
@@ -338,81 +339,101 @@ static Datum GetAggInitVal(Datum textInitVal, Oid transtype);
 static void
 initialize_aggregates(AggState *aggstate,
 					  AggStatePerAgg peragg,
-					  AggStatePerGroup pergroup)
+					  AggStatePerGroup pergroup,
+					  int numReinitialize)
 {
 	int			aggno;
+	int         numGroupingSets = Max(aggstate->numsets, 1);
+	int         i = 0;
+
+	if (numReinitialize < 1)
+		numReinitialize = numGroupingSets;
 
 	for (aggno = 0; aggno < aggstate->numaggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &peragg[aggno];
-		AggStatePerGroup pergroupstate = &pergroup[aggno];
 
 		/*
 		 * Start a fresh sort operation for each DISTINCT/ORDER BY aggregate.
 		 */
 		if (peraggstate->numSortCols > 0)
 		{
-			/*
-			 * In case of rescan, maybe there could be an uncompleted sort
-			 * operation?  Clean it up if so.
-			 */
-			if (peraggstate->sortstate)
-				tuplesort_end(peraggstate->sortstate);
+			for (i = 0; i < numReinitialize; i++)
+			{
+				/*
+				 * In case of rescan, maybe there could be an uncompleted sort
+				 * operation?  Clean it up if so.
+				 */
+				if (peraggstate->sortstate[i])
+					tuplesort_end(peraggstate->sortstate[i]);
 
-			/*
-			 * We use a plain Datum sorter when there's a single input column;
-			 * otherwise sort the full tuple.  (See comments for
-			 * process_ordered_aggregate_single.)
-			 */
-			peraggstate->sortstate =
-				(peraggstate->numInputs == 1) ?
-				tuplesort_begin_datum(peraggstate->evaldesc->attrs[0]->atttypid,
-									  peraggstate->sortOperators[0],
-									  peraggstate->sortCollations[0],
-									  peraggstate->sortNullsFirst[0],
-									  work_mem, false) :
-				tuplesort_begin_heap(peraggstate->evaldesc,
-									 peraggstate->numSortCols,
-									 peraggstate->sortColIdx,
-									 peraggstate->sortOperators,
-									 peraggstate->sortCollations,
-									 peraggstate->sortNullsFirst,
-									 work_mem, false);
+				/*
+				 * We use a plain Datum sorter when there's a single input column;
+				 * otherwise sort the full tuple.  (See comments for
+				 * process_ordered_aggregate_single.)
+				 */
+				peraggstate->sortstate[i] =
+					(peraggstate->numInputs == 1) ?
+					tuplesort_begin_datum(peraggstate->evaldesc->attrs[0]->atttypid,
+										  peraggstate->sortOperators[0],
+										  peraggstate->sortCollations[0],
+										  peraggstate->sortNullsFirst[0],
+										  work_mem, false) :
+					tuplesort_begin_heap(peraggstate->evaldesc,
+										 peraggstate->numSortCols,
+										 peraggstate->sortColIdx,
+										 peraggstate->sortOperators,
+										 peraggstate->sortCollations,
+										 peraggstate->sortNullsFirst,
+										 work_mem, false);
+			}
 		}
 
-		/*
-		 * (Re)set transValue to the initial value.
-		 *
-		 * Note that when the initial value is pass-by-ref, we must copy it
-		 * (into the aggcontext) since we will pfree the transValue later.
+		/*
+		 * If there are multiple grouping sets, we must (re)initialize the
+		 * per-group state for each one; otherwise there is only a single
+		 * per-group state per aggregate.
 		 */
-		if (peraggstate->initValueIsNull)
-			pergroupstate->transValue = peraggstate->initValue;
-		else
+
+		for (i = 0; i < numReinitialize; i++)
 		{
-			MemoryContext oldContext;
+			AggStatePerGroup pergroupstate = &pergroup[aggno + (i * (aggstate->numaggs))];
 
-			oldContext = MemoryContextSwitchTo(aggstate->aggcontext);
-			pergroupstate->transValue = datumCopy(peraggstate->initValue,
-												  peraggstate->transtypeByVal,
-												  peraggstate->transtypeLen);
-			MemoryContextSwitchTo(oldContext);
-		}
-		pergroupstate->transValueIsNull = peraggstate->initValueIsNull;
+			/*
+			 * (Re)set transValue to the initial value.
+			 *
+			 * Note that when the initial value is pass-by-ref, we must copy it
+			 * (into the aggcontext) since we will pfree the transValue later.
+			 */
+			if (peraggstate->initValueIsNull)
+				pergroupstate->transValue = peraggstate->initValue;
+			else
+			{
+				MemoryContext oldContext;
 
-		/*
-		 * If the initial value for the transition state doesn't exist in the
-		 * pg_aggregate table then we will let the first non-NULL value
-		 * returned from the outer procNode become the initial value. (This is
-		 * useful for aggregates like max() and min().) The noTransValue flag
-		 * signals that we still need to do this.
-		 */
-		pergroupstate->noTransValue = peraggstate->initValueIsNull;
+				oldContext = MemoryContextSwitchTo(aggstate->aggcontext[i]->ecxt_per_tuple_memory);
+				pergroupstate->transValue = datumCopy(peraggstate->initValue,
+													  peraggstate->transtypeByVal,
+													  peraggstate->transtypeLen);
+				MemoryContextSwitchTo(oldContext);
+			}
+			pergroupstate->transValueIsNull = peraggstate->initValueIsNull;
+
+			/*
+			 * If the initial value for the transition state doesn't exist in the
+			 * pg_aggregate table then we will let the first non-NULL value
+			 * returned from the outer procNode become the initial value. (This is
+			 * useful for aggregates like max() and min().) The noTransValue flag
+			 * signals that we still need to do this.
+			 */
+			pergroupstate->noTransValue = peraggstate->initValueIsNull;
+		}
 	}
 }
 
 /*
- * Given new input value(s), advance the transition function of an aggregate.
+ * Given new input value(s), advance the transition function of one aggregate
+ * within one grouping set only (already set in aggstate->current_set)
  *
  * The new values (and null flags) have been preloaded into argument positions
  * 1 and up in peraggstate->transfn_fcinfo, so that we needn't copy them again
@@ -455,7 +476,7 @@ advance_transition_function(AggState *aggstate,
 			 * We must copy the datum into aggcontext if it is pass-by-ref. We
 			 * do not need to pfree the old transValue, since it's NULL.
 			 */
-			oldContext = MemoryContextSwitchTo(aggstate->aggcontext);
+			oldContext = MemoryContextSwitchTo(aggstate->aggcontext[aggstate->current_set]->ecxt_per_tuple_memory);
 			pergroupstate->transValue = datumCopy(fcinfo->arg[1],
 												  peraggstate->transtypeByVal,
 												  peraggstate->transtypeLen);
@@ -503,7 +524,7 @@ advance_transition_function(AggState *aggstate,
 	{
 		if (!fcinfo->isnull)
 		{
-			MemoryContextSwitchTo(aggstate->aggcontext);
+			MemoryContextSwitchTo(aggstate->aggcontext[aggstate->current_set]->ecxt_per_tuple_memory);
 			newVal = datumCopy(newVal,
 							   peraggstate->transtypeByVal,
 							   peraggstate->transtypeLen);
@@ -530,11 +551,13 @@ static void
 advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 {
 	int			aggno;
+	int         groupno = 0;
+	int         numGroupingSets = Max(aggstate->numsets, 1);
+	int         numAggs = aggstate->numaggs;
 
-	for (aggno = 0; aggno < aggstate->numaggs; aggno++)
+	for (aggno = 0; aggno < numAggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &aggstate->peragg[aggno];
-		AggStatePerGroup pergroupstate = &pergroup[aggno];
 		ExprState  *filter = peraggstate->aggrefstate->aggfilter;
 		int			numTransInputs = peraggstate->numTransInputs;
 		int			i;
@@ -578,13 +601,16 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 					continue;
 			}
 
-			/* OK, put the tuple into the tuplesort object */
-			if (peraggstate->numInputs == 1)
-				tuplesort_putdatum(peraggstate->sortstate,
-								   slot->tts_values[0],
-								   slot->tts_isnull[0]);
-			else
-				tuplesort_puttupleslot(peraggstate->sortstate, slot);
+			for (groupno = 0; groupno < numGroupingSets; groupno++)
+			{
+				/* OK, put the tuple into the tuplesort object */
+				if (peraggstate->numInputs == 1)
+					tuplesort_putdatum(peraggstate->sortstate[groupno],
+									   slot->tts_values[0],
+									   slot->tts_isnull[0]);
+				else
+					tuplesort_puttupleslot(peraggstate->sortstate[groupno], slot);
+			}
 		}
 		else
 		{
@@ -600,7 +626,14 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 				fcinfo->argnull[i + 1] = slot->tts_isnull[i];
 			}
 
-			advance_transition_function(aggstate, peraggstate, pergroupstate);
+			for (groupno = 0; groupno < numGroupingSets; groupno++)
+			{
+				AggStatePerGroup pergroupstate = &pergroup[aggno + (groupno * numAggs)];
+
+				aggstate->current_set = groupno;
+
+				advance_transition_function(aggstate, peraggstate, pergroupstate);
+			}
 		}
 	}
 }
@@ -623,6 +656,9 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
  * is around 300% faster.  (The speedup for by-reference types is less
  * but still noticeable.)
  *
+ * This function handles only one grouping set (already set in
+ * aggstate->current_set).
+ *
  * When called, CurrentMemoryContext should be the per-query context.
  */
 static void
@@ -642,7 +678,7 @@ process_ordered_aggregate_single(AggState *aggstate,
 
 	Assert(peraggstate->numDistinctCols < 2);
 
-	tuplesort_performsort(peraggstate->sortstate);
+	tuplesort_performsort(peraggstate->sortstate[aggstate->current_set]);
 
 	/* Load the column into argument 1 (arg 0 will be transition value) */
 	newVal = fcinfo->arg + 1;
@@ -654,7 +690,7 @@ process_ordered_aggregate_single(AggState *aggstate,
 	 * pfree them when they are no longer needed.
 	 */
 
-	while (tuplesort_getdatum(peraggstate->sortstate, true,
+	while (tuplesort_getdatum(peraggstate->sortstate[aggstate->current_set], true,
 							  newVal, isNull))
 	{
 		/*
@@ -698,8 +734,8 @@ process_ordered_aggregate_single(AggState *aggstate,
 	if (!oldIsNull && !peraggstate->inputtypeByVal)
 		pfree(DatumGetPointer(oldVal));
 
-	tuplesort_end(peraggstate->sortstate);
-	peraggstate->sortstate = NULL;
+	tuplesort_end(peraggstate->sortstate[aggstate->current_set]);
+	peraggstate->sortstate[aggstate->current_set] = NULL;
 }
 
 /*
@@ -709,6 +745,9 @@ process_ordered_aggregate_single(AggState *aggstate,
  * sort, read out the values in sorted order, and run the transition
  * function on each value (applying DISTINCT if appropriate).
  *
+ * This function handles only one grouping set (already set in
+ * aggstate->current_set).
+ *
  * When called, CurrentMemoryContext should be the per-query context.
  */
 static void
@@ -725,13 +764,13 @@ process_ordered_aggregate_multi(AggState *aggstate,
 	bool		haveOldValue = false;
 	int			i;
 
-	tuplesort_performsort(peraggstate->sortstate);
+	tuplesort_performsort(peraggstate->sortstate[aggstate->current_set]);
 
 	ExecClearTuple(slot1);
 	if (slot2)
 		ExecClearTuple(slot2);
 
-	while (tuplesort_gettupleslot(peraggstate->sortstate, true, slot1))
+	while (tuplesort_gettupleslot(peraggstate->sortstate[aggstate->current_set], true, slot1))
 	{
 		/*
 		 * Extract the first numTransInputs columns as datums to pass to the
@@ -779,8 +818,8 @@ process_ordered_aggregate_multi(AggState *aggstate,
 	if (slot2)
 		ExecClearTuple(slot2);
 
-	tuplesort_end(peraggstate->sortstate);
-	peraggstate->sortstate = NULL;
+	tuplesort_end(peraggstate->sortstate[aggstate->current_set]);
+	peraggstate->sortstate[aggstate->current_set] = NULL;
 }
 
 /*
@@ -832,7 +871,7 @@ finalize_aggregate(AggState *aggstate,
 		/* set up aggstate->curperagg for AggGetAggref() */
 		aggstate->curperagg = peraggstate;
 
-		InitFunctionCallInfoData(fcinfo, &(peraggstate->finalfn),
+		InitFunctionCallInfoData(fcinfo, &peraggstate->finalfn,
 								 numFinalArgs,
 								 peraggstate->aggCollation,
 								 (void *) aggstate, NULL);
@@ -916,7 +955,8 @@ find_unaggregated_cols_walker(Node *node, Bitmapset **colnos)
 		*colnos = bms_add_member(*colnos, var->varattno);
 		return false;
 	}
-	if (IsA(node, Aggref))		/* do not descend into aggregate exprs */
+	/* do not descend into aggregate or GROUPING exprs */
+	if (IsA(node, Aggref) || IsA(node, Grouping))
 		return false;
 	return expression_tree_walker(node, find_unaggregated_cols_walker,
 								  (void *) colnos);
@@ -946,7 +986,7 @@ build_hash_table(AggState *aggstate)
 											  aggstate->hashfunctions,
 											  node->numGroups,
 											  entrysize,
-											  aggstate->aggcontext,
+											  aggstate->aggcontext[0]->ecxt_per_tuple_memory,
 											  tmpmem);
 }
 
@@ -1057,7 +1097,7 @@ lookup_hash_entry(AggState *aggstate, TupleTableSlot *inputslot)
 	if (isnew)
 	{
 		/* initialize aggregates for new tuple group */
-		initialize_aggregates(aggstate, aggstate->peragg, entry->pergroup);
+		initialize_aggregates(aggstate, aggstate->peragg, entry->pergroup, 0);
 	}
 
 	return entry;
@@ -1131,7 +1171,13 @@ agg_retrieve_direct(AggState *aggstate)
 	AggStatePerGroup pergroup;
 	TupleTableSlot *outerslot;
 	TupleTableSlot *firstSlot;
-	int			aggno;
+	int			   aggno;
+	bool           hasRollup = aggstate->numsets > 0;
+	int            numGroupingSets = Max(aggstate->numsets, 1);
+	int            currentGroup = 0;
+	int            currentSize = 0;
+	int            numReset = 1;
+	int            i;
 
 	/*
 	 * get state info from node
@@ -1150,131 +1196,233 @@ agg_retrieve_direct(AggState *aggstate)
 	/*
 	 * We loop retrieving groups until we find one matching
 	 * aggstate->ss.ps.qual
+	 *
+	 * For grouping sets, we have the invariant that aggstate->projected_set is
+	 * either -1 (initial call) or the index (starting from 0) in gset_lengths
+	 * for the group we just completed (either by projecting a row or by
+	 * discarding it in the qual).
 	 */
 	while (!aggstate->agg_done)
 	{
 		/*
-		 * If we don't already have the first tuple of the new group, fetch it
-		 * from the outer plan.
-		 */
-		if (aggstate->grp_firstTuple == NULL)
-		{
-			outerslot = ExecProcNode(outerPlan);
-			if (!TupIsNull(outerslot))
-			{
-				/*
-				 * Make a copy of the first input tuple; we will use this for
-				 * comparisons (in group mode) and for projection.
-				 */
-				aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
-			}
-			else
-			{
-				/* outer plan produced no tuples at all */
-				aggstate->agg_done = true;
-				/* If we are grouping, we should produce no tuples too */
-				if (node->aggstrategy != AGG_PLAIN)
-					return NULL;
-			}
-		}
-
-		/*
 		 * Clear the per-output-tuple context for each group, as well as
 		 * aggcontext (which contains any pass-by-ref transvalues of the old
 		 * group).  We also clear any child contexts of the aggcontext; some
 		 * aggregate functions store working state in such contexts.
 		 *
 		 * We use ReScanExprContext not just ResetExprContext because we want
 		 * any registered shutdown callbacks to be called.  That allows
 		 * aggregate functions to ensure they've cleaned up any non-memory
 		 * resources.
 		 */
 		ReScanExprContext(econtext);
 
-		MemoryContextResetAndDeleteChildren(aggstate->aggcontext);
+		if (aggstate->projected_set >= 0 && aggstate->projected_set < numGroupingSets)
+			numReset = aggstate->projected_set + 1;
+		else
+			numReset = numGroupingSets;
+
+		for (i = 0; i < numReset; i++)
+		{
+			ReScanExprContext(aggstate->aggcontext[i]);
+			MemoryContextDeleteChildren(aggstate->aggcontext[i]->ecxt_per_tuple_memory);
+		}
 
-		/*
-		 * Initialize working state for a new input tuple group
+		/* Check if input is complete and there are no more groups to project. */
+		if (aggstate->input_done
+			&& aggstate->projected_set >= (numGroupingSets - 1))
+		{
+			aggstate->agg_done = true;
+			break;
+		}
+
+		if (aggstate->projected_set >= 0 && aggstate->projected_set < (numGroupingSets - 1))
+			currentSize = aggstate->gset_lengths[aggstate->projected_set + 1];
+		else
+			currentSize = 0;
+
+		/*-
+		 * If the group for some grouping set has ended, project its row
+		 * before consuming any more input.
+		 *
+		 * We have a new group if:
+		 *  - we're out of input but haven't projected all grouping sets
+		 *    (checked above)
+		 * OR
+		 *    - we already projected a row that wasn't from the last grouping
+		 *      set
+		 *    AND
+		 *    - the next grouping set has at least one grouping column (since
+		 *      empty grouping sets project only once input is exhausted)
+		 *    AND
+		 *    - the previous and pending rows differ on the grouping columns
+		 *      of the next grouping set
 		 */
-		initialize_aggregates(aggstate, peragg, pergroup);
+		if (aggstate->input_done
+			|| (node->aggstrategy == AGG_SORTED
+				&& aggstate->projected_set != -1
+				&& aggstate->projected_set < (numGroupingSets - 1)
+				&& currentSize > 0
+				&& !execTuplesMatch(econtext->ecxt_outertuple,
+									tmpcontext->ecxt_outertuple,
+									currentSize,
+									node->grpColIdx,
+									aggstate->eqfunctions,
+									tmpcontext->ecxt_per_tuple_memory)))
+		{
+			++aggstate->projected_set;
 
-		if (aggstate->grp_firstTuple != NULL)
+			Assert(aggstate->projected_set < numGroupingSets);
+			Assert(currentSize > 0 || aggstate->input_done);
+		}
+		else
 		{
 			/*
-			 * Store the copied first input tuple in the tuple table slot
-			 * reserved for it.  The tuple will be deleted when it is cleared
-			 * from the slot.
+			 * We no longer care which group we just projected; the next
+			 * projection will always be the first (or only) grouping set
+			 * (unless the input proves to be empty).
 			 */
-			ExecStoreTuple(aggstate->grp_firstTuple,
-						   firstSlot,
-						   InvalidBuffer,
-						   true);
-			aggstate->grp_firstTuple = NULL;	/* don't keep two pointers */
-
-			/* set up for first advance_aggregates call */
-			tmpcontext->ecxt_outertuple = firstSlot;
+			aggstate->projected_set = 0;
 
 			/*
-			 * Process each outer-plan tuple, and then fetch the next one,
-			 * until we exhaust the outer plan or cross a group boundary.
+			 * If we don't already have the first tuple of the new group, fetch it
+			 * from the outer plan.
 			 */
-			for (;;)
+			if (aggstate->grp_firstTuple == NULL)
 			{
-				advance_aggregates(aggstate, pergroup);
-
-				/* Reset per-input-tuple context after each tuple */
-				ResetExprContext(tmpcontext);
-
 				outerslot = ExecProcNode(outerPlan);
-				if (TupIsNull(outerslot))
+				if (!TupIsNull(outerslot))
 				{
-					/* no more outer-plan tuples available */
-					aggstate->agg_done = true;
-					break;
+					/*
+					 * Make a copy of the first input tuple; we will use this for
+					 * comparisons (in group mode) and for projection.
+					 */
+					aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
 				}
-				/* set up for next advance_aggregates call */
-				tmpcontext->ecxt_outertuple = outerslot;
+				else
+				{
+					/* outer plan produced no tuples at all */
+					if (hasRollup)
+					{
+						/*
+						 * If there was no input at all, we need to project
+						 * rows only if there are grouping sets of size 0.
+						 * Note that this implies that there can't be any
+						 * references to ungrouped Vars, which would otherwise
+						 * cause issues with the empty output slot.
+						 */
+						aggstate->input_done = true;
+
+						while (aggstate->gset_lengths[aggstate->projected_set] > 0)
+						{
+							aggstate->projected_set += 1;
+							if (aggstate->projected_set >= numGroupingSets)
+							{
+								aggstate->agg_done = true;
+								return NULL;
+							}
+						}
+					}
+					else
+					{
+						aggstate->agg_done = true;
+						/* If we are grouping, we should produce no tuples too */
+						if (node->aggstrategy != AGG_PLAIN)
+							return NULL;
+					}
+				}
+			}
+
+			/*
+			 * Initialize working state for a new input tuple group
+			 */
+			initialize_aggregates(aggstate, peragg, pergroup, numReset);
+
+			if (aggstate->grp_firstTuple != NULL)
+			{
+				/*
+				 * Store the copied first input tuple in the tuple table slot
+				 * reserved for it.  The tuple will be deleted when it is cleared
+				 * from the slot.
+				 */
+				ExecStoreTuple(aggstate->grp_firstTuple,
+							   firstSlot,
+							   InvalidBuffer,
+							   true);
+				aggstate->grp_firstTuple = NULL;	/* don't keep two pointers */
+
+				/* set up for first advance_aggregates call */
+				tmpcontext->ecxt_outertuple = firstSlot;
 
 				/*
-				 * If we are grouping, check whether we've crossed a group
-				 * boundary.
+				 * Process each outer-plan tuple, and then fetch the next one,
+				 * until we exhaust the outer plan or cross a group boundary.
 				 */
-				if (node->aggstrategy == AGG_SORTED)
+				for (;;)
 				{
-					if (!execTuplesMatch(firstSlot,
-										 outerslot,
-										 node->numCols, node->grpColIdx,
-										 aggstate->eqfunctions,
-										 tmpcontext->ecxt_per_tuple_memory))
+					advance_aggregates(aggstate, pergroup);
+
+					/* Reset per-input-tuple context after each tuple */
+					ResetExprContext(tmpcontext);
+
+					outerslot = ExecProcNode(outerPlan);
+					if (TupIsNull(outerslot))
 					{
-						/*
-						 * Save the first input tuple of the next group.
-						 */
-						aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
-						break;
+						/* no more outer-plan tuples available */
+						if (hasRollup)
+						{
+							aggstate->input_done = true;
+							break;
+						}
+						else
+						{
+							aggstate->agg_done = true;
+							break;
+						}
+					}
+					/* set up for next advance_aggregates call */
+					tmpcontext->ecxt_outertuple = outerslot;
+
+					/*
+					 * If we are grouping, check whether we've crossed a group
+					 * boundary.
+					 */
+					if (node->aggstrategy == AGG_SORTED)
+					{
+						if (!execTuplesMatch(firstSlot,
+											 outerslot,
+											 node->numCols,
+											 node->grpColIdx,
+											 aggstate->eqfunctions,
+											 tmpcontext->ecxt_per_tuple_memory))
+						{
+							aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
+							break;
+						}
 					}
 				}
 			}
+
+			/*
+			 * Use the representative input tuple for any references to
+			 * non-aggregated input columns in aggregate direct args, the node
+			 * qual, and the tlist.  (If we are not grouping, and there are no
+			 * input rows at all, we will come here with an empty firstSlot ...
+			 * but if not grouping, there can't be any references to
+			 * non-aggregated input columns, so no problem.)
+			 */
+			econtext->ecxt_outertuple = firstSlot;
 		}
 
-		/*
-		 * Use the representative input tuple for any references to
-		 * non-aggregated input columns in aggregate direct args, the node
-		 * qual, and the tlist.  (If we are not grouping, and there are no
-		 * input rows at all, we will come here with an empty firstSlot ...
-		 * but if not grouping, there can't be any references to
-		 * non-aggregated input columns, so no problem.)
-		 */
-		econtext->ecxt_outertuple = firstSlot;
+		Assert(aggstate->projected_set >= 0);
+
+		aggstate->current_set = currentGroup = aggstate->projected_set;
 
-		/*
-		 * Done scanning input tuple group. Finalize each aggregate
-		 * calculation, and stash results in the per-output-tuple context.
-		 */
 		for (aggno = 0; aggno < aggstate->numaggs; aggno++)
 		{
 			AggStatePerAgg peraggstate = &peragg[aggno];
-			AggStatePerGroup pergroupstate = &pergroup[aggno];
+			AggStatePerGroup pergroupstate;
+
+			pergroupstate = &pergroup[aggno + (currentGroup * (aggstate->numaggs))];
 
 			if (peraggstate->numSortCols > 0)
 			{
@@ -1292,6 +1440,9 @@ agg_retrieve_direct(AggState *aggstate)
 							   &aggvalues[aggno], &aggnulls[aggno]);
 		}
 
+		if (hasRollup)
+			econtext->grouped_cols = aggstate->grouped_cols[currentGroup];
+
 		/*
 		 * Check the qual (HAVING clause); if the group does not match, ignore
 		 * it and loop back to try to process another group.
@@ -1495,6 +1646,10 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	int			numaggs,
 				aggno;
 	ListCell   *l;
+	int        numGroupingSets = 1;
+	int        currentsortno = 0;
+	int        i = 0;
+	int        j = 0;
 
 	/* check for unsupported flags */
 	Assert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK)));
@@ -1508,38 +1663,69 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 
 	aggstate->aggs = NIL;
 	aggstate->numaggs = 0;
+	aggstate->numsets = 0;
 	aggstate->eqfunctions = NULL;
 	aggstate->hashfunctions = NULL;
+	aggstate->projected_set = -1;
+	aggstate->current_set = 0;
 	aggstate->peragg = NULL;
 	aggstate->curperagg = NULL;
 	aggstate->agg_done = false;
+	aggstate->input_done = false;
 	aggstate->pergroup = NULL;
 	aggstate->grp_firstTuple = NULL;
 	aggstate->hashtable = NULL;
 
+	if (node->groupingSets)
+	{
+		Assert(node->aggstrategy != AGG_HASHED);
+
+		numGroupingSets = list_length(node->groupingSets);
+		aggstate->numsets = numGroupingSets;
+		aggstate->gset_lengths = palloc(numGroupingSets * sizeof(int));
+		aggstate->grouped_cols = palloc(numGroupingSets * sizeof(Bitmapset *));
+
+		i = 0;
+		foreach(l, node->groupingSets)
+		{
+			int current_length = list_length(lfirst(l));
+			Bitmapset *cols = NULL;
+
+			/* planner forces this to be correct */
+			for (j = 0; j < current_length; ++j)
+				cols = bms_add_member(cols, node->grpColIdx[j]);
+
+			aggstate->grouped_cols[i] = cols;
+			aggstate->gset_lengths[i] = current_length;
+			++i;
+		}
+	}
+
+	aggstate->aggcontext = (ExprContext **) palloc0(sizeof(ExprContext *) * numGroupingSets);
+
 	/*
-	 * Create expression contexts.  We need two, one for per-input-tuple
-	 * processing and one for per-output-tuple processing.  We cheat a little
-	 * by using ExecAssignExprContext() to build both.
+	 * Create expression contexts.  We need three or more, one for
+	 * per-input-tuple processing, one for per-output-tuple processing, and one
+	 * for each grouping set.  The per-tuple memory context of the
+	 * per-grouping-set ExprContexts replaces the standalone memory context
+	 * formerly used to hold transition values.  We cheat a little by using
+	 * ExecAssignExprContext() to build all of them.
+	 *
+	 * NOTE: the details of what is stored in aggcontext and what is stored in
+	 * the regular per-query memory context are driven by a simple decision: we
+	 * want to reset the aggcontext at group boundaries (if not hashing) and in
+	 * ExecReScanAgg to recover no-longer-wanted space.
 	 */
 	ExecAssignExprContext(estate, &aggstate->ss.ps);
 	aggstate->tmpcontext = aggstate->ss.ps.ps_ExprContext;
-	ExecAssignExprContext(estate, &aggstate->ss.ps);
 
-	/*
-	 * We also need a long-lived memory context for holding hashtable data
-	 * structures and transition values.  NOTE: the details of what is stored
-	 * in aggcontext and what is stored in the regular per-query memory
-	 * context are driven by a simple decision: we want to reset the
-	 * aggcontext at group boundaries (if not hashing) and in ExecReScanAgg to
-	 * recover no-longer-wanted space.
-	 */
-	aggstate->aggcontext =
-		AllocSetContextCreate(CurrentMemoryContext,
-							  "AggContext",
-							  ALLOCSET_DEFAULT_MINSIZE,
-							  ALLOCSET_DEFAULT_INITSIZE,
-							  ALLOCSET_DEFAULT_MAXSIZE);
+	for (i = 0; i < numGroupingSets; ++i)
+	{
+		ExecAssignExprContext(estate, &aggstate->ss.ps);
+		aggstate->aggcontext[i] = aggstate->ss.ps.ps_ExprContext;
+	}
+
+	ExecAssignExprContext(estate, &aggstate->ss.ps);
 
 	/*
 	 * tuple table initialization
@@ -1645,7 +1831,8 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	{
 		AggStatePerGroup pergroup;
 
-		pergroup = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData) * numaggs);
+		pergroup = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData) * numaggs * numGroupingSets);
+
 		aggstate->pergroup = pergroup;
 	}
 
@@ -1708,7 +1895,10 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 		/* Begin filling in the peraggstate data */
 		peraggstate->aggrefstate = aggrefstate;
 		peraggstate->aggref = aggref;
-		peraggstate->sortstate = NULL;
+		peraggstate->sortstate = (Tuplesortstate **)
+			palloc0(sizeof(Tuplesortstate *) * numGroupingSets);
+
+		for (currentsortno = 0; currentsortno < numGroupingSets; currentsortno++)
+			peraggstate->sortstate[currentsortno] = NULL;
 
 		/* Fetch the pg_aggregate row */
 		aggTuple = SearchSysCache1(AGGFNOID,
@@ -2016,31 +2206,35 @@ ExecEndAgg(AggState *node)
 {
 	PlanState  *outerPlan;
 	int			aggno;
+	int			numGroupingSets = Max(node->numsets, 1);
+	int			i = 0;
 
 	/* Make sure we have closed any open tuplesorts */
 	for (aggno = 0; aggno < node->numaggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &node->peragg[aggno];
 
-		if (peraggstate->sortstate)
-			tuplesort_end(peraggstate->sortstate);
+		for (i = 0; i < numGroupingSets; i++)
+		{
+			if (peraggstate->sortstate[i])
+				tuplesort_end(peraggstate->sortstate[i]);
+		}
 	}
 
 	/* And ensure any agg shutdown callbacks have been called */
-	ReScanExprContext(node->ss.ps.ps_ExprContext);
+	for (i = 0; i < numGroupingSets; ++i)
+		ReScanExprContext(node->aggcontext[i]);
 
 	/*
-	 * Free both the expr contexts.
+	 * We don't actually free any ExprContexts here (see comment in
+	 * ExecFreeExprContext); just unlinking the output one from the plan
+	 * node suffices.
 	 */
 	ExecFreeExprContext(&node->ss.ps);
-	node->ss.ps.ps_ExprContext = node->tmpcontext;
-	ExecFreeExprContext(&node->ss.ps);
 
 	/* clean up tuple table */
 	ExecClearTuple(node->ss.ss_ScanTupleSlot);
 
-	MemoryContextDelete(node->aggcontext);
-
 	outerPlan = outerPlanState(node);
 	ExecEndNode(outerPlan);
 }
@@ -2049,13 +2243,17 @@ void
 ExecReScanAgg(AggState *node)
 {
 	ExprContext *econtext = node->ss.ps.ps_ExprContext;
+	Agg		   *aggnode = (Agg *) node->ss.ps.plan;
 	int			aggno;
+	int         numGroupingSets = Max(node->numsets, 1);
+	int         groupno;
+	int         i;
 
 	node->agg_done = false;
 
 	node->ss.ps.ps_TupFromTlist = false;
 
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+	if (aggnode->aggstrategy == AGG_HASHED)
 	{
 		/*
 		 * In the hashed case, if we haven't yet built the hash table then we
@@ -2081,14 +2279,35 @@ ExecReScanAgg(AggState *node)
 	/* Make sure we have closed any open tuplesorts */
 	for (aggno = 0; aggno < node->numaggs; aggno++)
 	{
-		AggStatePerAgg peraggstate = &node->peragg[aggno];
+		for (groupno = 0; groupno < numGroupingSets; groupno++)
+		{
+			AggStatePerAgg peraggstate = &node->peragg[aggno];
 
-		if (peraggstate->sortstate)
-			tuplesort_end(peraggstate->sortstate);
-		peraggstate->sortstate = NULL;
+			if (peraggstate->sortstate[groupno])
+			{
+				tuplesort_end(peraggstate->sortstate[groupno]);
+				peraggstate->sortstate[groupno] = NULL;
+			}
+		}
 	}
 
-	/* We don't need to ReScanExprContext here; ExecReScan already did it */
+	/*
+	 * We don't need to ReScanExprContext the output tuple context here;
+	 * ExecReScan already did it. But we do need to reset our per-grouping-set
+	 * contexts, which may have transvalues stored in them.
+	 *
+	 * Note that with AGG_HASHED, the hash table is allocated in a sub-context
+	 * of the aggcontext. We're going to rebuild the hash table from scratch,
+	 * so we need to use MemoryContextDeleteChildren() to avoid leaking the old
+	 * hash table's memory context header. (ReScanExprContext does the actual
+	 * reset, but it doesn't delete child contexts.)
+	 */
+	for (i = 0; i < numGroupingSets; ++i)
+	{
+		ReScanExprContext(node->aggcontext[i]);
+		MemoryContextDeleteChildren(node->aggcontext[i]->ecxt_per_tuple_memory);
+	}
 
 	/* Release first tuple of group, if we have made a copy */
 	if (node->grp_firstTuple != NULL)
@@ -2096,21 +2315,13 @@ ExecReScanAgg(AggState *node)
 		heap_freetuple(node->grp_firstTuple);
 		node->grp_firstTuple = NULL;
 	}
+	ExecClearTuple(node->ss.ss_ScanTupleSlot);
 
 	/* Forget current agg values */
 	MemSet(econtext->ecxt_aggvalues, 0, sizeof(Datum) * node->numaggs);
 	MemSet(econtext->ecxt_aggnulls, 0, sizeof(bool) * node->numaggs);
 
-	/*
-	 * Release all temp storage. Note that with AGG_HASHED, the hash table is
-	 * allocated in a sub-context of the aggcontext. We're going to rebuild
-	 * the hash table from scratch, so we need to use
-	 * MemoryContextResetAndDeleteChildren() to avoid leaking the old hash
-	 * table's memory context header.
-	 */
-	MemoryContextResetAndDeleteChildren(node->aggcontext);
-
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+	if (aggnode->aggstrategy == AGG_HASHED)
 	{
 		/* Rebuild an empty hash table */
 		build_hash_table(node);
@@ -2122,7 +2333,9 @@ ExecReScanAgg(AggState *node)
 		 * Reset the per-group state (in particular, mark transvalues null)
 		 */
 		MemSet(node->pergroup, 0,
-			   sizeof(AggStatePerGroupData) * node->numaggs);
+			   sizeof(AggStatePerGroupData) * node->numaggs * numGroupingSets);
+
+		node->input_done = false;
 	}
 
 	/*
@@ -2150,8 +2363,11 @@ ExecReScanAgg(AggState *node)
  * values could conceivably appear in future.)
  *
  * If aggcontext isn't NULL, the function also stores at *aggcontext the
- * identity of the memory context that aggregate transition values are
- * being stored in.
+ * identity of the memory context that aggregate transition values are being
+ * stored in.  Note that the same aggregate call site (flinfo) may be called
+ * interleaved on different transition values in different contexts, so it's
+ * not kosher to cache aggcontext under fn_extra.  It is, however, kosher to
+ * cache it in the transvalue itself (for internal-type transvalues).
  */
 int
 AggCheckCallContext(FunctionCallInfo fcinfo, MemoryContext *aggcontext)
@@ -2159,7 +2375,11 @@ AggCheckCallContext(FunctionCallInfo fcinfo, MemoryContext *aggcontext)
 	if (fcinfo->context && IsA(fcinfo->context, AggState))
 	{
 		if (aggcontext)
-			*aggcontext = ((AggState *) fcinfo->context)->aggcontext;
+		{
+			AggState    *aggstate = ((AggState *) fcinfo->context);
+			ExprContext *cxt  = aggstate->aggcontext[aggstate->current_set];
+			*aggcontext = cxt->ecxt_per_tuple_memory;
+		}
 		return AGG_CONTEXT_AGGREGATE;
 	}
 	if (fcinfo->context && IsA(fcinfo->context, WindowAggState))
@@ -2243,8 +2463,9 @@ AggRegisterCallback(FunctionCallInfo fcinfo,
 	if (fcinfo->context && IsA(fcinfo->context, AggState))
 	{
 		AggState   *aggstate = (AggState *) fcinfo->context;
+		ExprContext *cxt  = aggstate->aggcontext[aggstate->current_set];
 
-		RegisterExprContextCallback(aggstate->ss.ps.ps_ExprContext, func, arg);
+		RegisterExprContextCallback(cxt, func, arg);
 
 		return;
 	}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index aa053a0..8ce6411 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -779,6 +779,7 @@ _copyAgg(const Agg *from)
 		COPY_POINTER_FIELD(grpOperators, from->numCols * sizeof(Oid));
 	}
 	COPY_SCALAR_FIELD(numGroups);
+	COPY_NODE_FIELD(groupingSets);
 
 	return newnode;
 }
@@ -1065,6 +1066,59 @@ _copyVar(const Var *from)
 }
 
 /*
+ * _copyGrouping
+ */
+static Grouping *
+_copyGrouping(const Grouping *from)
+{
+	Grouping		   *newnode = makeNode(Grouping);
+
+	COPY_NODE_FIELD(args);
+	COPY_NODE_FIELD(refs);
+	COPY_NODE_FIELD(cols);
+	COPY_LOCATION_FIELD(location);
+	COPY_SCALAR_FIELD(agglevelsup);
+
+	return newnode;
+}
+
+/*
+ * _copyGroupedVar
+ */
+static GroupedVar *
+_copyGroupedVar(const GroupedVar *from)
+{
+	GroupedVar		   *newnode = makeNode(GroupedVar);
+
+	COPY_SCALAR_FIELD(varno);
+	COPY_SCALAR_FIELD(varattno);
+	COPY_SCALAR_FIELD(vartype);
+	COPY_SCALAR_FIELD(vartypmod);
+	COPY_SCALAR_FIELD(varcollid);
+	COPY_SCALAR_FIELD(varlevelsup);
+	COPY_SCALAR_FIELD(varnoold);
+	COPY_SCALAR_FIELD(varoattno);
+	COPY_LOCATION_FIELD(location);
+
+	return newnode;
+}
+
+/*
+ * _copyGroupingSet
+ */
+static GroupingSet *
+_copyGroupingSet(const GroupingSet *from)
+{
+	GroupingSet		   *newnode = makeNode(GroupingSet);
+
+	COPY_SCALAR_FIELD(kind);
+	COPY_NODE_FIELD(content);
+	COPY_LOCATION_FIELD(location);
+
+	return newnode;
+}
+
+/*
  * _copyConst
  */
 static Const *
@@ -2495,6 +2549,7 @@ _copyQuery(const Query *from)
 	COPY_NODE_FIELD(withCheckOptions);
 	COPY_NODE_FIELD(returningList);
 	COPY_NODE_FIELD(groupClause);
+	COPY_NODE_FIELD(groupingSets);
 	COPY_NODE_FIELD(havingQual);
 	COPY_NODE_FIELD(windowClause);
 	COPY_NODE_FIELD(distinctClause);
@@ -4079,6 +4134,15 @@ copyObject(const void *from)
 		case T_Var:
 			retval = _copyVar(from);
 			break;
+		case T_GroupedVar:
+			retval = _copyGroupedVar(from);
+			break;
+		case T_Grouping:
+			retval = _copyGrouping(from);
+			break;
+		case T_GroupingSet:
+			retval = _copyGroupingSet(from);
+			break;
 		case T_Const:
 			retval = _copyConst(from);
 			break;
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 719923e..0366088 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -153,6 +153,47 @@ _equalVar(const Var *a, const Var *b)
 }
 
 static bool
+_equalGrouping(const Grouping *a, const Grouping *b)
+{
+	COMPARE_NODE_FIELD(args);
+
+	/*
+	 * We must not compare the refs or cols fields
+	 */
+
+	COMPARE_LOCATION_FIELD(location);
+	COMPARE_SCALAR_FIELD(agglevelsup);
+
+	return true;
+}
+
+static bool
+_equalGroupedVar(const GroupedVar *a, const GroupedVar *b)
+{
+	COMPARE_SCALAR_FIELD(varno);
+	COMPARE_SCALAR_FIELD(varattno);
+	COMPARE_SCALAR_FIELD(vartype);
+	COMPARE_SCALAR_FIELD(vartypmod);
+	COMPARE_SCALAR_FIELD(varcollid);
+	COMPARE_SCALAR_FIELD(varlevelsup);
+	COMPARE_SCALAR_FIELD(varnoold);
+	COMPARE_SCALAR_FIELD(varoattno);
+	COMPARE_LOCATION_FIELD(location);
+
+	return true;
+}
+
+static bool
+_equalGroupingSet(const GroupingSet *a, const GroupingSet *b)
+{
+	COMPARE_SCALAR_FIELD(kind);
+	COMPARE_NODE_FIELD(content);
+	COMPARE_LOCATION_FIELD(location);
+
+	return true;
+}
+
+static bool
 _equalConst(const Const *a, const Const *b)
 {
 	COMPARE_SCALAR_FIELD(consttype);
@@ -864,6 +905,7 @@ _equalQuery(const Query *a, const Query *b)
 	COMPARE_NODE_FIELD(withCheckOptions);
 	COMPARE_NODE_FIELD(returningList);
 	COMPARE_NODE_FIELD(groupClause);
+	COMPARE_NODE_FIELD(groupingSets);
 	COMPARE_NODE_FIELD(havingQual);
 	COMPARE_NODE_FIELD(windowClause);
 	COMPARE_NODE_FIELD(distinctClause);
@@ -2556,6 +2598,15 @@ equal(const void *a, const void *b)
 		case T_Var:
 			retval = _equalVar(a, b);
 			break;
+		case T_GroupedVar:
+			retval = _equalGroupedVar(a, b);
+			break;
+		case T_Grouping:
+			retval = _equalGrouping(a, b);
+			break;
+		case T_GroupingSet:
+			retval = _equalGroupingSet(a, b);
+			break;
 		case T_Const:
 			retval = _equalConst(a, b);
 			break;
diff --git a/src/backend/nodes/list.c b/src/backend/nodes/list.c
index 5c09d2f..f878d1f 100644
--- a/src/backend/nodes/list.c
+++ b/src/backend/nodes/list.c
@@ -823,6 +823,32 @@ list_intersection(const List *list1, const List *list2)
 }
 
 /*
+ * As list_intersection but operates on lists of integers.
+ */
+List *
+list_intersection_int(const List *list1, const List *list2)
+{
+	List	   *result;
+	const ListCell *cell;
+
+	if (list1 == NIL || list2 == NIL)
+		return NIL;
+
+	Assert(IsIntegerList(list1));
+	Assert(IsIntegerList(list2));
+
+	result = NIL;
+	foreach(cell, list1)
+	{
+		if (list_member_int(list2, lfirst_int(cell)))
+			result = lappend_int(result, lfirst_int(cell));
+	}
+
+	check_list_invariants(result);
+	return result;
+}
+
+/*
  * Return a list that contains all the cells in list1 that are not in
  * list2. The returned list is freshly allocated via palloc(), but the
  * cells themselves point to the same objects as the cells of the
diff --git a/src/backend/nodes/makefuncs.c b/src/backend/nodes/makefuncs.c
index da59c58..e930cef 100644
--- a/src/backend/nodes/makefuncs.c
+++ b/src/backend/nodes/makefuncs.c
@@ -554,3 +554,18 @@ makeFuncCall(List *name, List *args, int location)
 	n->location = location;
 	return n;
 }
+
+/*
+ * makeGroupingSet
+ * Build a GroupingSet node with the given kind, content and location.
+ */
+GroupingSet *
+makeGroupingSet(GroupingSetKind kind, List *content, int location)
+{
+	GroupingSet	   *n = makeNode(GroupingSet);
+
+	n->kind = kind;
+	n->content = content;
+	n->location = location;
+	return n;
+}
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index 41e973b..6a63d1b 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -45,6 +45,12 @@ exprType(const Node *expr)
 		case T_Var:
 			type = ((const Var *) expr)->vartype;
 			break;
+		case T_Grouping:
+			type = INT4OID;
+			break;
+		case T_GroupedVar:
+			type = ((const GroupedVar *) expr)->vartype;
+			break;
 		case T_Const:
 			type = ((const Const *) expr)->consttype;
 			break;
@@ -261,6 +267,10 @@ exprTypmod(const Node *expr)
 	{
 		case T_Var:
 			return ((const Var *) expr)->vartypmod;
+		case T_Grouping:
+			return -1;
+		case T_GroupedVar:
+			return ((const GroupedVar *) expr)->vartypmod;
 		case T_Const:
 			return ((const Const *) expr)->consttypmod;
 		case T_Param:
@@ -734,6 +744,12 @@ exprCollation(const Node *expr)
 		case T_Var:
 			coll = ((const Var *) expr)->varcollid;
 			break;
+		case T_Grouping:
+			coll = InvalidOid;
+			break;
+		case T_GroupedVar:
+			coll = ((const GroupedVar *) expr)->varcollid;
+			break;
 		case T_Const:
 			coll = ((const Const *) expr)->constcollid;
 			break;
@@ -967,6 +983,9 @@ exprSetCollation(Node *expr, Oid collation)
 		case T_Var:
 			((Var *) expr)->varcollid = collation;
 			break;
+		case T_GroupedVar:
+			((GroupedVar *) expr)->varcollid = collation;
+			break;
 		case T_Const:
 			((Const *) expr)->constcollid = collation;
 			break;
@@ -1003,6 +1022,9 @@ exprSetCollation(Node *expr, Oid collation)
 		case T_BoolExpr:
 			Assert(!OidIsValid(collation));		/* result is always boolean */
 			break;
+		case T_Grouping:
+			Assert(!OidIsValid(collation));
+			break;
 		case T_SubLink:
 #ifdef USE_ASSERT_CHECKING
 			{
@@ -1182,6 +1204,15 @@ exprLocation(const Node *expr)
 		case T_Var:
 			loc = ((const Var *) expr)->location;
 			break;
+		case T_Grouping:
+			loc = ((const Grouping *) expr)->location;
+			break;
+		case T_GroupedVar:
+			loc = ((const GroupedVar *) expr)->location;
+			break;
+		case T_GroupingSet:
+			loc = ((const GroupingSet *) expr)->location;
+			break;
 		case T_Const:
 			loc = ((const Const *) expr)->location;
 			break;
@@ -1622,6 +1653,7 @@ expression_tree_walker(Node *node,
 	switch (nodeTag(node))
 	{
 		case T_Var:
+		case T_GroupedVar:
 		case T_Const:
 		case T_Param:
 		case T_CoerceToDomainValue:
@@ -1655,6 +1687,15 @@ expression_tree_walker(Node *node,
 					return true;
 			}
 			break;
+		case T_Grouping:
+			{
+				Grouping   *grouping = (Grouping *) node;
+
+				if (expression_tree_walker((Node *) grouping->args,
+										   walker, context))
+					return true;
+			}
+			break;
 		case T_WindowFunc:
 			{
 				WindowFunc *expr = (WindowFunc *) node;
@@ -2144,6 +2185,15 @@ expression_tree_mutator(Node *node,
 				return (Node *) newnode;
 			}
 			break;
+		case T_GroupedVar:
+			{
+				GroupedVar         *groupedvar = (GroupedVar *) node;
+				GroupedVar		   *newnode;
+
+				FLATCOPY(newnode, groupedvar, GroupedVar);
+				return (Node *) newnode;
+			}
+			break;
 		case T_Const:
 			{
 				Const	   *oldnode = (Const *) node;
@@ -2162,6 +2212,17 @@ expression_tree_mutator(Node *node,
 		case T_RangeTblRef:
 		case T_SortGroupClause:
 			return (Node *) copyObject(node);
+		case T_Grouping:
+			{
+				Grouping	   *grouping = (Grouping *) node;
+				Grouping	   *newnode;
+
+				FLATCOPY(newnode, grouping, Grouping);
+				MUTATE(newnode->args, grouping->args, List *);
+				/* assume no need to copy or mutate the refs list */
+				return (Node *) newnode;
+			}
+			break;
 		case T_WithCheckOption:
 			{
 				WithCheckOption *wco = (WithCheckOption *) node;
@@ -3209,6 +3270,8 @@ raw_expression_tree_walker(Node *node,
 			return walker(((WithClause *) node)->ctes, context);
 		case T_CommonTableExpr:
 			return walker(((CommonTableExpr *) node)->ctequery, context);
+		case T_GroupingSet:
+			return walker(((GroupingSet *) node)->content, context);
 		default:
 			elog(ERROR, "unrecognized node type: %d",
 				 (int) nodeTag(node));
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index e686a6c..6e4efb4 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -643,6 +643,8 @@ _outAgg(StringInfo str, const Agg *node)
 		appendStringInfo(str, " %u", node->grpOperators[i]);
 
 	WRITE_LONG_FIELD(numGroups);
+
+	WRITE_NODE_FIELD(groupingSets);
 }
 
 static void
@@ -912,6 +914,44 @@ _outVar(StringInfo str, const Var *node)
 }
 
 static void
+_outGrouping(StringInfo str, const Grouping *node)
+{
+	WRITE_NODE_TYPE("GROUPING");
+
+	WRITE_NODE_FIELD(args);
+	WRITE_NODE_FIELD(refs);
+	WRITE_NODE_FIELD(cols);
+	WRITE_LOCATION_FIELD(location);
+	WRITE_INT_FIELD(agglevelsup);
+}
+
+static void
+_outGroupedVar(StringInfo str, const GroupedVar *node)
+{
+	WRITE_NODE_TYPE("GROUPEDVAR");
+
+	WRITE_UINT_FIELD(varno);
+	WRITE_INT_FIELD(varattno);
+	WRITE_OID_FIELD(vartype);
+	WRITE_INT_FIELD(vartypmod);
+	WRITE_OID_FIELD(varcollid);
+	WRITE_UINT_FIELD(varlevelsup);
+	WRITE_UINT_FIELD(varnoold);
+	WRITE_INT_FIELD(varoattno);
+	WRITE_LOCATION_FIELD(location);
+}
+
+static void
+_outGroupingSet(StringInfo str, const GroupingSet *node)
+{
+	WRITE_NODE_TYPE("GROUPINGSET");
+
+	WRITE_ENUM_FIELD(kind, GroupingSetKind);
+	WRITE_NODE_FIELD(content);
+	WRITE_LOCATION_FIELD(location);
+}
+
+static void
 _outConst(StringInfo str, const Const *node)
 {
 	WRITE_NODE_TYPE("CONST");
@@ -2270,6 +2310,7 @@ _outQuery(StringInfo str, const Query *node)
 	WRITE_NODE_FIELD(withCheckOptions);
 	WRITE_NODE_FIELD(returningList);
 	WRITE_NODE_FIELD(groupClause);
+	WRITE_NODE_FIELD(groupingSets);
 	WRITE_NODE_FIELD(havingQual);
 	WRITE_NODE_FIELD(windowClause);
 	WRITE_NODE_FIELD(distinctClause);
@@ -2914,6 +2955,15 @@ _outNode(StringInfo str, const void *obj)
 			case T_Var:
 				_outVar(str, obj);
 				break;
+			case T_GroupedVar:
+				_outGroupedVar(str, obj);
+				break;
+			case T_Grouping:
+				_outGrouping(str, obj);
+				break;
+			case T_GroupingSet:
+				_outGroupingSet(str, obj);
+				break;
 			case T_Const:
 				_outConst(str, obj);
 				break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 69d9989..a58e099 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -215,6 +215,7 @@ _readQuery(void)
 	READ_NODE_FIELD(withCheckOptions);
 	READ_NODE_FIELD(returningList);
 	READ_NODE_FIELD(groupClause);
+	READ_NODE_FIELD(groupingSets);
 	READ_NODE_FIELD(havingQual);
 	READ_NODE_FIELD(windowClause);
 	READ_NODE_FIELD(distinctClause);
@@ -439,6 +440,53 @@ _readVar(void)
 	READ_DONE();
 }
 
+static Grouping *
+_readGrouping(void)
+{
+	READ_LOCALS(Grouping);
+
+	READ_NODE_FIELD(args);
+	READ_NODE_FIELD(refs);
+	READ_NODE_FIELD(cols);
+	READ_LOCATION_FIELD(location);
+	READ_INT_FIELD(agglevelsup);
+
+	READ_DONE();
+}
+
+/*
+ * _readGroupedVar
+ */
+static GroupedVar *
+_readGroupedVar(void)
+{
+	READ_LOCALS(GroupedVar);
+
+	READ_UINT_FIELD(varno);
+	READ_INT_FIELD(varattno);
+	READ_OID_FIELD(vartype);
+	READ_INT_FIELD(vartypmod);
+	READ_OID_FIELD(varcollid);
+	READ_UINT_FIELD(varlevelsup);
+	READ_UINT_FIELD(varnoold);
+	READ_INT_FIELD(varoattno);
+	READ_LOCATION_FIELD(location);
+
+	READ_DONE();
+}
+
+static GroupingSet *
+_readGroupingSet(void)
+{
+	READ_LOCALS(GroupingSet);
+
+	READ_ENUM_FIELD(kind, GroupingSetKind);
+	READ_NODE_FIELD(content);
+	READ_LOCATION_FIELD(location);
+
+	READ_DONE();
+}
+
 /*
  * _readConst
  */
@@ -1320,6 +1368,12 @@ parseNodeString(void)
 		return_value = _readIntoClause();
 	else if (MATCH("VAR", 3))
 		return_value = _readVar();
+	else if (MATCH("GROUPEDVAR", 10))
+		return_value = _readGroupedVar();
+	else if (MATCH("GROUPING", 8))
+		return_value = _readGrouping();
+	else if (MATCH("GROUPINGSET", 11))
+		return_value = _readGroupingSet();
 	else if (MATCH("CONST", 5))
 		return_value = _readConst();
 	else if (MATCH("PARAM", 5))
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index c81efe9..a16df6f 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -1231,6 +1231,7 @@ set_subquery_pathlist(PlannerInfo *root, RelOptInfo *rel,
 	 */
 	if (parse->hasAggs ||
 		parse->groupClause ||
+		parse->groupingSets ||
 		parse->havingQual ||
 		parse->distinctClause ||
 		parse->sortClause ||
@@ -2104,7 +2105,7 @@ subquery_push_qual(Query *subquery, RangeTblEntry *rte, Index rti, Node *qual)
 		 * subquery uses grouping or aggregation, put it in HAVING (since the
 		 * qual really refers to the group-result rows).
 		 */
-		if (subquery->hasAggs || subquery->groupClause || subquery->havingQual)
+		if (subquery->hasAggs || subquery->groupClause || subquery->groupingSets || subquery->havingQual)
 			subquery->havingQual = make_and_qual(subquery->havingQual, qual);
 		else
 			subquery->jointree->quals =
diff --git a/src/backend/optimizer/plan/analyzejoins.c b/src/backend/optimizer/plan/analyzejoins.c
index 773f8a4..e8b6671 100644
--- a/src/backend/optimizer/plan/analyzejoins.c
+++ b/src/backend/optimizer/plan/analyzejoins.c
@@ -580,6 +580,7 @@ query_supports_distinctness(Query *query)
 {
 	if (query->distinctClause != NIL ||
 		query->groupClause != NIL ||
+		query->groupingSets != NIL ||
 		query->hasAggs ||
 		query->havingQual ||
 		query->setOperations)
@@ -648,10 +649,10 @@ query_is_distinct_for(Query *query, List *colnos, List *opids)
 	}
 
 	/*
-	 * Similarly, GROUP BY guarantees uniqueness if all the grouped columns
-	 * appear in colnos and operator semantics match.
+	 * Similarly, GROUP BY without GROUPING SETS guarantees uniqueness if all
+	 * the grouped columns appear in colnos and operator semantics match.
 	 */
-	if (query->groupClause)
+	if (query->groupClause && !query->groupingSets)
 	{
 		foreach(l, query->groupClause)
 		{
@@ -667,6 +668,27 @@ query_is_distinct_for(Query *query, List *colnos, List *opids)
 		if (l == NULL)			/* had matches for all? */
 			return true;
 	}
+	else if (query->groupingSets)
+	{
+		/*
+		 * If we have grouping sets with expressions, we probably
+		 * don't have uniqueness and analysis would be hard. Punt.
+		 */
+		if (query->groupClause)
+			return false;
+
+		/*
+		 * If we have no groupClause (therefore no grouping expressions),
+		 * we might have one or many empty grouping sets. If there's just
+		 * one, then we're returning only one row and are certainly unique.
+		 * But otherwise, we know we're certainly not unique.
+		 */
+		if (list_length(query->groupingSets) == 1
+			&& ((GroupingSet *)linitial(query->groupingSets))->kind == GROUPING_SET_EMPTY)
+			return true;
+		else
+			return false;
+	}
 	else
 	{
 		/*
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index 4b641a2..1a47f0f 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -1015,6 +1015,7 @@ create_unique_plan(PlannerInfo *root, UniquePath *best_path)
 								 numGroupCols,
 								 groupColIdx,
 								 groupOperators,
+								 NIL,
 								 numGroups,
 								 subplan);
 	}
@@ -4265,6 +4266,7 @@ Agg *
 make_agg(PlannerInfo *root, List *tlist, List *qual,
 		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
 		 int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators,
+		 List *groupingSets,
 		 long numGroups,
 		 Plan *lefttree)
 {
@@ -4294,10 +4296,12 @@ make_agg(PlannerInfo *root, List *tlist, List *qual,
 	 * group otherwise.
 	 */
 	if (aggstrategy == AGG_PLAIN)
-		plan->plan_rows = 1;
+		plan->plan_rows = groupingSets ? list_length(groupingSets) : 1;
 	else
 		plan->plan_rows = numGroups;
 
+	node->groupingSets = groupingSets;
+
 	/*
 	 * We also need to account for the cost of evaluation of the qual (ie, the
 	 * HAVING clause) and the tlist.  Note that cost_qual_eval doesn't charge
diff --git a/src/backend/optimizer/plan/planagg.c b/src/backend/optimizer/plan/planagg.c
index 94ca92d..296b789 100644
--- a/src/backend/optimizer/plan/planagg.c
+++ b/src/backend/optimizer/plan/planagg.c
@@ -96,7 +96,7 @@ preprocess_minmax_aggregates(PlannerInfo *root, List *tlist)
 	 * performs assorted processing related to these features between calling
 	 * preprocess_minmax_aggregates and optimize_minmax_aggregates.)
 	 */
-	if (parse->groupClause || parse->hasWindowFuncs)
+	if (parse->groupClause || list_length(parse->groupingSets) > 1 || parse->hasWindowFuncs)
 		return;
 
 	/*
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index e1480cd..f53cc0a 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -22,6 +22,7 @@
 #include "executor/nodeAgg.h"
 #include "miscadmin.h"
 #include "nodes/makefuncs.h"
+#include "nodes/nodeFuncs.h"
 #ifdef OPTIMIZER_DEBUG
 #include "nodes/print.h"
 #endif
@@ -37,6 +38,7 @@
 #include "optimizer/tlist.h"
 #include "parser/analyze.h"
 #include "parser/parsetree.h"
+#include "parser/parse_agg.h"
 #include "rewrite/rewriteManip.h"
 #include "utils/rel.h"
 #include "utils/selfuncs.h"
@@ -77,7 +79,8 @@ static double preprocess_limit(PlannerInfo *root,
 				 double tuple_fraction,
 				 int64 *offset_est, int64 *count_est);
 static bool limit_needed(Query *parse);
-static void preprocess_groupclause(PlannerInfo *root);
+static List *preprocess_groupclause(PlannerInfo *root, List *force);
+static List *extract_rollup_sets(List *groupingSets, List *sortclause, List **remainder);
 static void standard_qp_callback(PlannerInfo *root, void *extra);
 static bool choose_hashed_grouping(PlannerInfo *root,
 					   double tuple_fraction, double limit_tuples,
@@ -315,6 +318,8 @@ subquery_planner(PlannerGlobal *glob, Query *parse,
 	root->append_rel_list = NIL;
 	root->rowMarks = NIL;
 	root->hasInheritedTarget = false;
+	root->groupColIdx = NULL;
+	root->grouping_map = NULL;
 
 	root->hasRecursion = hasRecursion;
 	if (hasRecursion)
@@ -531,7 +536,8 @@ subquery_planner(PlannerGlobal *glob, Query *parse,
 
 		if (contain_agg_clause(havingclause) ||
 			contain_volatile_functions(havingclause) ||
-			contain_subplans(havingclause))
+			contain_subplans(havingclause) ||
+			parse->groupingSets)
 		{
 			/* keep it in HAVING */
 			newHaving = lappend(newHaving, havingclause);
@@ -1187,15 +1193,77 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		bool		use_hashed_grouping = false;
 		WindowFuncLists *wflists = NULL;
 		List	   *activeWindows = NIL;
+		int			maxref = 0;
+		int		   *refmap = NULL;
 
 		MemSet(&agg_costs, 0, sizeof(AggClauseCosts));
 
 		/* A recursive query should always have setOperations */
 		Assert(!root->hasRecursion);
 
-		/* Preprocess GROUP BY clause, if any */
-		if (parse->groupClause)
-			preprocess_groupclause(root);
+		/* Preprocess grouping sets, if any */
+		if (parse->groupingSets)
+			parse->groupingSets = expand_grouping_sets(parse->groupingSets, -1);
+
+		if (parse->groupingSets)
+		{
+			ListCell   *lc;
+			ListCell   *lc2;
+			int			ref = 0;
+			List	   *remaining_sets = NIL;
+			List	   *usable_sets = extract_rollup_sets(parse->groupingSets,
+														  parse->sortClause,
+														  &remaining_sets);
+
+			/*
+			 * TODO - if the grouping set list can't be handled as one rollup...
+			 */
+
+			if (remaining_sets != NIL)
+				elog(ERROR, "not implemented yet");
+
+			parse->groupingSets = usable_sets;
+
+			if (parse->groupClause)
+				preprocess_groupclause(root, linitial(parse->groupingSets));
+
+			/*
+			 * Now that we've pinned down an order for the groupClause for this
+			 * list of grouping sets, remap the entries in the grouping sets
+			 * from sortgrouprefs to plain indices into the groupClause.
+			 */
+
+			foreach(lc, parse->groupClause)
+			{
+				SortGroupClause *gc = lfirst(lc);
+				if (gc->tleSortGroupRef > maxref)
+					maxref = gc->tleSortGroupRef;
+			}
+
+			refmap = palloc0(sizeof(int) * (maxref + 1));
+
+			foreach(lc, parse->groupClause)
+			{
+				SortGroupClause *gc = lfirst(lc);
+				refmap[gc->tleSortGroupRef] = ++ref;
+			}
+
+			foreach(lc, usable_sets)
+			{
+				foreach(lc2, (List *) lfirst(lc))
+				{
+					Assert(refmap[lfirst_int(lc2)] > 0);
+					lfirst_int(lc2) = refmap[lfirst_int(lc2)] - 1;
+				}
+			}
+		}
+		else
+		{
+			/* Preprocess GROUP BY clause, if any */
+			if (parse->groupClause)
+				preprocess_groupclause(root, NIL);
+		}
+
 		numGroupCols = list_length(parse->groupClause);
 
 		/* Preprocess targetlist */
@@ -1257,6 +1325,9 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 			preprocess_minmax_aggregates(root, tlist);
 		}
 
+		if (refmap)
+			pfree(refmap);
+
 		/* Make tuple_fraction accessible to lower-level routines */
 		root->tuple_fraction = tuple_fraction;
 
@@ -1267,6 +1338,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		 * grouping/aggregation operations.
 		 */
 		if (parse->groupClause ||
+			parse->groupingSets ||
 			parse->distinctClause ||
 			parse->hasAggs ||
 			parse->hasWindowFuncs ||
@@ -1312,7 +1384,23 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 
 			groupExprs = get_sortgrouplist_exprs(parse->groupClause,
 												 parse->targetList);
-			dNumGroups = estimate_num_groups(root, groupExprs, path_rows);
+			if (parse->groupingSets)
+			{
+				ListCell   *lc;
+
+				dNumGroups = 0;
+
+				foreach(lc, parse->groupingSets)
+				{
+					dNumGroups += estimate_num_groups(root,
+													  groupExprs,
+													  path_rows,
+													  (List **) &(lfirst(lc)));
+				}
+			}
+			else
+				dNumGroups = estimate_num_groups(root, groupExprs, path_rows,
+												 NULL);
 
 			/*
 			 * In GROUP BY mode, an absolute LIMIT is relative to the number
@@ -1338,7 +1426,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 									   root->group_pathkeys))
 				tuple_fraction = 0.0;
 		}
-		else if (parse->hasAggs || root->hasHavingQual)
+		else if (parse->hasAggs || root->hasHavingQual || parse->groupingSets)
 		{
 			/*
 			 * Ungrouped aggregate will certainly want to read all the tuples,
@@ -1360,7 +1448,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 
 			distinctExprs = get_sortgrouplist_exprs(parse->distinctClause,
 													parse->targetList);
-			dNumGroups = estimate_num_groups(root, distinctExprs, path_rows);
+			dNumGroups = estimate_num_groups(root, distinctExprs, path_rows, NULL);
 
 			/*
 			 * Adjust tuple_fraction the same way as for GROUP BY, too.
@@ -1443,13 +1531,24 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		{
 			/*
 			 * If grouping, decide whether to use sorted or hashed grouping.
+			 * If grouping sets are present, we can currently do only sorted
+			 * grouping.
 			 */
-			use_hashed_grouping =
-				choose_hashed_grouping(root,
-									   tuple_fraction, limit_tuples,
-									   path_rows, path_width,
-									   cheapest_path, sorted_path,
-									   dNumGroups, &agg_costs);
+
+			if (parse->groupingSets)
+			{
+				use_hashed_grouping = false;
+			}
+			else
+			{
+				use_hashed_grouping =
+					choose_hashed_grouping(root,
+										   tuple_fraction, limit_tuples,
+										   path_rows, path_width,
+										   cheapest_path, sorted_path,
+										   dNumGroups, &agg_costs);
+			}
+
 			/* Also convert # groups to long int --- but 'ware overflow! */
 			numGroups = (long) Min(dNumGroups, (double) LONG_MAX);
 		}
@@ -1591,12 +1690,13 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 												numGroupCols,
 												groupColIdx,
 									extract_grouping_ops(parse->groupClause),
+												NIL,
 												numGroups,
 												result_plan);
 				/* Hashed aggregation produces randomly-ordered results */
 				current_pathkeys = NIL;
 			}
-			else if (parse->hasAggs)
+			else if (parse->hasAggs || (parse->groupingSets && parse->groupClause))
 			{
 				/* Plain aggregate plan --- sort if needed */
 				AggStrategy aggstrategy;
@@ -1622,7 +1722,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 				else
 				{
 					aggstrategy = AGG_PLAIN;
-					/* Result will be only one row anyway; no sort order */
+					/* Result will have no sort order */
 					current_pathkeys = NIL;
 				}
 
@@ -1634,6 +1734,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 												numGroupCols,
 												groupColIdx,
 									extract_grouping_ops(parse->groupClause),
+												parse->groupingSets,
 												numGroups,
 												result_plan);
 			}
@@ -1666,27 +1767,66 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 												  result_plan);
 				/* The Group node won't change sort ordering */
 			}
-			else if (root->hasHavingQual)
+			else if (root->hasHavingQual || parse->groupingSets)
 			{
+				int		nrows = list_length(parse->groupingSets);
+
 				/*
-				 * No aggregates, and no GROUP BY, but we have a HAVING qual.
+				 * No aggregates, and no GROUP BY, but we have a HAVING qual or
+				 * grouping sets (which by elimination of cases above must
+				 * consist solely of empty grouping sets, since otherwise
+				 * groupClause will be non-empty).
+				 *
 				 * This is a degenerate case in which we are supposed to emit
-				 * either 0 or 1 row depending on whether HAVING succeeds.
-				 * Furthermore, there cannot be any variables in either HAVING
-				 * or the targetlist, so we actually do not need the FROM
-				 * table at all!  We can just throw away the plan-so-far and
-				 * generate a Result node.  This is a sufficiently unusual
-				 * corner case that it's not worth contorting the structure of
-				 * this routine to avoid having to generate the plan in the
-				 * first place.
+				 * either 0 or 1 row for each grouping set depending on whether
+				 * HAVING succeeds.  Furthermore, there cannot be any variables
+				 * in either HAVING or the targetlist, so we actually do not
+				 * need the FROM table at all!  We can just throw away the
+				 * plan-so-far and generate a Result node.  This is a
+				 * sufficiently unusual corner case that it's not worth
+				 * contorting the structure of this routine to avoid having to
+				 * generate the plan in the first place.
 				 */
 				result_plan = (Plan *) make_result(root,
 												   tlist,
 												   parse->havingQual,
 												   NULL);
+
+				/*
+				 * Doesn't seem worthwhile writing code to cons up a
+				 * generate_series or a values scan to emit multiple rows.
+				 * Instead just clone the result in an Append.
+				 */
+				if (nrows > 1)
+				{
+					List   *plans = list_make1(result_plan);
+
+					while (--nrows > 0)
+						plans = lappend(plans, copyObject(result_plan));
+
+					result_plan = (Plan *) make_append(plans, tlist);
+				}
 			}
 		}						/* end of non-minmax-aggregate case */
 
+		/* Record grouping_map based on final groupColIdx, for setrefs */
+
+		if (parse->groupingSets)
+		{
+			AttrNumber *grouping_map = palloc0(sizeof(AttrNumber) * (maxref + 1));
+			ListCell   *lc;
+			int			i = 0;
+
+			foreach(lc, parse->groupClause)
+			{
+				SortGroupClause *gc = lfirst(lc);
+				grouping_map[gc->tleSortGroupRef] = groupColIdx[i++];
+			}
+
+			root->groupColIdx = groupColIdx;
+			root->grouping_map = grouping_map;
+		}
+
 		/*
 		 * Since each window function could require a different sort order, we
 		 * stack up a WindowAgg node for each window, with sort steps between
@@ -1849,7 +1989,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		 * result was already mostly unique).  If not, use the number of
 		 * distinct-groups calculated previously.
 		 */
-		if (parse->groupClause || root->hasHavingQual || parse->hasAggs)
+		if (parse->groupClause || parse->groupingSets || root->hasHavingQual || parse->hasAggs)
 			dNumDistinctRows = result_plan->plan_rows;
 		else
 			dNumDistinctRows = dNumGroups;
@@ -1890,6 +2030,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 								 extract_grouping_cols(parse->distinctClause,
 													result_plan->targetlist),
 								 extract_grouping_ops(parse->distinctClause),
+											NIL,
 											numDistinctRows,
 											result_plan);
 			/* Hashed aggregation produces randomly-ordered results */
@@ -2508,6 +2649,7 @@ limit_needed(Query *parse)
 }
 
 
+
 /*
  * preprocess_groupclause - do preparatory work on GROUP BY clause
  *
@@ -2524,18 +2666,32 @@ limit_needed(Query *parse)
  * Note: we need no comparable processing of the distinctClause because
  * the parser already enforced that that matches ORDER BY.
  */
-static void
-preprocess_groupclause(PlannerInfo *root)
+static List *
+preprocess_groupclause(PlannerInfo *root, List *force)
 {
 	Query	   *parse = root->parse;
-	List	   *new_groupclause;
+	List	   *new_groupclause = NIL;
 	bool		partial_match;
 	ListCell   *sl;
 	ListCell   *gl;
 
+	/* For grouping sets, we may need to force the ordering */
+	if (force)
+	{
+		foreach(sl, force)
+		{
+			Index ref = lfirst_int(sl);
+			SortGroupClause *cl = get_sortgroupref_clause(ref, parse->groupClause);
+
+			new_groupclause = lappend(new_groupclause, cl);
+		}
+
+		return new_groupclause;
+	}
+
 	/* If no ORDER BY, nothing useful to do here */
 	if (parse->sortClause == NIL)
-		return;
+		return parse->groupClause;
 
 	/*
 	 * Scan the ORDER BY clause and construct a list of matching GROUP BY
@@ -2543,7 +2699,6 @@ preprocess_groupclause(PlannerInfo *root)
 	 *
 	 * This code assumes that the sortClause contains no duplicate items.
 	 */
-	new_groupclause = NIL;
 	foreach(sl, parse->sortClause)
 	{
 		SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
@@ -2567,7 +2722,7 @@ preprocess_groupclause(PlannerInfo *root)
 
 	/* If no match at all, no point in reordering GROUP BY */
 	if (new_groupclause == NIL)
-		return;
+		return parse->groupClause;
 
 	/*
 	 * Add any remaining GROUP BY items to the new list, but only if we were
@@ -2584,15 +2739,113 @@ preprocess_groupclause(PlannerInfo *root)
 		if (list_member_ptr(new_groupclause, gc))
 			continue;			/* it matched an ORDER BY item */
 		if (partial_match)
-			return;				/* give up, no common sort possible */
+			return parse->groupClause;	/* give up, no common sort possible */
 		if (!OidIsValid(gc->sortop))
-			return;				/* give up, GROUP BY can't be sorted */
+			return parse->groupClause;	/* give up, GROUP BY can't be sorted */
 		new_groupclause = lappend(new_groupclause, gc);
 	}
 
 	/* Success --- install the rearranged GROUP BY list */
 	Assert(list_length(parse->groupClause) == list_length(new_groupclause));
-	parse->groupClause = new_groupclause;
+	return new_groupclause;
+}
+
+
+/*
+ * Extract a list of grouping sets that can be implemented using a single
+ * rollup-type aggregate pass. The order of elements in each returned set is
+ * modified to ensure proper prefix relationships; the sets are returned in
+ * decreasing order of size. (The input must also be in descending order of
+ * size.)
+ *
+ * If we're passed in a sortclause, we follow its order of columns to the
+ * extent possible, to minimize the chance that we add unnecessary sorts.
+ *
+ * Sets that can't be accommodated within a rollup that includes the first
+ * (and therefore largest) grouping set in the input are added to the
+ * remainder list.
+ */
+
+static List *
+extract_rollup_sets(List *groupingSets, List *sortclause, List **remainder)
+{
+	ListCell   *lc;
+	ListCell   *lc2;
+	List	   *previous = linitial(groupingSets);
+	List	   *tmp_result = list_make1(previous);
+	List	   *result = NIL;
+
+	for_each_cell(lc, lnext(list_head(groupingSets)))
+	{
+		List   *candidate = lfirst(lc);
+		bool	ok = true;
+
+		foreach(lc2, candidate)
+		{
+			int ref = lfirst_int(lc2);
+			if (!list_member_int(previous, ref))
+			{
+				ok = false;
+				break;
+			}
+		}
+
+		if (ok)
+		{
+			tmp_result = lcons(candidate, tmp_result);
+			previous = candidate;
+		}
+		else
+			*remainder = lappend(*remainder, candidate);
+	}
+
+	/*
+	 * reorder the list elements so that shorter sets are strict
+	 * prefixes of longer ones, and if we ever have a choice, try
+	 * to follow the sortclause if there is one. (We're trying
+	 * here to ensure that GROUPING SETS ((a,b),(b)) ORDER BY b,a
+	 * gets implemented in one pass.)
+	 */
+
+	previous = NIL;
+
+	foreach(lc, tmp_result)
+	{
+		List   *candidate = lfirst(lc);
+		List   *new_elems = list_difference_int(candidate, previous);
+
+		if (list_length(new_elems) > 0)
+		{
+			while (list_length(sortclause) > list_length(previous))
+			{
+				SortGroupClause *sc = list_nth(sortclause, list_length(previous));
+				int ref = sc->tleSortGroupRef;
+				if (list_member_int(new_elems, ref))
+				{
+					previous = lappend_int(previous, ref);
+					new_elems = list_delete_int(new_elems, ref);
+				}
+				else
+				{
+					sortclause = NIL;
+					break;
+				}
+			}
+
+			foreach(lc2, new_elems)
+			{
+				previous = lappend_int(previous, lfirst_int(lc2));
+			}
+		}
+
+		result = lcons(list_copy(previous), result);
+		list_free(new_elems);
+	}
+
+	list_free(previous);
+	list_free(tmp_result);
+
+	return result;
 }
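
(Editorial aside, not part of the patch.) The acceptance test at the heart of extract_rollup_sets above -- a candidate grouping set can join the rollup chain only if every one of its column refs appears in the previously accepted, larger set -- can be sketched standalone. The int-array representation and the names is_member/fits_rollup are invented here for illustration; the patch itself works with List * and list_member_int.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* true if ref occurs in set[0..n-1] (stands in for list_member_int) */
static bool
is_member(int ref, const int *set, size_t n)
{
	size_t		i;

	for (i = 0; i < n; i++)
		if (set[i] == ref)
			return true;
	return false;
}

/*
 * true if every element of candidate[0..nc-1] appears in previous[0..np-1],
 * i.e. the candidate set can extend the current rollup chain
 */
static bool
fits_rollup(const int *candidate, size_t nc, const int *previous, size_t np)
{
	size_t		i;

	for (i = 0; i < nc; i++)
		if (!is_member(candidate[i], previous, np))
			return false;
	return true;
}
```

For example, with input sets (a,b), (b), (c): (b) fits under (a,b) and joins the chain, while (c) goes to the remainder list and would need a separate pass.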
 
 /*
@@ -3040,7 +3293,7 @@ make_subplanTargetList(PlannerInfo *root,
 	 * If we're not grouping or aggregating, there's nothing to do here;
 	 * query_planner should receive the unmodified target list.
 	 */
-	if (!parse->hasAggs && !parse->groupClause && !root->hasHavingQual &&
+	if (!parse->hasAggs && !parse->groupClause && !parse->groupingSets && !root->hasHavingQual &&
 		!parse->hasWindowFuncs)
 	{
 		*need_tlist_eval = true;
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index 4d717df..346c84d 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -68,6 +68,12 @@ typedef struct
 	int			rtoffset;
 } fix_upper_expr_context;
 
+typedef struct
+{
+	PlannerInfo *root;
+	Bitmapset   *groupedcols;
+} set_group_vars_context;
+
 /*
  * Check if a Const node is a regclass value.  We accept plain OID too,
  * since a regclass Const will get folded to that type if it's an argument
@@ -134,6 +140,8 @@ static List *set_returning_clause_references(PlannerInfo *root,
 static bool fix_opfuncids_walker(Node *node, void *context);
 static bool extract_query_dependencies_walker(Node *node,
 								  PlannerInfo *context);
+static void set_group_vars(PlannerInfo *root, Agg *agg);
+static Node *set_group_vars_mutator(Node *node, set_group_vars_context *context);
 
 
 /*****************************************************************************
@@ -647,6 +655,9 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
 			}
 			break;
 		case T_Agg:
+			set_upper_references(root, plan, rtoffset);
+			set_group_vars(root, (Agg *) plan);
+			break;
 		case T_Group:
 			set_upper_references(root, plan, rtoffset);
 			break;
@@ -1119,6 +1130,31 @@ fix_expr_common(PlannerInfo *root, Node *node)
 				lappend_oid(root->glob->relationOids,
 							DatumGetObjectId(con->constvalue));
 	}
+	else if (IsA(node, Grouping))
+	{
+		Grouping   *g = (Grouping *) node;
+		AttrNumber *refmap = root->grouping_map;
+
+		/* If there are no grouping sets, we don't need this. */
+
+		Assert(refmap || g->cols == NIL);
+
+		if (refmap)
+		{
+			ListCell   *lc;
+			List	   *cols = NIL;
+
+			foreach(lc, g->refs)
+			{
+				cols = lappend_int(cols, refmap[lfirst_int(lc)]);
+			}
+
+			Assert(!g->cols || equal(cols, g->cols));
+
+			if (!g->cols)
+				g->cols = cols;
+		}
+	}
 }
 
 /*
@@ -1246,6 +1282,67 @@ fix_scan_expr_walker(Node *node, fix_scan_expr_context *context)
 								  (void *) context);
 }
 
+
+/*
+ * set_group_vars
+ *    Modify any Var references in the target list of a non-trivial Agg
+ *    node (i.e. one that computes grouping sets) to use GroupedVar
+ *    instead, which will conditionally replace them with nulls at runtime.
+ */
+static void
+set_group_vars(PlannerInfo *root, Agg *agg)
+{
+	set_group_vars_context context;
+	int i;
+	Bitmapset *cols = NULL;
+
+	if (!agg->groupingSets)
+		return;
+
+	context.root = root;
+
+	for (i = 0; i < agg->numCols; ++i)
+		cols = bms_add_member(cols, agg->grpColIdx[i]);
+
+	context.groupedcols = cols;
+
+	agg->plan.targetlist = (List *) set_group_vars_mutator((Node *) agg->plan.targetlist,
+														   &context);
+	agg->plan.qual = (List *) set_group_vars_mutator((Node *) agg->plan.qual,
+													 &context);
+}
+
+static Node *
+set_group_vars_mutator(Node *node, set_group_vars_context *context)
+{
+	if (node == NULL)
+		return NULL;
+	if (IsA(node, Var))
+	{
+		Var *var = (Var *) node;
+
+		if (var->varno == OUTER_VAR
+			&& bms_is_member(var->varattno, context->groupedcols))
+		{
+			var = copyVar(var);
+			var->xpr.type = T_GroupedVar;
+		}
+
+		return (Node *) var;
+	}
+	else if (IsA(node, Aggref) || IsA(node, Grouping))
+	{
+		/*
+		 * don't recurse into Aggrefs, since they see the values prior
+		 * to grouping.
+		 */
+		return node;
+	}
+	return expression_tree_mutator(node, set_group_vars_mutator,
+								   (void *) context);
+}
+
+
 /*
  * set_join_references
  *	  Modify the target list and quals of a join node to reference its
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index 3e7dc85..e0a2ca7 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -336,6 +336,48 @@ replace_outer_agg(PlannerInfo *root, Aggref *agg)
 }
 
 /*
+ * Generate a Param node to replace the given Grouping expression
+ * which is expected to have agglevelsup > 0 (ie, it is not local).
+ */
+static Param *
+replace_outer_grouping(PlannerInfo *root, Grouping *grp)
+{
+	Param	   *retval;
+	PlannerParamItem *pitem;
+	Index		levelsup;
+
+	Assert(grp->agglevelsup > 0 && grp->agglevelsup < root->query_level);
+
+	/* Find the query level the Grouping belongs to */
+	for (levelsup = grp->agglevelsup; levelsup > 0; levelsup--)
+		root = root->parent_root;
+
+	/*
+	 * It does not seem worthwhile to try to match duplicate outer aggs. Just
+	 * make a new slot every time.
+	 */
+	grp = (Grouping *) copyObject(grp);
+	IncrementVarSublevelsUp((Node *) grp, -((int) grp->agglevelsup), 0);
+	Assert(grp->agglevelsup == 0);
+
+	pitem = makeNode(PlannerParamItem);
+	pitem->item = (Node *) grp;
+	pitem->paramId = root->glob->nParamExec++;
+
+	root->plan_params = lappend(root->plan_params, pitem);
+
+	retval = makeNode(Param);
+	retval->paramkind = PARAM_EXEC;
+	retval->paramid = pitem->paramId;
+	retval->paramtype = exprType((Node *) grp);
+	retval->paramtypmod = -1;
+	retval->paramcollid = InvalidOid;
+	retval->location = grp->location;
+
+	return retval;
+}
+
+/*
  * Generate a new Param node that will not conflict with any other.
  *
  * This is used to create Params representing subplan outputs.
@@ -1490,13 +1532,14 @@ simplify_EXISTS_query(Query *query)
 {
 	/*
 	 * We don't try to simplify at all if the query uses set operations,
-	 * aggregates, modifying CTEs, HAVING, LIMIT/OFFSET, or FOR UPDATE/SHARE;
-	 * none of these seem likely in normal usage and their possible effects
-	 * are complex.
+	 * aggregates, grouping sets, modifying CTEs, HAVING, LIMIT/OFFSET, or FOR
+	 * UPDATE/SHARE; none of these seem likely in normal usage and their
+	 * possible effects are complex.
 	 */
 	if (query->commandType != CMD_SELECT ||
 		query->setOperations ||
 		query->hasAggs ||
+		query->groupingSets ||
 		query->hasWindowFuncs ||
 		query->hasModifyingCTE ||
 		query->havingQual ||
@@ -1813,6 +1856,11 @@ replace_correlation_vars_mutator(Node *node, PlannerInfo *root)
 		if (((Aggref *) node)->agglevelsup > 0)
 			return (Node *) replace_outer_agg(root, (Aggref *) node);
 	}
+	if (IsA(node, Grouping))
+	{
+		if (((Grouping *) node)->agglevelsup > 0)
+			return (Node *) replace_outer_grouping(root, (Grouping *) node);
+	}
 	return expression_tree_mutator(node,
 								   replace_correlation_vars_mutator,
 								   (void *) root);
diff --git a/src/backend/optimizer/prep/prepjointree.c b/src/backend/optimizer/prep/prepjointree.c
index 9cb1378..cb8aeb6 100644
--- a/src/backend/optimizer/prep/prepjointree.c
+++ b/src/backend/optimizer/prep/prepjointree.c
@@ -1297,6 +1297,7 @@ is_simple_subquery(Query *subquery, RangeTblEntry *rte,
 	if (subquery->hasAggs ||
 		subquery->hasWindowFuncs ||
 		subquery->groupClause ||
+		subquery->groupingSets ||
 		subquery->havingQual ||
 		subquery->sortClause ||
 		subquery->distinctClause ||
diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c
index 0410fdd..3c71d7f 100644
--- a/src/backend/optimizer/prep/prepunion.c
+++ b/src/backend/optimizer/prep/prepunion.c
@@ -268,13 +268,15 @@ recurse_set_operations(Node *setOp, PlannerInfo *root,
 		 */
 		if (pNumGroups)
 		{
-			if (subquery->groupClause || subquery->distinctClause ||
+			if (subquery->groupClause || subquery->groupingSets ||
+				subquery->distinctClause ||
 				subroot->hasHavingQual || subquery->hasAggs)
 				*pNumGroups = subplan->plan_rows;
 			else
 				*pNumGroups = estimate_num_groups(subroot,
 								get_tlist_exprs(subquery->targetList, false),
-												  subplan->plan_rows);
+												  subplan->plan_rows,
+												  NULL);
 		}
 
 		/*
@@ -771,6 +773,7 @@ make_union_unique(SetOperationStmt *op, Plan *plan,
 								 extract_grouping_cols(groupList,
 													   plan->targetlist),
 								 extract_grouping_ops(groupList),
+								 NIL,
 								 numGroups,
 								 plan);
 		/* Hashed aggregation produces randomly-ordered results */
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 19b5cf7..1152195 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -4294,6 +4294,7 @@ inline_function(Oid funcid, Oid result_type, Oid result_collid,
 		querytree->jointree->fromlist ||
 		querytree->jointree->quals ||
 		querytree->groupClause ||
+		querytree->groupingSets ||
 		querytree->havingQual ||
 		querytree->windowClause ||
 		querytree->distinctClause ||
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index 319e8b2..a7bbacf 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1338,7 +1338,7 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
 	}
 
 	/* Estimate number of output rows */
-	pathnode->path.rows = estimate_num_groups(root, uniq_exprs, rel->rows);
+	pathnode->path.rows = estimate_num_groups(root, uniq_exprs, rel->rows, NULL);
 	numCols = list_length(uniq_exprs);
 
 	if (all_btree)
diff --git a/src/backend/optimizer/util/tlist.c b/src/backend/optimizer/util/tlist.c
index b5c6a44..efed20a 100644
--- a/src/backend/optimizer/util/tlist.c
+++ b/src/backend/optimizer/util/tlist.c
@@ -395,6 +395,28 @@ get_sortgrouplist_exprs(List *sgClauses, List *targetList)
  *****************************************************************************/
 
 /*
+ * get_sortgroupref_clause
+ *		Find the SortGroupClause matching the given SortGroupRef index,
+ *		and return it.
+ */
+SortGroupClause *
+get_sortgroupref_clause(Index sortref, List *clauses)
+{
+	ListCell   *l;
+
+	foreach(l, clauses)
+	{
+		SortGroupClause *cl = (SortGroupClause *) lfirst(l);
+
+		if (cl->tleSortGroupRef == sortref)
+			return cl;
+	}
+
+	elog(ERROR, "ORDER/GROUP BY expression not found in list");
+	return NULL;				/* keep compiler quiet */
+}
+
+/*
  * extract_grouping_ops - make an array of the equality operator OIDs
  *		for a SortGroupClause list
  */
diff --git a/src/backend/optimizer/util/var.c b/src/backend/optimizer/util/var.c
index d4f46b8..c6faf51 100644
--- a/src/backend/optimizer/util/var.c
+++ b/src/backend/optimizer/util/var.c
@@ -564,6 +564,30 @@ pull_var_clause_walker(Node *node, pull_var_clause_context *context)
 				break;
 		}
 	}
+	else if (IsA(node, Grouping))
+	{
+		if (((Grouping *) node)->agglevelsup != 0)
+			elog(ERROR, "Upper-level GROUPING found where not expected");
+		switch (context->aggbehavior)
+		{
+			case PVC_REJECT_AGGREGATES:
+				elog(ERROR, "GROUPING found where not expected");
+				break;
+			case PVC_INCLUDE_AGGREGATES:
+				context->varlist = lappend(context->varlist, node);
+				/* we do NOT descend into the contained expression */
+				return false;
+			case PVC_RECURSE_AGGREGATES:
+				/*
+				 * we do NOT descend into the contained expression,
+				 * even if the caller asked for it, because we never
+				 * actually evaluate it - the result is driven entirely
+				 * off the associated GROUP BY clause, so we never need
+				 * to extract the actual Vars here.
+				 */
+				return false;
+		}
+	}
 	else if (IsA(node, PlaceHolderVar))
 	{
 		if (((PlaceHolderVar *) node)->phlevelsup != 0)
diff --git a/src/backend/parser/analyze.c b/src/backend/parser/analyze.c
index fb6c44c..96ef36c 100644
--- a/src/backend/parser/analyze.c
+++ b/src/backend/parser/analyze.c
@@ -968,6 +968,7 @@ transformSelectStmt(ParseState *pstate, SelectStmt *stmt)
 
 	qry->groupClause = transformGroupClause(pstate,
 											stmt->groupClause,
+											&qry->groupingSets,
 											&qry->targetList,
 											qry->sortClause,
 											EXPR_KIND_GROUP_BY,
@@ -1014,7 +1015,7 @@ transformSelectStmt(ParseState *pstate, SelectStmt *stmt)
 	qry->hasSubLinks = pstate->p_hasSubLinks;
 	qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
 	qry->hasAggs = pstate->p_hasAggs;
-	if (pstate->p_hasAggs || qry->groupClause || qry->havingQual)
+	if (pstate->p_hasAggs || qry->groupClause || qry->groupingSets || qry->havingQual)
 		parseCheckAggregates(pstate, qry);
 
 	foreach(l, stmt->lockingClause)
@@ -1474,7 +1475,7 @@ transformSetOperationStmt(ParseState *pstate, SelectStmt *stmt)
 	qry->hasSubLinks = pstate->p_hasSubLinks;
 	qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
 	qry->hasAggs = pstate->p_hasAggs;
-	if (pstate->p_hasAggs || qry->groupClause || qry->havingQual)
+	if (pstate->p_hasAggs || qry->groupClause || qry->groupingSets || qry->havingQual)
 		parseCheckAggregates(pstate, qry);
 
 	foreach(l, lockingClause)
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index b46dd7b..8f133b0 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -361,6 +361,10 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				create_generic_options alter_generic_options
 				relation_expr_list dostmt_opt_list
 
+%type <list>	group_by_list
+%type <node>	group_by_item empty_grouping_set rollup_clause cube_clause
+%type <node>	grouping_sets_clause
+
 %type <list>	opt_fdw_options fdw_options
 %type <defelt>	fdw_option
 
@@ -426,7 +430,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <list>	ExclusionConstraintList ExclusionConstraintElem
 %type <list>	func_arg_list
 %type <node>	func_arg_expr
-%type <list>	row type_list array_expr_list
+%type <list>	row explicit_row implicit_row type_list array_expr_list
 %type <node>	case_expr case_arg when_clause case_default
 %type <list>	when_clause_list
 %type <ival>	sub_type
@@ -548,7 +552,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	CLUSTER COALESCE COLLATE COLLATION COLUMN COMMENT COMMENTS COMMIT
 	COMMITTED CONCURRENTLY CONFIGURATION CONNECTION CONSTRAINT CONSTRAINTS
 	CONTENT_P CONTINUE_P CONVERSION_P COPY COST CREATE
-	CROSS CSV CURRENT_P
+	CROSS CSV CUBE CURRENT_P
 	CURRENT_CATALOG CURRENT_DATE CURRENT_ROLE CURRENT_SCHEMA
 	CURRENT_TIME CURRENT_TIMESTAMP CURRENT_USER CURSOR CYCLE
 
@@ -563,7 +567,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	FALSE_P FAMILY FETCH FILTER FIRST_P FLOAT_P FOLLOWING FOR
 	FORCE FOREIGN FORWARD FREEZE FROM FULL FUNCTION FUNCTIONS
 
-	GLOBAL GRANT GRANTED GREATEST GROUP_P
+	GLOBAL GRANT GRANTED GREATEST GROUP_P GROUPING
 
 	HANDLER HAVING HEADER_P HOLD HOUR_P
 
@@ -597,11 +601,11 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 	RANGE READ REAL REASSIGN RECHECK RECURSIVE REF REFERENCES REFRESH REINDEX
 	RELATIVE_P RELEASE RENAME REPEATABLE REPLACE REPLICA
-	RESET RESTART RESTRICT RETURNING RETURNS REVOKE RIGHT ROLE ROLLBACK
+	RESET RESTART RESTRICT RETURNING RETURNS REVOKE RIGHT ROLE ROLLBACK ROLLUP
 	ROW ROWS RULE
 
 	SAVEPOINT SCHEMA SCROLL SEARCH SECOND_P SECURITY SELECT SEQUENCE SEQUENCES
-	SERIALIZABLE SERVER SESSION SESSION_USER SET SETOF SHARE
+	SERIALIZABLE SERVER SESSION SESSION_USER SET SETS SETOF SHARE
 	SHOW SIMILAR SIMPLE SMALLINT SNAPSHOT SOME STABLE STANDALONE_P START
 	STATEMENT STATISTICS STDIN STDOUT STORAGE STRICT_P STRIP_P SUBSTRING
 	SYMMETRIC SYSID SYSTEM_P
@@ -9841,11 +9845,73 @@ first_or_next: FIRST_P								{ $$ = 0; }
 		;
 
 
+/*
+ * This syntax for group_clause tries to follow the spec quite closely.
+ * However, the spec allows only column references, not expressions,
+ * which introduces an ambiguity between implicit row constructors
+ * (a,b) and lists of column references.
+ *
+ * We handle this by using the a_expr production for what the spec calls
+ * <ordinary grouping set>, which in the spec represents either one column
+ * reference or a parenthesized list of column references. Then, we check the
+ * top node of the a_expr to see if it's an implicit RowExpr, and if so, just
+ * grab and use the list, discarding the node. (this is done in parse analysis,
+ * not here)
+ *
+ * (we abuse the row_format field of RowExpr to distinguish implicit and
+ * explicit row constructors; it's debatable if anyone sanely wants to use them
+ * in a group clause, but if they have a reason to, we make it possible.)
+ *
+ * Each item in the group_clause list is either an expression tree or a
+ * GroupingSet node of some type.
+ */
+
 group_clause:
-			GROUP_P BY expr_list					{ $$ = $3; }
+			GROUP_P BY group_by_list				{ $$ = $3; }
 			| /*EMPTY*/								{ $$ = NIL; }
 		;
 
+group_by_list:
+			group_by_item							{ $$ = list_make1($1); }
+			| group_by_list ',' group_by_item		{ $$ = lappend($1,$3); }
+		;
+
+group_by_item:
+			a_expr									{ $$ = $1; }
+			| empty_grouping_set					{ $$ = $1; }
+			| cube_clause							{ $$ = $1; }
+			| rollup_clause							{ $$ = $1; }
+			| grouping_sets_clause					{ $$ = $1; }
+		;
+
+empty_grouping_set:
+			'(' ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_EMPTY, NIL, @1);
+				}
+		;
+
+rollup_clause:
+			ROLLUP '(' expr_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_ROLLUP, $3, @1);
+				}
+		;
+
+cube_clause:
+			CUBE '(' expr_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_CUBE, $3, @1);
+				}
+		;
+
+grouping_sets_clause:
+			GROUPING SETS '(' group_by_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_SETS, $4, @1);
+				}
+		;
+
 having_clause:
 			HAVING a_expr							{ $$ = $2; }
 			| /*EMPTY*/								{ $$ = NULL; }
@@ -11424,15 +11490,33 @@ c_expr:		columnref								{ $$ = $1; }
 					n->location = @1;
 					$$ = (Node *)n;
 				}
-			| row
+			| explicit_row
 				{
 					RowExpr *r = makeNode(RowExpr);
 					r->args = $1;
 					r->row_typeid = InvalidOid;	/* not analyzed yet */
 					r->colnames = NIL;	/* to be filled in during analysis */
+					r->row_format = COERCE_EXPLICIT_CALL; /* abuse */
 					r->location = @1;
 					$$ = (Node *)r;
 				}
+			| implicit_row
+				{
+					RowExpr *r = makeNode(RowExpr);
+					r->args = $1;
+					r->row_typeid = InvalidOid;	/* not analyzed yet */
+					r->colnames = NIL;	/* to be filled in during analysis */
+					r->row_format = COERCE_IMPLICIT_CAST; /* abuse */
+					r->location = @1;
+					$$ = (Node *)r;
+				}
+			| GROUPING '(' expr_list ')'
+			  {
+				  Grouping *g = makeNode(Grouping);
+				  g->args = $3;
+				  g->location = @1;
+				  $$ = (Node *)g;
+			  }
 		;
 
 func_application: func_name '(' ')'
@@ -12182,6 +12266,13 @@ row:		ROW '(' expr_list ')'					{ $$ = $3; }
 			| '(' expr_list ',' a_expr ')'			{ $$ = lappend($2, $4); }
 		;
 
+explicit_row:	ROW '(' expr_list ')'				{ $$ = $3; }
+			| ROW '(' ')'							{ $$ = NIL; }
+		;
+
+implicit_row:	'(' expr_list ',' a_expr ')'		{ $$ = lappend($2, $4); }
+		;
+
 sub_type:	ANY										{ $$ = ANY_SUBLINK; }
 			| SOME									{ $$ = ANY_SUBLINK; }
 			| ALL									{ $$ = ALL_SUBLINK; }
@@ -13081,6 +13172,7 @@ unreserved_keyword:
 			| SERVER
 			| SESSION
 			| SET
+			| SETS
 			| SHARE
 			| SHOW
 			| SIMPLE
@@ -13157,12 +13249,14 @@ col_name_keyword:
 			| CHAR_P
 			| CHARACTER
 			| COALESCE
+			| CUBE
 			| DEC
 			| DECIMAL_P
 			| EXISTS
 			| EXTRACT
 			| FLOAT_P
 			| GREATEST
+			| GROUPING
 			| INOUT
 			| INT_P
 			| INTEGER
@@ -13178,6 +13272,7 @@ col_name_keyword:
 			| POSITION
 			| PRECISION
 			| REAL
+			| ROLLUP
 			| ROW
 			| SETOF
 			| SMALLINT
diff --git a/src/backend/parser/parse_agg.c b/src/backend/parser/parse_agg.c
index c984b7d..1c2aca1 100644
--- a/src/backend/parser/parse_agg.c
+++ b/src/backend/parser/parse_agg.c
@@ -42,7 +42,9 @@ typedef struct
 {
 	ParseState *pstate;
 	Query	   *qry;
+	PlannerInfo *root;
 	List	   *groupClauses;
+	List	   *groupClauseCommonVars;
 	bool		have_non_var_grouping;
 	List	  **func_grouped_rels;
 	int			sublevels_up;
@@ -56,11 +58,18 @@ static int check_agg_arguments(ParseState *pstate,
 static bool check_agg_arguments_walker(Node *node,
 						   check_agg_arguments_context *context);
 static void check_ungrouped_columns(Node *node, ParseState *pstate, Query *qry,
-						List *groupClauses, bool have_non_var_grouping,
+						List *groupClauses, List *groupClauseVars,
+						bool have_non_var_grouping,
 						List **func_grouped_rels);
 static bool check_ungrouped_columns_walker(Node *node,
 							   check_ungrouped_columns_context *context);
-
+static void finalize_grouping_exprs(Node *node, ParseState *pstate, Query *qry,
+									List *groupClauses, PlannerInfo *root,
+									bool have_non_var_grouping);
+static bool finalize_grouping_exprs_walker(Node *node,
+							   check_ungrouped_columns_context *context);
+static void check_agglevels_and_constraints(ParseState *pstate, Node *expr);
+static List *expand_groupingset_node(GroupingSet *gs);
 
 /*
  * transformAggregateCall -
@@ -96,10 +105,7 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 	List	   *tdistinct = NIL;
 	AttrNumber	attno = 1;
 	int			save_next_resno;
-	int			min_varlevel;
 	ListCell   *lc;
-	const char *err;
-	bool		errkind;
 
 	if (AGGKIND_IS_ORDERED_SET(agg->aggkind))
 	{
@@ -214,15 +220,96 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 	agg->aggorder = torder;
 	agg->aggdistinct = tdistinct;
 
+	check_agglevels_and_constraints(pstate, (Node *) agg);
+}
+
+/*
+ * transformGroupingExpr -
+ *		Transform a GROUPING expression
+ *
+ * GROUPING() behaves very much like an aggregate.  Processing of levels and
+ * nesting is done as for aggregates.  We set p_hasAggs for these expressions
+ * too.
+ */
+Node *
+transformGroupingExpr(ParseState *pstate, Grouping *p)
+{
+	ListCell   *lc;
+	List	   *args = p->args;
+	List	   *result_list = NIL;
+	Grouping   *result = makeNode(Grouping);
+
+	if (list_length(args) > 31)
+		ereport(ERROR,
+				(errcode(ERRCODE_TOO_MANY_ARGUMENTS),
+				 errmsg("GROUPING must have fewer than 32 arguments"),
+				 parser_errposition(pstate, p->location)));
+
+	foreach(lc, args)
+	{
+		Node *current_result;
+
+		current_result = transformExpr(pstate, (Node *) lfirst(lc), pstate->p_expr_kind);
+
+		/* acceptability of expressions is checked later */
+
+		result_list = lappend(result_list, current_result);
+	}
+
+	result->args = result_list;
+	result->location = p->location;
+
+	check_agglevels_and_constraints(pstate, (Node *) result);
+
+	return (Node *) result;
+}
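
(Editorial aside, not part of the patch.) The reason transformGroupingExpr caps GROUPING at 31 arguments is that the result is delivered as an integer bitmask: one bit per argument, leftmost argument in the most significant position, set when that column is not included in the current grouping set. A minimal sketch of that encoding, with invented names (arg_is_grouped, grouping_value); the real evaluation happens later in the executor, not here:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Sketch of the GROUPING(...) result encoding: one bit per argument,
 * leftmost argument is the most significant bit, and a bit is 1 when
 * the corresponding column is NOT part of the current grouping set.
 */
static int
grouping_value(const bool *arg_is_grouped, int nargs)
{
	int			result = 0;
	int			i;

	for (i = 0; i < nargs; i++)
	{
		result <<= 1;
		if (!arg_is_grouped[i])
			result |= 1;
	}
	return result;
}
```

So for GROUPING(a, b) in a rollup row grouped by a alone, the result is binary 01 = 1; in the grand-total row it is binary 11 = 3.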
+
+/*
+ * Aggregate functions and grouping operations (which are combined in the spec
+ * as <set function specification>) are very similar with regard to level and
+ * nesting restrictions (though we allow a lot more things than the spec does).
+ * Centralise those restrictions here.
+ */
+static void
+check_agglevels_and_constraints(ParseState *pstate, Node *expr)
+{
+	List	   *directargs = NIL;
+	List	   *args = NIL;
+	Expr	   *filter = NULL;
+	int			min_varlevel;
+	int			location = -1;
+	Index	   *p_levelsup;
+	const char *err;
+	bool		errkind;
+	bool		isAgg = IsA(expr, Aggref);
+
+	if (isAgg)
+	{
+		Aggref *agg = (Aggref *) expr;
+
+		directargs = agg->aggdirectargs;
+		args = agg->args;
+		filter = agg->aggfilter;
+		location = agg->location;
+		p_levelsup = &agg->agglevelsup;
+	}
+	else
+	{
+		Grouping *grp = (Grouping *) expr;
+
+		args = grp->args;
+		location = grp->location;
+		p_levelsup = &grp->agglevelsup;
+	}
+
 	/*
 	 * Check the arguments to compute the aggregate's level and detect
 	 * improper nesting.
 	 */
 	min_varlevel = check_agg_arguments(pstate,
-									   agg->aggdirectargs,
-									   agg->args,
-									   agg->aggfilter);
-	agg->agglevelsup = min_varlevel;
+									   directargs,
+									   args,
+									   filter);
+
+	*p_levelsup = min_varlevel;
 
 	/* Mark the correct pstate level as having aggregates */
 	while (min_varlevel-- > 0)
@@ -247,20 +334,32 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			Assert(false);		/* can't happen */
 			break;
 		case EXPR_KIND_OTHER:
-			/* Accept aggregate here; caller must throw error if wanted */
+			/* Accept aggregate/grouping here; caller must throw error if wanted */
 			break;
 		case EXPR_KIND_JOIN_ON:
 		case EXPR_KIND_JOIN_USING:
-			err = _("aggregate functions are not allowed in JOIN conditions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in JOIN conditions");
+			else
+				err = _("grouping operations are not allowed in JOIN conditions");
+
 			break;
 		case EXPR_KIND_FROM_SUBSELECT:
 			/* Should only be possible in a LATERAL subquery */
 			Assert(pstate->p_lateral_active);
-			/* Aggregate scope rules make it worth being explicit here */
-			err = _("aggregate functions are not allowed in FROM clause of their own query level");
+			/* Aggregate/grouping scope rules make it worth being explicit here */
+			if (isAgg)
+				err = _("aggregate functions are not allowed in FROM clause of their own query level");
+			else
+				err = _("grouping operations are not allowed in FROM clause of their own query level");
+
 			break;
 		case EXPR_KIND_FROM_FUNCTION:
-			err = _("aggregate functions are not allowed in functions in FROM");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in functions in FROM");
+			else
+				err = _("grouping operations are not allowed in functions in FROM");
+
 			break;
 		case EXPR_KIND_WHERE:
 			errkind = true;
@@ -278,10 +377,18 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			/* okay */
 			break;
 		case EXPR_KIND_WINDOW_FRAME_RANGE:
-			err = _("aggregate functions are not allowed in window RANGE");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in window RANGE");
+			else
+				err = _("grouping operations are not allowed in window RANGE");
+
 			break;
 		case EXPR_KIND_WINDOW_FRAME_ROWS:
-			err = _("aggregate functions are not allowed in window ROWS");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in window ROWS");
+			else
+				err = _("grouping operations are not allowed in window ROWS");
+
 			break;
 		case EXPR_KIND_SELECT_TARGET:
 			/* okay */
@@ -312,26 +419,55 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			break;
 		case EXPR_KIND_CHECK_CONSTRAINT:
 		case EXPR_KIND_DOMAIN_CHECK:
-			err = _("aggregate functions are not allowed in check constraints");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in check constraints");
+			else
+				err = _("grouping operations are not allowed in check constraints");
+
 			break;
 		case EXPR_KIND_COLUMN_DEFAULT:
 		case EXPR_KIND_FUNCTION_DEFAULT:
-			err = _("aggregate functions are not allowed in DEFAULT expressions");
+
+			if (isAgg)
+				err = _("aggregate functions are not allowed in DEFAULT expressions");
+			else
+				err = _("grouping operations are not allowed in DEFAULT expressions");
+
 			break;
 		case EXPR_KIND_INDEX_EXPRESSION:
-			err = _("aggregate functions are not allowed in index expressions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in index expressions");
+			else
+				err = _("grouping operations are not allowed in index expressions");
+
 			break;
 		case EXPR_KIND_INDEX_PREDICATE:
-			err = _("aggregate functions are not allowed in index predicates");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in index predicates");
+			else
+				err = _("grouping operations are not allowed in index predicates");
+
 			break;
 		case EXPR_KIND_ALTER_COL_TRANSFORM:
-			err = _("aggregate functions are not allowed in transform expressions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in transform expressions");
+			else
+				err = _("grouping operations are not allowed in transform expressions");
+
 			break;
 		case EXPR_KIND_EXECUTE_PARAMETER:
-			err = _("aggregate functions are not allowed in EXECUTE parameters");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in EXECUTE parameters");
+			else
+				err = _("grouping operations are not allowed in EXECUTE parameters");
+
 			break;
 		case EXPR_KIND_TRIGGER_WHEN:
-			err = _("aggregate functions are not allowed in trigger WHEN conditions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in trigger WHEN conditions");
+			else
+				err = _("grouping operations are not allowed in trigger WHEN conditions");
+
 			break;
 
 			/*
@@ -342,18 +478,22 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			 * which is sane anyway.
 			 */
 	}
+
 	if (err)
 		ereport(ERROR,
 				(errcode(ERRCODE_GROUPING_ERROR),
 				 errmsg_internal("%s", err),
-				 parser_errposition(pstate, agg->location)));
+				 parser_errposition(pstate, location)));
+
 	if (errkind)
 		ereport(ERROR,
 				(errcode(ERRCODE_GROUPING_ERROR),
-		/* translator: %s is name of a SQL construct, eg GROUP BY */
-				 errmsg("aggregate functions are not allowed in %s",
+				 /* translator: %s is name of a SQL construct, eg GROUP BY */
+				 errmsg(isAgg
+						? "aggregate functions are not allowed in %s"
+						: "grouping operations are not allowed in %s",
 						ParseExprKindName(pstate->p_expr_kind)),
-				 parser_errposition(pstate, agg->location)));
+				 parser_errposition(pstate, location)));
 }
 
 /*
@@ -507,6 +647,21 @@ check_agg_arguments_walker(Node *node,
 		/* no need to examine args of the inner aggregate */
 		return false;
 	}
+	if (IsA(node, Grouping))
+	{
+		int			agglevelsup = ((Grouping *) node)->agglevelsup;
+
+		/* convert levelsup to frame of reference of original query */
+		agglevelsup -= context->sublevels_up;
+		/* ignore local grouping operations of subqueries */
+		if (agglevelsup >= 0)
+		{
+			if (context->min_agglevel < 0 ||
+				context->min_agglevel > agglevelsup)
+				context->min_agglevel = agglevelsup;
+		}
+		/* Continue and descend into subtree */
+	}
 	/* We can throw error on sight for a window function */
 	if (IsA(node, WindowFunc))
 		ereport(ERROR,
@@ -527,6 +682,7 @@ check_agg_arguments_walker(Node *node,
 		context->sublevels_up--;
 		return result;
 	}
+
 	return expression_tree_walker(node,
 								  check_agg_arguments_walker,
 								  (void *) context);
@@ -770,17 +926,57 @@ transformWindowFuncCall(ParseState *pstate, WindowFunc *wfunc,
 void
 parseCheckAggregates(ParseState *pstate, Query *qry)
 {
+	List       *gset_common = NIL;
 	List	   *groupClauses = NIL;
+	List	   *groupClauseCommonVars = NIL;
 	bool		have_non_var_grouping;
 	List	   *func_grouped_rels = NIL;
 	ListCell   *l;
 	bool		hasJoinRTEs;
 	bool		hasSelfRefRTEs;
-	PlannerInfo *root;
+	PlannerInfo *root = NULL;
 	Node	   *clause;
 
 	/* This should only be called if we found aggregates or grouping */
-	Assert(pstate->p_hasAggs || qry->groupClause || qry->havingQual);
+	Assert(pstate->p_hasAggs || qry->groupClause || qry->havingQual || qry->groupingSets);
+
+	/*
+	 * If we have grouping sets, expand them and find the intersection of all
+	 * sets.
+	 */
+	if (qry->groupingSets)
+	{
+		/*
+		 * The limit of 4096 is arbitrary and exists simply to avoid resource
+		 * issues from pathological constructs.
+		 */
+		List *gsets = expand_grouping_sets(qry->groupingSets, 4096);
+
+		if (!gsets)
+			ereport(ERROR,
+					(errcode(ERRCODE_STATEMENT_TOO_COMPLEX),
+					 errmsg("too many grouping sets present (max 4096)"),
+					 parser_errposition(pstate,
+										qry->groupClause
+										? exprLocation((Node *) qry->groupClause)
+										: exprLocation((Node *) qry->groupingSets))));
+
+		/*
+		 * The intersection will often be empty, so help things along by
+		 * seeding the intersect with the smallest set.
+		 */
+		gset_common = llast(gsets);
+
+		if (gset_common)
+		{
+			foreach(l, gsets)
+			{
+				gset_common = list_intersection_int(gset_common, lfirst(l));
+				if (!gset_common)
+					break;
+			}
+		}
+	}
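The intersection logic above can be sketched in a few lines of Python (an illustrative model only; the names are ours, not the patch's): seed with the smallest set, which is the last one since the expanded list comes back sorted longest-first, and bail out as soon as the intersection goes empty.

```python
def common_groups(gsets):
    """Intersect all grouping sets.  gsets is sorted longest-first, so
    the last element is the smallest set and makes the cheapest seed."""
    if not gsets:
        return []
    common = list(gsets[-1])
    for s in gsets:
        common = [c for c in common if c in s]
        if not common:
            break  # the intersection can only shrink, so stop early
    return common
```

The early exit matters because, as the comment notes, the intersection is often empty (e.g. for CUBE, which always includes the empty set).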
 
 	/*
 	 * Scan the range table to see if there are JOIN or self-reference CTE
@@ -800,15 +996,19 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 	/*
 	 * Build a list of the acceptable GROUP BY expressions for use by
 	 * check_ungrouped_columns().
+	 *
+	 * We get the TLE, not just the expr, because GROUPING wants to know
+	 * the sortgroupref.
 	 */
 	foreach(l, qry->groupClause)
 	{
 		SortGroupClause *grpcl = (SortGroupClause *) lfirst(l);
-		Node	   *expr;
+		TargetEntry	   *expr;
 
-		expr = get_sortgroupclause_expr(grpcl, qry->targetList);
+		expr = get_sortgroupclause_tle(grpcl, qry->targetList);
 		if (expr == NULL)
 			continue;			/* probably cannot happen */
+
 		groupClauses = lcons(expr, groupClauses);
 	}
 
@@ -830,21 +1030,28 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 		groupClauses = (List *) flatten_join_alias_vars(root,
 													  (Node *) groupClauses);
 	}
-	else
-		root = NULL;			/* keep compiler quiet */
 
 	/*
 	 * Detect whether any of the grouping expressions aren't simple Vars; if
 	 * they're all Vars then we don't have to work so hard in the recursive
 	 * scans.  (Note we have to flatten aliases before this.)
+	 *
+	 * Track Vars that are included in all grouping sets separately in
+	 * groupClauseCommonVars, since these are the only ones we can use to check
+	 * for functional dependencies.
 	 */
 	have_non_var_grouping = false;
 	foreach(l, groupClauses)
 	{
-		if (!IsA((Node *) lfirst(l), Var))
+		TargetEntry *tle = lfirst(l);
+		if (!IsA(tle->expr, Var))
 		{
 			have_non_var_grouping = true;
-			break;
+		}
+		else if (!qry->groupingSets
+				 || list_member_int(gset_common, tle->ressortgroupref))
+		{
+			groupClauseCommonVars = lappend(groupClauseCommonVars, tle->expr);
 		}
 	}
 
@@ -855,19 +1062,30 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 	 * this will also find ungrouped variables that came from ORDER BY and
 	 * WINDOW clauses.  For that matter, it's also going to examine the
 	 * grouping expressions themselves --- but they'll all pass the test ...
+	 *
+	 * We also finalize GROUPING expressions, but for that we need to traverse
+	 * the original (unflattened) clause in order to modify nodes.
 	 */
 	clause = (Node *) qry->targetList;
+	finalize_grouping_exprs(clause, pstate, qry,
+							groupClauses, root,
+							have_non_var_grouping);
 	if (hasJoinRTEs)
 		clause = flatten_join_alias_vars(root, clause);
 	check_ungrouped_columns(clause, pstate, qry,
-							groupClauses, have_non_var_grouping,
+							groupClauses, groupClauseCommonVars,
+							have_non_var_grouping,
 							&func_grouped_rels);
 
 	clause = (Node *) qry->havingQual;
+	finalize_grouping_exprs(clause, pstate, qry,
+							groupClauses, root,
+							have_non_var_grouping);
 	if (hasJoinRTEs)
 		clause = flatten_join_alias_vars(root, clause);
 	check_ungrouped_columns(clause, pstate, qry,
-							groupClauses, have_non_var_grouping,
+							groupClauses, groupClauseCommonVars,
+							have_non_var_grouping,
 							&func_grouped_rels);
 
 	/*
@@ -904,14 +1122,17 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
  */
 static void
 check_ungrouped_columns(Node *node, ParseState *pstate, Query *qry,
-						List *groupClauses, bool have_non_var_grouping,
+						List *groupClauses, List *groupClauseCommonVars,
+						bool have_non_var_grouping,
 						List **func_grouped_rels)
 {
 	check_ungrouped_columns_context context;
 
 	context.pstate = pstate;
 	context.qry = qry;
+	context.root = NULL;
 	context.groupClauses = groupClauses;
+	context.groupClauseCommonVars = groupClauseCommonVars;
 	context.have_non_var_grouping = have_non_var_grouping;
 	context.func_grouped_rels = func_grouped_rels;
 	context.sublevels_up = 0;
@@ -965,6 +1186,16 @@ check_ungrouped_columns_walker(Node *node,
 			return false;
 	}
 
+	if (IsA(node, Grouping))
+	{
+		Grouping *grp = (Grouping *) node;
+
+		/* We handled Grouping separately; no need to recheck at this level. */
+
+		if ((int) grp->agglevelsup >= context->sublevels_up)
+			return false;
+	}
+
 	/*
 	 * If we have any GROUP BY items that are not simple Vars, check to see if
 	 * subexpression as a whole matches any GROUP BY item. We need to do this
@@ -976,7 +1207,9 @@ check_ungrouped_columns_walker(Node *node,
 	{
 		foreach(gl, context->groupClauses)
 		{
-			if (equal(node, lfirst(gl)))
+			TargetEntry *tle = lfirst(gl);
+
+			if (equal(node, tle->expr))
 				return false;	/* acceptable, do not descend more */
 		}
 	}
@@ -1003,13 +1236,15 @@ check_ungrouped_columns_walker(Node *node,
 		{
 			foreach(gl, context->groupClauses)
 			{
-				Var		   *gvar = (Var *) lfirst(gl);
+				Var		   *gvar = (Var *) ((TargetEntry *) lfirst(gl))->expr;
 
 				if (IsA(gvar, Var) &&
 					gvar->varno == var->varno &&
 					gvar->varattno == var->varattno &&
 					gvar->varlevelsup == 0)
+				{
 					return false;		/* acceptable, we're okay */
+				}
 			}
 		}
 
@@ -1040,7 +1275,7 @@ check_ungrouped_columns_walker(Node *node,
 			if (check_functional_grouping(rte->relid,
 										  var->varno,
 										  0,
-										  context->groupClauses,
+										  context->groupClauseCommonVars,
 										  &context->qry->constraintDeps))
 			{
 				*context->func_grouped_rels =
@@ -1085,6 +1320,396 @@ check_ungrouped_columns_walker(Node *node,
 }
 
 /*
+ * finalize_grouping_exprs -
+ *	  Scan the given expression tree for GROUPING() and related calls,
+ *	  and validate and process their arguments.
+ *
+ * This is split out from check_ungrouped_columns above because it needs
+ * to modify the nodes (which it does in-place, not via a mutator) while
+ * check_ungrouped_columns may see only a copy of the original thanks to
+ * flattening of join alias vars. So here, we flatten each individual
+ * GROUPING argument as we see it before comparing it.
+ */
+static void
+finalize_grouping_exprs(Node *node, ParseState *pstate, Query *qry,
+						List *groupClauses, PlannerInfo *root,
+						bool have_non_var_grouping)
+{
+	check_ungrouped_columns_context context;
+
+	context.pstate = pstate;
+	context.qry = qry;
+	context.root = root;
+	context.groupClauses = groupClauses;
+	context.groupClauseCommonVars = NIL;
+	context.have_non_var_grouping = have_non_var_grouping;
+	context.func_grouped_rels = NULL;
+	context.sublevels_up = 0;
+	context.in_agg_direct_args = false;
+	finalize_grouping_exprs_walker(node, &context);
+}
+
+static bool
+finalize_grouping_exprs_walker(Node *node,
+							   check_ungrouped_columns_context *context)
+{
+	ListCell   *gl;
+
+	if (node == NULL)
+		return false;
+	if (IsA(node, Const) ||
+		IsA(node, Param))
+		return false;			/* constants are always acceptable */
+
+	if (IsA(node, Aggref))
+	{
+		Aggref	   *agg = (Aggref *) node;
+
+		if ((int) agg->agglevelsup == context->sublevels_up)
+		{
+			/*
+			 * If we find an aggregate call of the original level, do not
+			 * recurse into its normal arguments, ORDER BY arguments, or
+			 * filter; GROUPING exprs of this level are not allowed there. But
+			 * check direct arguments as though they weren't in an aggregate.
+			 */
+			bool		result;
+
+			Assert(!context->in_agg_direct_args);
+			context->in_agg_direct_args = true;
+			result = finalize_grouping_exprs_walker((Node *) agg->aggdirectargs,
+													context);
+			context->in_agg_direct_args = false;
+			return result;
+		}
+
+		/*
+		 * We can skip recursing into aggregates of higher levels altogether,
+		 * since they could not possibly contain exprs of concern to us (see
+		 * transformAggregateCall).  We do need to look at aggregates of lower
+		 * levels, however.
+		 */
+		if ((int) agg->agglevelsup > context->sublevels_up)
+			return false;
+	}
+
+	if (IsA(node, Grouping))
+	{
+		Grouping *grp = (Grouping *) node;
+
+		/*
+		 * We only need to check Grouping nodes at the exact level to which
+		 * they belong, since they cannot mix levels in arguments.
+		 */
+
+		if ((int) grp->agglevelsup == context->sublevels_up)
+		{
+			ListCell  *lc;
+			List 	  *ref_list = NIL;
+
+			foreach(lc, grp->args)
+			{
+				Node   *expr = lfirst(lc);
+				Index	ref = 0;
+
+				if (context->root)
+					expr = flatten_join_alias_vars(context->root, expr);
+
+				/*
+				 * Each expression must match a grouping entry at the current
+				 * query level. Unlike the general expression case, we don't
+				 * allow functional dependencies or outer references.
+				 */
+
+				if (IsA(expr, Var))
+				{
+					Var *var = (Var *) expr;
+
+					if (var->varlevelsup == context->sublevels_up)
+					{
+						foreach(gl, context->groupClauses)
+						{
+							TargetEntry *tle = lfirst(gl);
+							Var	  		*gvar = (Var *) tle->expr;
+
+							if (IsA(gvar, Var) &&
+								gvar->varno == var->varno &&
+								gvar->varattno == var->varattno &&
+								gvar->varlevelsup == 0)
+							{
+								ref = tle->ressortgroupref;
+								break;
+							}
+						}
+					}
+				}
+				else if (context->have_non_var_grouping
+						 && context->sublevels_up == 0)
+				{
+					foreach(gl, context->groupClauses)
+					{
+						TargetEntry *tle = lfirst(gl);
+
+						if (equal(expr, tle->expr))
+						{
+							ref = tle->ressortgroupref;
+							break;
+						}
+					}
+				}
+
+				if (ref == 0)
+					ereport(ERROR,
+							(errcode(ERRCODE_GROUPING_ERROR),
+							 errmsg("arguments to GROUPING must be grouping expressions of the associated query level"),
+							 parser_errposition(context->pstate,
+												exprLocation(expr))));
+
+				ref_list = lappend_int(ref_list, ref);
+			}
+
+			grp->refs = ref_list;
+		}
+
+		if ((int) grp->agglevelsup > context->sublevels_up)
+			return false;
+	}
+
+	if (IsA(node, Query))
+	{
+		/* Recurse into subselects */
+		bool		result;
+
+		context->sublevels_up++;
+		result = query_tree_walker((Query *) node,
+								   finalize_grouping_exprs_walker,
+								   (void *) context,
+								   0);
+		context->sublevels_up--;
+		return result;
+	}
+	return expression_tree_walker(node, finalize_grouping_exprs_walker,
+								  (void *) context);
+}
+
+
+/*
+ * Given a GroupingSet node, expand it and return a list of lists.
+ *
+ * For EMPTY nodes, return a list of one empty list.
+ *
+ * For SIMPLE nodes, return a list of one list, which is the node content.
+ *
+ * For CUBE and ROLLUP nodes, return a list of the expansions.
+ *
+ * For SET nodes, recursively expand contained CUBE and ROLLUP.
+ */
+static List*
+expand_groupingset_node(GroupingSet *gs)
+{
+	List * result = NIL;
+
+	switch (gs->kind)
+	{
+		case GROUPING_SET_EMPTY:
+			result = list_make1(NIL);
+			break;
+
+		case GROUPING_SET_SIMPLE:
+			result = list_make1(gs->content);
+			break;
+
+		case GROUPING_SET_ROLLUP:
+			{
+				List	   *rollup_val = gs->content;
+				ListCell   *lc;
+				int			curgroup_size = list_length(gs->content);
+
+				while (curgroup_size > 0)
+				{
+					List   *current_result = NIL;
+					int		i = curgroup_size;
+
+					foreach(lc, rollup_val)
+					{
+						GroupingSet *gs_current = (GroupingSet *) lfirst(lc);
+
+						Assert(gs_current->kind == GROUPING_SET_SIMPLE);
+
+						current_result
+							= list_concat(current_result,
+										  list_copy(gs_current->content));
+
+						/* If we are done with making the current group, break */
+						if (--i == 0)
+							break;
+					}
+
+					result = lappend(result, current_result);
+					--curgroup_size;
+				}
+
+				result = lappend(result, NIL);
+			}
+			break;
+
+		case GROUPING_SET_CUBE:
+			{
+				List   *cube_list = gs->content;
+				int		number_bits = list_length(cube_list);
+				uint32	num_sets;
+				uint32	i;
+
+				/* parser should cap this much lower */
+				Assert(number_bits < 31);
+
+				num_sets = (1U << number_bits);
+
+				for (i = 0; i < num_sets; i++)
+				{
+					List *current_result = NIL;
+					ListCell *lc;
+					uint32 mask = 1U;
+
+					foreach(lc, cube_list)
+					{
+						GroupingSet *gs_current = (GroupingSet *) lfirst(lc);
+
+						Assert(gs_current->kind == GROUPING_SET_SIMPLE);
+
+						if (mask & i)
+						{
+							current_result
+								= list_concat(current_result,
+											  list_copy(gs_current->content));
+						}
+
+						mask <<= 1;
+					}
+
+					result = lappend(result, current_result);
+				}
+			}
+			break;
+
+		case GROUPING_SET_SETS:
+			{
+				ListCell   *lc;
+
+				foreach(lc, gs->content)
+				{
+					List *current_result = expand_groupingset_node(lfirst(lc));
+
+					result = list_concat(result, current_result);
+				}
+			}
+			break;
+	}
+
+	return result;
+}
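The expansion rules this function implements can be sketched in Python (an illustrative model over plain column names, not the actual List-of-GroupingSet structures): ROLLUP produces prefixes longest-first plus the empty set, and CUBE produces all subsets, with element j of the input included in subset i when bit j of i is set, matching the C loop's order.

```python
def expand_rollup(items):
    """ROLLUP(a, b, c) -> (a, b, c), (a, b), (a), () -- longest first."""
    return [items[:k] for k in range(len(items), 0, -1)] + [[]]

def expand_cube(items):
    """CUBE(a, ...) -> all 2**n subsets, in bitmask order."""
    n = len(items)
    assert n < 31  # the parser caps CUBE much lower (12 elements)
    return [[x for j, x in enumerate(items) if i & (1 << j)]
            for i in range(1 << n)]
```

Note that both expansions include the empty grouping set, which is why a query with any CUBE or ROLLUP always has a grand-total row.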
+
+static int
+cmp_list_len_desc(const void *a, const void *b)
+{
+	int			la = list_length(*(List *const *) a);
+	int			lb = list_length(*(List *const *) b);
+
+	return (la > lb) ? -1 : (la == lb) ? 0 : 1;
+}
+
+/*
+ * Expand a groupingSets clause to a flat list of grouping sets.
+ * The returned list is sorted by length, longest sets first.
+ *
+ * This is mainly for the planner, but we use it here too to do
+ * some consistency checks.
+ */
+
+List *
+expand_grouping_sets(List *groupingSets, int limit)
+{
+	List	   *expanded_groups = NIL;
+	List       *result = NIL;
+	double		numsets = 1;
+	ListCell   *lc;
+
+	if (groupingSets == NIL)
+		return NIL;
+
+	foreach(lc, groupingSets)
+	{
+		List *current_result = NIL;
+		GroupingSet *gs = lfirst(lc);
+
+		current_result = expand_groupingset_node(gs);
+
+		Assert(current_result != NIL);
+
+		numsets *= list_length(current_result);
+
+		if (limit >= 0 && numsets > limit)
+			return NIL;
+
+		expanded_groups = lappend(expanded_groups, current_result);
+	}
+
+	/*
+	 * Do cartesian product between sublists of expanded_groups.
+	 * While at it, remove any duplicate elements from individual
+	 * grouping sets (we must NOT change the number of sets though)
+	 */
+
+	foreach(lc, (List *) linitial(expanded_groups))
+	{
+		result = lappend(result, list_union_int(NIL, (List *) lfirst(lc)));
+	}
+
+	for_each_cell(lc, lnext(list_head(expanded_groups)))
+	{
+		List	   *p = lfirst(lc);
+		List	   *new_result = NIL;
+		ListCell   *lc2;
+
+		foreach(lc2, result)
+		{
+			List	   *q = lfirst(lc2);
+			ListCell   *lc3;
+
+			foreach(lc3, p)
+			{
+				new_result = lappend(new_result,
+									 list_union_int(q, (List *) lfirst(lc3)));
+			}
+		}
+		result = new_result;
+	}
+
+	if (list_length(result) > 1)
+	{
+		int		result_len = list_length(result);
+		List  **buf = palloc(sizeof(List*) * result_len);
+		List  **ptr = buf;
+
+		foreach(lc, result)
+		{
+			*ptr++ = lfirst(lc);
+		}
+
+		qsort(buf, result_len, sizeof(List*), cmp_list_len_desc);
+
+		result = NIL;
+		ptr = buf;
+
+		while (result_len-- > 0)
+			result = lappend(result, *ptr++);
+
+		pfree(buf);
+	}
+
+	return result;
+}
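The cross-product step above, including the per-set deduplication and the longest-first ordering, can be modeled in Python (a sketch under our own naming, not the C list API):

```python
from itertools import product

def expand_grouping_sets(clauses, limit=4096):
    """clauses: one entry per GROUP BY item, each already expanded to a
    list of grouping sets (lists of column refs).  Returns the cartesian
    product, with duplicates removed inside each combined set (the
    number of sets must not change), sorted longest-first."""
    if not clauses:
        return []
    numsets = 1
    for c in clauses:
        numsets *= len(c)
        if numsets > limit:
            return []           # caller reports "too many grouping sets"
    result = []
    for combo in product(*clauses):
        merged = []
        for s in combo:
            for col in s:
                if col not in merged:   # dedupe within one set only
                    merged.append(col)
        result.append(merged)
    result.sort(key=len, reverse=True)  # longest sets first
    return result
```

The second test below shows why duplicates are removed per set but the sets themselves are kept: GROUP BY a, GROUPING SETS(a, ()) still yields two grouping sets, and collapsing them would change the number of output rows.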
+
+/*
  * get_aggregate_argtypes
  *	Identify the specific datatypes passed to an aggregate call.
  *
diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c
index 4931dca..5d02579 100644
--- a/src/backend/parser/parse_clause.c
+++ b/src/backend/parser/parse_clause.c
@@ -36,6 +36,7 @@
 #include "utils/guc.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
+#include "miscadmin.h"
 
 
 /* Convenience macro for the most common makeNamespaceItem() case */
@@ -1663,40 +1664,163 @@ findTargetlistEntrySQL99(ParseState *pstate, Node *node, List **tlist,
 	return target_result;
 }
 
+
 /*
- * transformGroupClause -
- *	  transform a GROUP BY clause
+ * Flatten out parenthesized sublists in grouping lists, and some cases
+ * of nested grouping sets.
  *
- * GROUP BY items will be added to the targetlist (as resjunk columns)
- * if not already present, so the targetlist must be passed by reference.
+ * Inside a grouping set (ROLLUP, CUBE, or GROUPING SETS), we expect the
+ * content to be nested no more than 2 deep: i.e. ROLLUP((a,b),(c,d)) is
+ * ok, but ROLLUP((a,(b,c)),d) is flattened to ((a,b,c),d), which we then
+ * normalize to ((a,b,c),(d)).
  *
- * This is also used for window PARTITION BY clauses (which act almost the
- * same, but are always interpreted per SQL99 rules).
+ * CUBE or ROLLUP can be nested inside GROUPING SETS (but not the reverse),
+ * and we leave that alone if we find it. But if we see GROUPING SETS inside
+ * GROUPING SETS, we can flatten and normalize as follows:
+ *   GROUPING SETS (a, (b,c), GROUPING SETS ((c,d),(e)), (f,g))
+ * becomes
+ *   GROUPING SETS ((a), (b,c), (c,d), (e), (f,g))
+ *
+ * This is per the spec's syntax transformations, but these are the only such
+ * transformations we do in parse analysis, so that queries retain the
+ * originally specified grouping set syntax for CUBE and ROLLUP as much as
+ * possible when deparsed. (Full expansion of the result into a list of
+ * grouping sets is left to the planner.)
+ *
+ * When we're done, the resulting list should contain only these possible
+ * elements:
+ *   - an expression
+ *   - a CUBE or ROLLUP with a list of expressions nested 2 deep
+ *   - a GROUPING SET containing any of:
+ *      - expression lists
+ *      - empty grouping sets
+ *      - CUBE or ROLLUP nodes with lists nested 2 deep
+ * The result is a new list, but it does not deep-copy the old nodes except
+ * for GroupingSet nodes.
+ *
+ * As a side effect, flag whether the list has any GroupingSet nodes.
  */
-List *
-transformGroupClause(ParseState *pstate, List *grouplist,
-					 List **targetlist, List *sortClause,
-					 ParseExprKind exprKind, bool useSQL99)
+
+static Node *
+flatten_grouping_sets(Node *expr, bool toplevel, bool *hasGroupingSets)
 {
-	List	   *result = NIL;
-	ListCell   *gl;
+	/* just in case of pathological input */
+	check_stack_depth();
 
-	foreach(gl, grouplist)
+	if (expr == (Node *) NIL)
+		return (Node *) NIL;
+
+	switch (expr->type)
 	{
-		Node	   *gexpr = (Node *) lfirst(gl);
-		TargetEntry *tle;
-		bool		found = false;
+		case T_RowExpr:
+			{
+				RowExpr *r = (RowExpr *) expr;
+				if (r->row_format == COERCE_IMPLICIT_CAST)
+					return flatten_grouping_sets((Node *) r->args,
+												 false, NULL);
+			}
+			break;
+		case T_GroupingSet:
+			{
+				GroupingSet *gset = (GroupingSet *) expr;
+				ListCell   *l2;
+				List	   *result_set = NIL;
 
-		if (useSQL99)
-			tle = findTargetlistEntrySQL99(pstate, gexpr,
-										   targetlist, exprKind);
-		else
-			tle = findTargetlistEntrySQL92(pstate, gexpr,
-										   targetlist, exprKind);
+				if (hasGroupingSets)
+					*hasGroupingSets = true;
 
-		/* Eliminate duplicates (GROUP BY x, x) */
-		if (targetIsInSortList(tle, InvalidOid, result))
-			continue;
+				/*
+				 * At the top level, we skip over all empty grouping sets; the
+				 * caller can supply the canonical GROUP BY () if nothing is left.
+				 */
+
+				if (toplevel && gset->kind == GROUPING_SET_EMPTY)
+					return (Node *) NIL;
+
+				foreach(l2, gset->content)
+				{
+					Node   *n2 = flatten_grouping_sets(lfirst(l2), false, NULL);
+
+					result_set = lappend(result_set, n2);
+				}
+
+				/*
+				 * At top level, keep the grouping set node; but if we're in a nested
+				 * grouping set, then we need to concat the flattened result into the
+				 * outer list if it's simply nested.
+				 */
+
+				if (toplevel || (gset->kind != GROUPING_SET_SETS))
+				{
+					return (Node *) makeGroupingSet(gset->kind, result_set, gset->location);
+				}
+				else
+					return (Node *) result_set;
+			}
+		case T_List:
+			{
+				List	   *result = NIL;
+				ListCell   *l;
+
+				foreach(l, (List *)expr)
+				{
+					Node   *n = flatten_grouping_sets(lfirst(l), toplevel, hasGroupingSets);
+					if (n != (Node *) NIL)
+					{
+						if (IsA(n, List))
+							result = list_concat(result, (List *) n);
+						else
+							result = lappend(result, n);
+					}
+				}
+
+				return (Node *) result;
+			}
+		default:
+			break;
+	}
+
+	return expr;
+}
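The "GROUPING SETS inside GROUPING SETS" flattening described in the header comment can be illustrated with a toy model, where a tuple tagged 'SETS' stands in for a GroupingSet node and plain lists are expression lists (this mirrors the spec transformation only; it is not the C data structure):

```python
def flatten_sets(node):
    """Splice nested GROUPING SETS into the parent and normalize bare
    expressions a -> (a)."""
    if isinstance(node, tuple) and node[0] == 'SETS':
        out = []
        for child in node[1]:
            child = flatten_sets(child)
            if isinstance(child, tuple) and child[0] == 'SETS':
                out.extend(child[1])   # splice nested SETS inline
            else:
                out.append(child)
        return ('SETS', out)
    if not isinstance(node, list):
        return [node]                  # bare expression a -> (a)
    return node
```

Running this on the comment's example, GROUPING SETS (a, (b,c), GROUPING SETS ((c,d),(e)), (f,g)), yields GROUPING SETS ((a), (b,c), (c,d), (e), (f,g)), as the test below checks.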
+
+static Index
+transformGroupClauseExpr(List **flatresult, Bitmapset *seen_local,
+						 ParseState *pstate, Node *gexpr,
+						 List **targetlist, List *sortClause,
+						 ParseExprKind exprKind, bool useSQL99, bool toplevel)
+{
+	TargetEntry *tle;
+	bool		found = false;
+
+	if (useSQL99)
+		tle = findTargetlistEntrySQL99(pstate, gexpr,
+									   targetlist, exprKind);
+	else
+		tle = findTargetlistEntrySQL92(pstate, gexpr,
+									   targetlist, exprKind);
+
+	if (tle->ressortgroupref > 0)
+	{
+		ListCell   *sl;
+
+		/*
+		 * Eliminate duplicates (GROUP BY x, x), but only at the local level.
+		 * (Duplicates in grouping sets can affect the number of returned
+		 * rows, so can't be dropped indiscriminately.)
+		 *
+		 * Since we don't care about anything except the sortgroupref,
+		 * we can use a bitmapset rather than scanning lists.
+		 */
+		if (bms_is_member(tle->ressortgroupref, seen_local))
+			return 0;
+
+		/*
+		 * If we're already in the flat clause list, we don't need
+		 * to consider adding ourselves again.
+		 */
+		found = targetIsInSortList(tle, InvalidOid, *flatresult);
+		if (found)
+			return tle->ressortgroupref;
 
 		/*
 		 * If the GROUP BY tlist entry also appears in ORDER BY, copy operator
@@ -1708,35 +1832,263 @@ transformGroupClause(ParseState *pstate, List *grouplist,
 		 * sort step, and it allows the user to choose the equality semantics
 		 * used by GROUP BY, should she be working with a datatype that has
 		 * more than one equality operator.
+		 *
+		 * If we're in a grouping set, though, we force our requested ordering
+		 * to be NULLS LAST, because if we have any hope of using a sorted agg
+		 * for the job, we're going to be tacking on generated NULL values
+		 * after the corresponding groups. If the user demands nulls first,
+		 * another sort step is going to be inevitable, but that's the
+		 * planner's problem.
 		 */
-		if (tle->ressortgroupref > 0)
+
+		foreach(sl, sortClause)
 		{
-			ListCell   *sl;
+			SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
 
-			foreach(sl, sortClause)
+			if (sc->tleSortGroupRef == tle->ressortgroupref)
 			{
-				SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
+				SortGroupClause *grpc = copyObject(sc);
+				if (!toplevel)
+					grpc->nulls_first = false;
+				*flatresult = lappend(*flatresult, grpc);
+				found = true;
+				break;
+			}
+		}
+	}
 
-				if (sc->tleSortGroupRef == tle->ressortgroupref)
-				{
-					result = lappend(result, copyObject(sc));
-					found = true;
+	/*
+	 * If no match in ORDER BY, just add it to the result using default
+	 * sort/group semantics.
+	 */
+	if (!found)
+		*flatresult = addTargetToGroupList(pstate, tle,
+										   *flatresult, *targetlist,
+										   exprLocation(gexpr),
+										   true);
+
+	/*
+	 * One of the paths above must have assigned the tle a sortgroupref by now.
+	 */
+
+	return tle->ressortgroupref;
+}
+
+
+static List *
+transformGroupClauseList(List **flatresult,
+						 ParseState *pstate, List *list,
+						 List **targetlist, List *sortClause,
+						 ParseExprKind exprKind, bool useSQL99, bool toplevel)
+{
+	Bitmapset  *seen_local = NULL;
+	List	   *result = NIL;
+	ListCell   *gl;
+
+	foreach(gl, list)
+	{
+		Node        *gexpr = (Node *) lfirst(gl);
+
+		Index ref = transformGroupClauseExpr(flatresult,
+											 seen_local,
+											 pstate,
+											 gexpr,
+											 targetlist,
+											 sortClause,
+											 exprKind,
+											 useSQL99,
+											 toplevel);
+		if (ref > 0)
+		{
+			seen_local = bms_add_member(seen_local, ref);
+			result = lappend_int(result, ref);
+		}
+	}
+
+	return result;
+}
+
+static Node *
+transformGroupingSet(List **flatresult,
+					 ParseState *pstate, GroupingSet *gset,
+					 List **targetlist, List *sortClause,
+					 ParseExprKind exprKind, bool useSQL99, bool toplevel)
+{
+	ListCell   *gl;
+	List	   *content = NIL;
+
+	Assert(toplevel || gset->kind != GROUPING_SET_SETS);
+
+	foreach(gl, gset->content)
+	{
+		Node   *n = lfirst(gl);
+
+		if (IsA(n, List))
+		{
+			List *l = transformGroupClauseList(flatresult,
+											   pstate, (List *) n,
+											   targetlist, sortClause,
+											   exprKind, useSQL99, false);
+
+			content = lappend(content, makeGroupingSet(GROUPING_SET_SIMPLE,
+													   l,
+													   exprLocation(n)));
+		}
+		else if (IsA(n, GroupingSet))
+		{
+			GroupingSet *gset2 = (GroupingSet *) lfirst(gl);
+
+			content = lappend(content, transformGroupingSet(flatresult,
+															pstate, gset2,
+															targetlist, sortClause,
+															exprKind, useSQL99, false));
+		}
+		else
+		{
+			Index ref = transformGroupClauseExpr(flatresult,
+												 NULL,
+												 pstate,
+												 n,
+												 targetlist,
+												 sortClause,
+												 exprKind,
+												 useSQL99,
+												 false);
+
+			content = lappend(content, makeGroupingSet(GROUPING_SET_SIMPLE,
+													   list_make1_int(ref),
+													   exprLocation(n)));
+		}
+	}
+
+	/* Arbitrarily cap the size of CUBE, which has exponential growth */
+	if (gset->kind == GROUPING_SET_CUBE)
+	{
+		if (list_length(content) > 12)
+			ereport(ERROR,
+					(errcode(ERRCODE_TOO_MANY_COLUMNS),
+					 errmsg("CUBE is limited to 12 elements"),
+					 parser_errposition(pstate, gset->location)));
+	}
+
+	return (Node *) makeGroupingSet(gset->kind, content, gset->location);
+}
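The 12-element cap on CUBE ties directly to the 4096-set expansion limit used in parseCheckAggregates: CUBE(n) expands to 2^n sets, and 2^12 = 4096. A hypothetical helper showing the arithmetic for a whole GROUP BY list (kinds and counts only; not code from the patch):

```python
def num_grouping_sets(clauses):
    """Each GROUP BY item contributes a multiplicative factor:
    CUBE(n) -> 2**n sets, ROLLUP(n) -> n + 1, and a GROUPING SETS
    list with n members -> n.  The total is the product."""
    total = 1
    for kind, n in clauses:
        if kind == 'CUBE':
            total *= 2 ** n
        elif kind == 'ROLLUP':
            total *= n + 1
        else:  # 'SETS' with n explicit member sets
            total *= n
    return total
```

Because the factors multiply, even modest combinations (say, two CUBEs of 6 columns each) hit the limit, which is why both caps exist.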
+
+
+/*
+ * transformGroupClause -
+ *	  transform a GROUP BY clause
+ *
+ * GROUP BY items will be added to the targetlist (as resjunk columns)
+ * if not already present, so the targetlist must be passed by reference.
+ *
+ * This is also used for window PARTITION BY clauses (which act almost the
+ * same, but are always interpreted per SQL99 rules).
+ *
+ * Grouping sets make this a lot more complex than it was. Our goal here is
+ * twofold: we make a flat list of SortGroupClause nodes referencing each
+ * distinct expression used for grouping, with those expressions added to the
+ * targetlist if needed. At the same time, we build the groupingSets tree,
+ * which stores only ressortgrouprefs as integer lists inside GroupingSet nodes
+ * (possibly nested, but limited in depth: a GROUPING_SET_SETS node can contain
+ * nested SIMPLE, CUBE or ROLLUP nodes, but not further SETS nodes, which we
+ * flatten out; CUBE and ROLLUP nodes can contain only SIMPLE nodes).
+ *
+ * We skip much of the hard work if there are no grouping sets.
+ *
+ * One subtlety is that the groupClause list can end up empty while the
+ * groupingSets list is not; this happens if there are only empty grouping
+ * sets, or an explicit GROUP BY (). This has the same effect as specifying
+ * aggregates or a HAVING clause with no GROUP BY; the output is one row per
+ * grouping set even if the input is empty.
+ */
+List *
+transformGroupClause(ParseState *pstate, List *grouplist, List **groupingSets,
+					 List **targetlist, List *sortClause,
+					 ParseExprKind exprKind, bool useSQL99)
+{
+	List	   *result = NIL;
+	List	   *flat_grouplist;
+	List	   *gsets = NIL;
+	ListCell   *gl;
+	bool        hasGroupingSets = false;
+	Bitmapset  *seen_local = NULL;
+
+	/*
+	 * Recursively flatten implicit RowExprs. (Technically this is only
+	 * needed for GROUP BY, per the syntax rules for grouping sets, but
+	 * we do it anyway.)
+	 */
+	flat_grouplist = (List *) flatten_grouping_sets((Node *) grouplist,
+													true,
+													&hasGroupingSets);
+
+	/*
+	 * If the list is now empty, but hasGroupingSets is true, it's because
+	 * we elided redundant empty grouping sets. Restore a single empty
+	 * grouping set to leave a canonical form: GROUP BY ()
+	 */
+
+	if (flat_grouplist == NIL && hasGroupingSets)
+	{
+		flat_grouplist = list_make1(makeGroupingSet(GROUPING_SET_EMPTY,
+													NIL,
+													exprLocation((Node *) grouplist)));
+	}
+
+	foreach(gl, flat_grouplist)
+	{
+		Node        *gexpr = (Node *) lfirst(gl);
+
+		if (IsA(gexpr, GroupingSet))
+		{
+			GroupingSet *gset = (GroupingSet *) gexpr;
+
+			switch (gset->kind)
+			{
+				case GROUPING_SET_EMPTY:
+					gsets = lappend(gsets, gset);
+					break;
+				case GROUPING_SET_SIMPLE:
+					/* can't happen */
+					Assert(false);
+					break;
+				case GROUPING_SET_SETS:
+				case GROUPING_SET_CUBE:
+				case GROUPING_SET_ROLLUP:
+					gsets = lappend(gsets,
+									transformGroupingSet(&result,
+														 pstate, gset,
+														 targetlist, sortClause,
+														 exprKind, useSQL99, true));
 					break;
-				}
 			}
 		}
+		else
+		{
+			Index ref = transformGroupClauseExpr(&result, seen_local,
+												 pstate, gexpr,
+												 targetlist, sortClause,
+												 exprKind, useSQL99, true);
 
-		/*
-		 * If no match in ORDER BY, just add it to the result using default
-		 * sort/group semantics.
-		 */
-		if (!found)
-			result = addTargetToGroupList(pstate, tle,
-										  result, *targetlist,
-										  exprLocation(gexpr),
-										  true);
+			if (ref > 0)
+			{
+				seen_local = bms_add_member(seen_local, ref);
+				if (hasGroupingSets)
+					gsets = lappend(gsets,
+									makeGroupingSet(GROUPING_SET_SIMPLE,
+													list_make1_int(ref),
+													exprLocation(gexpr)));
+			}
+		}
 	}
 
+	/* parser should prevent this */
+	Assert(gsets == NIL || groupingSets != NULL);
+
+	if (groupingSets)
+		*groupingSets = gsets;
+
 	return result;
 }
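The transform above first flattens the raw grouping list, eliding redundant empty grouping sets, and then restores a single canonical empty set ("GROUP BY ()") if everything was elided. A minimal Python sketch of that canonicalization, using tuples in place of the actual GroupingSet nodes (all names here are illustrative, not the C API):

```python
# Toy model of the flatten/canonicalize step: grouping items are plain
# column names (strings) or ("SETS" | "ROLLUP" | "CUBE" | "EMPTY",
# [children]) tuples standing in for GroupingSet nodes.
def flatten_grouplist(items):
    out = []
    has_grouping_sets = False
    for item in items:
        if isinstance(item, tuple):
            has_grouping_sets = True
            kind, _content = item
            if kind == "EMPTY":
                continue          # elide redundant empty grouping sets
            out.append(item)
        else:
            out.append(item)      # ordinary grouping expression
    # If everything was elided but grouping sets were present, restore
    # one empty set so the canonical form is GROUP BY ().
    if not out and has_grouping_sets:
        out = [("EMPTY", [])]
    return out, has_grouping_sets
```

For example, `GROUP BY GROUPING SETS ((), ())` reduces to a single empty set, while a plain `GROUP BY a, b` passes through untouched with `has_grouping_sets` false.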
 
@@ -1841,6 +2193,7 @@ transformWindowDefinitions(ParseState *pstate,
 										  true /* force SQL99 rules */ );
 		partitionClause = transformGroupClause(pstate,
 											   windef->partitionClause,
+											   NULL,
 											   targetlist,
 											   orderClause,
 											   EXPR_KIND_WINDOW_PARTITION,
diff --git a/src/backend/parser/parse_expr.c b/src/backend/parser/parse_expr.c
index 4a8aaf6..0bb8856 100644
--- a/src/backend/parser/parse_expr.c
+++ b/src/backend/parser/parse_expr.c
@@ -32,6 +32,7 @@
 #include "parser/parse_relation.h"
 #include "parser/parse_target.h"
 #include "parser/parse_type.h"
+#include "parser/parse_agg.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
 #include "utils/xml.h"
@@ -166,6 +167,10 @@ transformExprRecurse(ParseState *pstate, Node *expr)
 										InvalidOid, InvalidOid, -1);
 			break;
 
+		case T_Grouping:
+			result = transformGroupingExpr(pstate, (Grouping *) expr);
+			break;
+
 		case T_TypeCast:
 			{
 				TypeCast   *tc = (TypeCast *) expr;
diff --git a/src/backend/parser/parse_target.c b/src/backend/parser/parse_target.c
index 328e0c6..1e48346 100644
--- a/src/backend/parser/parse_target.c
+++ b/src/backend/parser/parse_target.c
@@ -1628,6 +1628,9 @@ FigureColnameInternal(Node *node, char **name)
 				}
 			}
 			break;
+		case T_Grouping:
+			*name = "grouping";
+			return 2;
 		case T_A_Indirection:
 			{
 				A_Indirection *ind = (A_Indirection *) node;
diff --git a/src/backend/rewrite/rewriteHandler.c b/src/backend/rewrite/rewriteHandler.c
index cb65c05..0c93e1b 100644
--- a/src/backend/rewrite/rewriteHandler.c
+++ b/src/backend/rewrite/rewriteHandler.c
@@ -2063,7 +2063,7 @@ view_query_is_auto_updatable(Query *viewquery, bool check_cols)
 	if (viewquery->distinctClause != NIL)
 		return gettext_noop("Views containing DISTINCT are not automatically updatable.");
 
-	if (viewquery->groupClause != NIL)
+	if (viewquery->groupClause != NIL || viewquery->groupingSets)
 		return gettext_noop("Views containing GROUP BY are not automatically updatable.");
 
 	if (viewquery->havingQual != NULL)
diff --git a/src/backend/rewrite/rewriteManip.c b/src/backend/rewrite/rewriteManip.c
index fb20314..02099a4 100644
--- a/src/backend/rewrite/rewriteManip.c
+++ b/src/backend/rewrite/rewriteManip.c
@@ -92,6 +92,11 @@ contain_aggs_of_level_walker(Node *node,
 			return true;		/* abort the tree traversal and return true */
 		/* else fall through to examine argument */
 	}
+	if (IsA(node, Grouping))
+	{
+		if (((Grouping *) node)->agglevelsup == context->sublevels_up)
+			return true;
+	}
 	if (IsA(node, Query))
 	{
 		/* Recurse into subselects */
@@ -157,6 +162,15 @@ locate_agg_of_level_walker(Node *node,
 		}
 		/* else fall through to examine argument */
 	}
+	if (IsA(node, Grouping))
+	{
+		if (((Grouping *) node)->agglevelsup == context->sublevels_up &&
+			((Grouping *) node)->location >= 0)
+		{
+			context->agg_location = ((Grouping *) node)->location;
+			return true;		/* abort the tree traversal and return true */
+		}
+	}
 	if (IsA(node, Query))
 	{
 		/* Recurse into subselects */
@@ -705,6 +719,14 @@ IncrementVarSublevelsUp_walker(Node *node,
 			agg->agglevelsup += context->delta_sublevels_up;
 		/* fall through to recurse into argument */
 	}
+	if (IsA(node, Grouping))
+	{
+		Grouping	   *grp = (Grouping *) node;
+
+		if (grp->agglevelsup >= context->min_sublevels_up)
+			grp->agglevelsup += context->delta_sublevels_up;
+		/* fall through to recurse into argument */
+	}
 	if (IsA(node, PlaceHolderVar))
 	{
 		PlaceHolderVar *phv = (PlaceHolderVar *) node;
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 7237e5d..5344736 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -360,9 +360,11 @@ static void get_target_list(List *targetList, deparse_context *context,
 static void get_setop_query(Node *setOp, Query *query,
 				deparse_context *context,
 				TupleDesc resultDesc);
-static Node *get_rule_sortgroupclause(SortGroupClause *srt, List *tlist,
+static Node *get_rule_sortgroupclause(Index ref, List *tlist,
 						 bool force_colno,
 						 deparse_context *context);
+static void get_rule_groupingset(GroupingSet *gset, List *targetlist,
+								 bool omit_parens, deparse_context *context);
 static void get_rule_orderby(List *orderList, List *targetList,
 				 bool force_colno, deparse_context *context);
 static void get_rule_windowclause(Query *query, deparse_context *context);
@@ -4535,7 +4537,7 @@ get_basic_select_query(Query *query, deparse_context *context,
 				SortGroupClause *srt = (SortGroupClause *) lfirst(l);
 
 				appendStringInfoString(buf, sep);
-				get_rule_sortgroupclause(srt, query->targetList,
+				get_rule_sortgroupclause(srt->tleSortGroupRef, query->targetList,
 										 false, context);
 				sep = ", ";
 			}
@@ -4560,19 +4562,35 @@ get_basic_select_query(Query *query, deparse_context *context,
 	}
 
 	/* Add the GROUP BY clause if given */
-	if (query->groupClause != NULL)
+	if (query->groupClause != NULL || query->groupingSets != NULL)
 	{
 		appendContextKeyword(context, " GROUP BY ",
 							 -PRETTYINDENT_STD, PRETTYINDENT_STD, 1);
-		sep = "";
-		foreach(l, query->groupClause)
+
+		if (query->groupingSets == NIL)
 		{
-			SortGroupClause *grp = (SortGroupClause *) lfirst(l);
+			sep = "";
+			foreach(l, query->groupClause)
+			{
+				SortGroupClause *grp = (SortGroupClause *) lfirst(l);
 
-			appendStringInfoString(buf, sep);
-			get_rule_sortgroupclause(grp, query->targetList,
-									 false, context);
-			sep = ", ";
+				appendStringInfoString(buf, sep);
+				get_rule_sortgroupclause(grp->tleSortGroupRef, query->targetList,
+										 false, context);
+				sep = ", ";
+			}
+		}
+		else
+		{
+			sep = "";
+			foreach(l, query->groupingSets)
+			{
+				GroupingSet *grp = lfirst(l);
+
+				appendStringInfoString(buf, sep);
+				get_rule_groupingset(grp, query->targetList, true, context);
+				sep = ", ";
+			}
 		}
 	}
 
@@ -4640,7 +4658,7 @@ get_target_list(List *targetList, deparse_context *context,
 		 * different from a whole-row Var).  We need to call get_variable
 		 * directly so that we can tell it to do the right thing.
 		 */
-		if (tle->expr && IsA(tle->expr, Var))
+		if (tle->expr && (IsA(tle->expr, Var) || IsA(tle->expr, GroupedVar)))
 		{
 			attname = get_variable((Var *) tle->expr, 0, true, context);
 		}
@@ -4859,14 +4877,14 @@ get_setop_query(Node *setOp, Query *query, deparse_context *context,
  * Also returns the expression tree, so caller need not find it again.
  */
 static Node *
-get_rule_sortgroupclause(SortGroupClause *srt, List *tlist, bool force_colno,
+get_rule_sortgroupclause(Index ref, List *tlist, bool force_colno,
 						 deparse_context *context)
 {
 	StringInfo	buf = context->buf;
 	TargetEntry *tle;
 	Node	   *expr;
 
-	tle = get_sortgroupclause_tle(srt, tlist);
+	tle = get_sortgroupref_tle(ref, tlist);
 	expr = (Node *) tle->expr;
 
 	/*
@@ -4891,6 +4909,66 @@ get_rule_sortgroupclause(SortGroupClause *srt, List *tlist, bool force_colno,
 }
 
 /*
+ * Display a GroupingSet
+ */
+static void
+get_rule_groupingset(GroupingSet *gset, List *targetlist,
+					 bool omit_parens, deparse_context *context)
+{
+	ListCell   *l;
+	StringInfo	buf = context->buf;
+	bool		omit_child_parens = true;
+	char	   *sep = "";
+
+	switch (gset->kind)
+	{
+		case GROUPING_SET_EMPTY:
+			appendStringInfoString(buf, "()");
+			return;
+
+		case GROUPING_SET_SIMPLE:
+			{
+				if (!omit_parens || list_length(gset->content) != 1)
+					appendStringInfoString(buf, "(");
+
+				foreach(l, gset->content)
+				{
+					Index ref = lfirst_int(l);
+
+					appendStringInfoString(buf, sep);
+					get_rule_sortgroupclause(ref, targetlist,
+											 false, context);
+					sep = ", ";
+				}
+
+				if (!omit_parens || list_length(gset->content) != 1)
+					appendStringInfoString(buf, ")");
+			}
+			return;
+
+		case GROUPING_SET_ROLLUP:
+			appendStringInfoString(buf, "ROLLUP(");
+			break;
+		case GROUPING_SET_CUBE:
+			appendStringInfoString(buf, "CUBE(");
+			break;
+		case GROUPING_SET_SETS:
+			appendStringInfoString(buf, "GROUPING SETS (");
+			omit_child_parens = false;
+			break;
+	}
+
+	foreach(l, gset->content)
+	{
+		appendStringInfoString(buf, sep);
+		get_rule_groupingset(lfirst(l), targetlist, omit_child_parens, context);
+		sep = ", ";
+	}
+
+	appendStringInfoString(buf, ")");
+}
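The recursion in get_rule_groupingset above can be mirrored in a few lines: simple sets print their columns (parens omitted for a lone column unless inside GROUPING SETS), while ROLLUP/CUBE/SETS print their keyword and recurse. A Python sketch of the same shape (toy tuple representation, not the C code):

```python
# Deparse a grouping-set tree back to SQL text, mirroring the
# omit_parens / omit_child_parens rules in get_rule_groupingset().
def deparse(gset, omit_parens=True):
    kind, content = gset
    if kind == "EMPTY":
        return "()"
    if kind == "SIMPLE":
        inner = ", ".join(content)
        if omit_parens and len(content) == 1:
            return inner                      # bare column, no parens
        return "(" + inner + ")"
    if kind == "SETS":
        # children of GROUPING SETS always keep their own parens
        body = ", ".join(deparse(c, False) for c in content)
        return "GROUPING SETS (" + body + ")"
    # ROLLUP / CUBE: single-column children may drop their parens
    body = ", ".join(deparse(c, True) for c in content)
    return kind + "(" + body + ")"
```

So `ROLLUP(a, (b, c))` round-trips as written, and an empty set inside GROUPING SETS deparses as `()`.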
+
+/*
  * Display an ORDER BY list.
  */
 static void
@@ -4910,7 +4988,7 @@ get_rule_orderby(List *orderList, List *targetList,
 		TypeCacheEntry *typentry;
 
 		appendStringInfoString(buf, sep);
-		sortexpr = get_rule_sortgroupclause(srt, targetList,
+		sortexpr = get_rule_sortgroupclause(srt->tleSortGroupRef, targetList,
 											force_colno, context);
 		sortcoltype = exprType(sortexpr);
 		/* See whether operator is default < or > for datatype */
@@ -5010,7 +5088,7 @@ get_rule_windowspec(WindowClause *wc, List *targetList,
 			SortGroupClause *grp = (SortGroupClause *) lfirst(l);
 
 			appendStringInfoString(buf, sep);
-			get_rule_sortgroupclause(grp, targetList,
+			get_rule_sortgroupclause(grp->tleSortGroupRef, targetList,
 									 false, context);
 			sep = ", ";
 		}
@@ -5559,10 +5637,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		pop_child_plan(dpns, &save_dpns);
@@ -5584,10 +5662,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		pop_child_plan(dpns, &save_dpns);
@@ -5607,10 +5685,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		return NULL;
@@ -5650,10 +5728,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		pop_child_plan(dpns, &save_dpns);
@@ -6684,6 +6762,10 @@ get_rule_expr(Node *node, deparse_context *context,
 			(void) get_variable((Var *) node, 0, false, context);
 			break;
 
+		case T_GroupedVar:
+			(void) get_variable((Var *) node, 0, false, context);
+			break;
+
 		case T_Const:
 			get_const_expr((Const *) node, context, 0);
 			break;
@@ -7580,6 +7662,16 @@ get_rule_expr(Node *node, deparse_context *context,
 			}
 			break;
 
+		case T_Grouping:
+			{
+				Grouping *gexpr = (Grouping *) node;
+
+				appendStringInfoString(buf, "GROUPING(");
+				get_rule_expr((Node *) gexpr->args, context, true);
+				appendStringInfoChar(buf, ')');
+			}
+			break;
+
 		case T_List:
 			{
 				char	   *sep;
diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index e932ccf..c769e83 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -3158,6 +3158,8 @@ add_unique_group_var(PlannerInfo *root, List *varinfos,
  *	groupExprs - list of expressions being grouped by
  *	input_rows - number of rows estimated to arrive at the group/unique
  *		filter step
+ *	pgset - NULL, or a List** pointing to a grouping set to filter the
+ *		groupExprs against
  *
  * Given the lack of any cross-correlation statistics in the system, it's
  * impossible to do anything really trustworthy with GROUP BY conditions
@@ -3205,11 +3207,13 @@ add_unique_group_var(PlannerInfo *root, List *varinfos,
  * but we don't have the info to do better).
  */
 double
-estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
+estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows,
+					List **pgset)
 {
 	List	   *varinfos = NIL;
 	double		numdistinct;
 	ListCell   *l;
+	int			i;
 
 	/*
 	 * We don't ever want to return an estimate of zero groups, as that tends
@@ -3224,7 +3228,7 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 	 * for normal cases with GROUP BY or DISTINCT, but it is possible for
 	 * corner cases with set operations.)
 	 */
-	if (groupExprs == NIL)
+	if (groupExprs == NIL || (pgset && list_length(*pgset) < 1))
 		return 1.0;
 
 	/*
@@ -3236,6 +3240,7 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 	 */
 	numdistinct = 1.0;
 
+	i = 0;
 	foreach(l, groupExprs)
 	{
 		Node	   *groupexpr = (Node *) lfirst(l);
@@ -3243,6 +3248,10 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 		List	   *varshere;
 		ListCell   *l2;
 
+		/* is expression in this grouping set? */
+		if (pgset && !list_member_int(*pgset, i++))
+			continue;
+
 		/* Short-circuit for expressions returning boolean */
 		if (exprType(groupexpr) == BOOLOID)
 		{
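The estimate_num_groups change is small: when a grouping set is supplied, only the expressions whose ordinal position appears in the set contribute to the estimate, and an empty set (like an empty expression list) collapses everything into one group. A Python sketch of just that filtering rule (hypothetical names, illustrating the semantics rather than the planner code):

```python
# Mirror the new pgset filter in estimate_num_groups(): keep only the
# expressions whose position in groupExprs is listed in the grouping set.
def grouping_exprs_for_set(group_exprs, pgset=None):
    if pgset is None:
        return list(group_exprs)
    return [e for i, e in enumerate(group_exprs) if i in pgset]

# An empty expression list, or an empty grouping set, means the whole
# input collapses to a single group (the estimate returns 1.0).
def degenerate(group_exprs, pgset=None):
    return not group_exprs or (pgset is not None and len(pgset) < 1)
```

For example, estimating the set `(a, c)` out of `GROUP BY a, b, c` considers only those two expressions; the grand-total set `()` short-circuits to one group.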
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index b271f21..ee1fe74 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -130,6 +130,8 @@ typedef struct ExprContext
 	Datum	   *ecxt_aggvalues; /* precomputed values for aggs/windowfuncs */
 	bool	   *ecxt_aggnulls;	/* null flags for aggs/windowfuncs */
 
+	Bitmapset  *grouped_cols;   /* which columns exist in current grouping set */
+
 	/* Value to substitute for CaseTestExpr nodes in expression */
 	Datum		caseValue_datum;
 	bool		caseValue_isNull;
@@ -911,6 +913,16 @@ typedef struct MinMaxExprState
 } MinMaxExprState;
 
 /* ----------------
+ *		GroupingState node
+ * ----------------
+ */
+typedef struct GroupingState
+{
+	ExprState	xprstate;
+	List        *clauses;
+} GroupingState;
+
+/* ----------------
  *		XmlExprState node
  * ----------------
  */
@@ -1701,19 +1713,26 @@ typedef struct GroupState
 /* these structs are private in nodeAgg.c: */
 typedef struct AggStatePerAggData *AggStatePerAgg;
 typedef struct AggStatePerGroupData *AggStatePerGroup;
+typedef struct AggStatePerGroupingSetData *AggStatePerGroupingSet;
 
 typedef struct AggState
 {
 	ScanState	ss;				/* its first field is NodeTag */
 	List	   *aggs;			/* all Aggref nodes in targetlist & quals */
 	int			numaggs;		/* length of list (could be zero!) */
+	int			numsets;		/* number of grouping sets (or 0) */
 	FmgrInfo   *eqfunctions;	/* per-grouping-field equality fns */
 	FmgrInfo   *hashfunctions;	/* per-grouping-field hash fns */
 	AggStatePerAgg peragg;		/* per-Aggref information */
-	MemoryContext aggcontext;	/* memory context for long-lived data */
+	ExprContext **aggcontext;	/* econtexts for long-lived data */
 	ExprContext *tmpcontext;	/* econtext for input expressions */
 	AggStatePerAgg curperagg;	/* identifies currently active aggregate */
+	bool		input_done;		/* indicates end of input */
 	bool		agg_done;		/* indicates completion of Agg scan */
+	int			projected_set;	/* the last grouping set projected */
+	int			current_set;	/* the grouping set now being evaluated */
+	Bitmapset **grouped_cols;	/* column groupings for rollup */
+	int		   *gset_lengths;	/* lengths of grouping sets */
 	/* these fields are used in AGG_PLAIN and AGG_SORTED modes: */
 	AggStatePerGroup pergroup;	/* per-Aggref-per-group working state */
 	HeapTuple	grp_firstTuple; /* copy of first tuple of current group */
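The per-grouping-set state added to AggState above is what lets a whole rollup be computed in one pass over sorted input, as described at the top of this mail: level k of the rollup groups by the first k columns, and each level keeps its own running transition state, emitting and resetting when its key prefix changes. A toy Python model of that single-pass scheme, using count(*) as the only aggregate (illustrative only, not the nodeAgg.c code):

```python
# Single-pass ROLLUP over rows pre-sorted by the grouping columns.
# state[k] = [current key prefix, running count] for level k; level 0
# is the grand total and never sees a group boundary until end of input.
def rollup_count(rows, ncols):
    results = []
    state = [[(), 0]] + [[None, 0] for _ in range(ncols)]
    for row in rows:
        key = tuple(row[:ncols])
        for k in range(ncols, 0, -1):
            if state[k][0] != key[:k]:
                if state[k][0] is not None:
                    results.append((state[k][0], state[k][1]))
                state[k] = [key[:k], 1]   # group boundary: emit, reset
            else:
                state[k][1] += 1
        state[0][1] += 1                  # grand total advances every row
    for k in range(ncols, -1, -1):        # flush remaining groups at EOF
        if state[k][0] is not None:
            results.append((state[k][0], state[k][1]))
    return results
```

On input sorted by (a,b), each (a,b) group is emitted as its last row passes, each (a) subtotal when a changes, and the grand total at end of input — the interleaved ordering visible in the regression output further down.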
diff --git a/src/include/nodes/makefuncs.h b/src/include/nodes/makefuncs.h
index e108b85..bd3b2a5 100644
--- a/src/include/nodes/makefuncs.h
+++ b/src/include/nodes/makefuncs.h
@@ -81,4 +81,6 @@ extern DefElem *makeDefElem(char *name, Node *arg);
 extern DefElem *makeDefElemExtended(char *nameSpace, char *name, Node *arg,
 					DefElemAction defaction);
 
+extern GroupingSet *makeGroupingSet(GroupingSetKind kind, List *content, int location);
+
 #endif   /* MAKEFUNC_H */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index a031b88..7998c95 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -115,6 +115,7 @@ typedef enum NodeTag
 	T_SortState,
 	T_GroupState,
 	T_AggState,
+	T_GroupingState,
 	T_WindowAggState,
 	T_UniqueState,
 	T_HashState,
@@ -171,6 +172,9 @@ typedef enum NodeTag
 	T_JoinExpr,
 	T_FromExpr,
 	T_IntoClause,
+	T_GroupedVar,
+	T_Grouping,
+	T_GroupingSet,
 
 	/*
 	 * TAGS FOR EXPRESSION STATE NODES (execnodes.h)
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index d2c0b29..26ed5f4 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -134,6 +134,8 @@ typedef struct Query
 
 	List	   *groupClause;	/* a list of SortGroupClause's */
 
+	List	   *groupingSets;	/* a list of grouping sets if present */
+
 	Node	   *havingQual;		/* qualifications applied to groups */
 
 	List	   *windowClause;	/* a list of WindowClause's */
diff --git a/src/include/nodes/pg_list.h b/src/include/nodes/pg_list.h
index c545115..45eacda 100644
--- a/src/include/nodes/pg_list.h
+++ b/src/include/nodes/pg_list.h
@@ -229,8 +229,9 @@ extern List *list_union_int(const List *list1, const List *list2);
 extern List *list_union_oid(const List *list1, const List *list2);
 
 extern List *list_intersection(const List *list1, const List *list2);
+extern List *list_intersection_int(const List *list1, const List *list2);
 
-/* currently, there's no need for list_intersection_int etc */
+/* currently, there's no need for list_intersection_ptr etc */
 
 extern List *list_difference(const List *list1, const List *list2);
 extern List *list_difference_ptr(const List *list1, const List *list2);
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index 3b9c683..077ae9f 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -631,6 +631,7 @@ typedef struct Agg
 	AttrNumber *grpColIdx;		/* their indexes in the target list */
 	Oid		   *grpOperators;	/* equality operators to compare with */
 	long		numGroups;		/* estimated number of groups in input */
+	List	   *groupingSets;	/* grouping sets to use */
 } Agg;
 
 /* ----------------
diff --git a/src/include/nodes/primnodes.h b/src/include/nodes/primnodes.h
index 6d9f3d9..4c03e40 100644
--- a/src/include/nodes/primnodes.h
+++ b/src/include/nodes/primnodes.h
@@ -159,6 +159,28 @@ typedef struct Var
 	int			location;		/* token location, or -1 if unknown */
 } Var;
 
+/*
+ * GroupedVar - expression node representing a grouping set variable.
+ * This is structurally identical to a Var node; it is a logical
+ * representation of a grouping set column and is also used when
+ * projecting rows during execution of a query with grouping sets.
+ */
+typedef Var GroupedVar;
+
+/*
+ * Grouping
+ */
+typedef struct Grouping
+{
+	Expr		xpr;
+	List	   *args;			/* arguments, not evaluated but kept for
+								 * benefit of EXPLAIN etc. */
+	List	   *refs;			/* ressortgrouprefs of arguments */
+	List	   *cols;			/* actual column positions set by planner */
+	int			location;		/* token location */
+	Index		agglevelsup;	/* same as Aggref.agglevelsup */
+} Grouping;
+
 /*
  * Const
  */
@@ -1147,6 +1169,32 @@ typedef struct CurrentOfExpr
 	int			cursor_param;	/* refcursor parameter number, or 0 */
 } CurrentOfExpr;
 
+/*
+ * Node representing substructure in GROUPING SETS
+ *
+ * This is not actually executable, but it's used in the raw parsetree
+ * representation of GROUP BY, and in the groupingSets field of Query, to
+ * preserve the original structure of rollup/cube clauses for readability
+ * rather than reducing everything to grouping sets.
+ */
+
+typedef enum
+{
+	GROUPING_SET_EMPTY,
+	GROUPING_SET_SIMPLE,
+	GROUPING_SET_ROLLUP,
+	GROUPING_SET_CUBE,
+	GROUPING_SET_SETS
+} GroupingSetKind;
+
+typedef struct GroupingSet
+{
+	Expr		xpr;
+	GroupingSetKind kind;
+	List	   *content;
+	int			location;
+} GroupingSet;
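Since the GroupingSet node deliberately preserves rollup/cube substructure rather than pre-expanding it, the expansion itself (what expand_grouping_sets produces for the planner/executor) follows the standard's rewrite rules: ROLLUP(c1..cn) yields the n+1 prefixes from longest to empty, and CUBE(c1..cn) yields all 2^n subsets. A short Python sketch of those two rules:

```python
from itertools import combinations

# ROLLUP(a, b, ...) -> each prefix of the column list, longest first,
# ending with the empty (grand total) set.
def expand_rollup(cols):
    return [tuple(cols[:n]) for n in range(len(cols), -1, -1)]

# CUBE(a, b, ...) -> every subset of the column list (the power set),
# 2^n grouping sets in all.
def expand_cube(cols):
    out = []
    for n in range(len(cols), -1, -1):
        out.extend(combinations(cols, n))
    return out
```

So `ROLLUP(a, b)` expands to the three sets (a,b), (a), (), and `CUBE(a, b)` to those plus (b).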
+
 /*--------------------
  * TargetEntry -
  *	   a target entry (used in query target lists)
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index dacbe9c..33b3beb 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -256,6 +256,11 @@ typedef struct PlannerInfo
 
 	/* optional private data for join_search_hook, e.g., GEQO */
 	void	   *join_search_private;
+
+	/* for GroupedVar fixup in setrefs */
+	AttrNumber *groupColIdx;
+	/* for Grouping fixup in setrefs */
+	AttrNumber *grouping_map;
 } PlannerInfo;
 
 
diff --git a/src/include/optimizer/planmain.h b/src/include/optimizer/planmain.h
index 4504250..64f3aa3 100644
--- a/src/include/optimizer/planmain.h
+++ b/src/include/optimizer/planmain.h
@@ -58,6 +58,7 @@ extern Sort *make_sort_from_groupcols(PlannerInfo *root, List *groupcls,
 extern Agg *make_agg(PlannerInfo *root, List *tlist, List *qual,
 		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
 		 int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators,
+		 List *groupingSets,
 		 long numGroups,
 		 Plan *lefttree);
 extern WindowAgg *make_windowagg(PlannerInfo *root, List *tlist,
diff --git a/src/include/optimizer/tlist.h b/src/include/optimizer/tlist.h
index 1ebb635..c8b1c93 100644
--- a/src/include/optimizer/tlist.h
+++ b/src/include/optimizer/tlist.h
@@ -43,6 +43,9 @@ extern Node *get_sortgroupclause_expr(SortGroupClause *sgClause,
 extern List *get_sortgrouplist_exprs(List *sgClauses,
 						List *targetList);
 
+extern SortGroupClause *get_sortgroupref_clause(Index sortref,
+					 List *clauses);
+
 extern Oid *extract_grouping_ops(List *groupClause);
 extern AttrNumber *extract_grouping_cols(List *groupClause, List *tlist);
 extern bool grouping_is_sortable(List *groupClause);
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index 17888ad..e38b6bc 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -98,6 +98,7 @@ PG_KEYWORD("cost", COST, UNRESERVED_KEYWORD)
 PG_KEYWORD("create", CREATE, RESERVED_KEYWORD)
 PG_KEYWORD("cross", CROSS, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("csv", CSV, UNRESERVED_KEYWORD)
+PG_KEYWORD("cube", CUBE, COL_NAME_KEYWORD)
 PG_KEYWORD("current", CURRENT_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("current_catalog", CURRENT_CATALOG, RESERVED_KEYWORD)
 PG_KEYWORD("current_date", CURRENT_DATE, RESERVED_KEYWORD)
@@ -173,6 +174,7 @@ PG_KEYWORD("grant", GRANT, RESERVED_KEYWORD)
 PG_KEYWORD("granted", GRANTED, UNRESERVED_KEYWORD)
 PG_KEYWORD("greatest", GREATEST, COL_NAME_KEYWORD)
 PG_KEYWORD("group", GROUP_P, RESERVED_KEYWORD)
+PG_KEYWORD("grouping", GROUPING, COL_NAME_KEYWORD)
 PG_KEYWORD("handler", HANDLER, UNRESERVED_KEYWORD)
 PG_KEYWORD("having", HAVING, RESERVED_KEYWORD)
 PG_KEYWORD("header", HEADER_P, UNRESERVED_KEYWORD)
@@ -322,6 +324,7 @@ PG_KEYWORD("revoke", REVOKE, UNRESERVED_KEYWORD)
 PG_KEYWORD("right", RIGHT, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("role", ROLE, UNRESERVED_KEYWORD)
 PG_KEYWORD("rollback", ROLLBACK, UNRESERVED_KEYWORD)
+PG_KEYWORD("rollup", ROLLUP, COL_NAME_KEYWORD)
 PG_KEYWORD("row", ROW, COL_NAME_KEYWORD)
 PG_KEYWORD("rows", ROWS, UNRESERVED_KEYWORD)
 PG_KEYWORD("rule", RULE, UNRESERVED_KEYWORD)
@@ -340,6 +343,7 @@ PG_KEYWORD("session", SESSION, UNRESERVED_KEYWORD)
 PG_KEYWORD("session_user", SESSION_USER, RESERVED_KEYWORD)
 PG_KEYWORD("set", SET, UNRESERVED_KEYWORD)
 PG_KEYWORD("setof", SETOF, COL_NAME_KEYWORD)
+PG_KEYWORD("sets", SETS, UNRESERVED_KEYWORD)
 PG_KEYWORD("share", SHARE, UNRESERVED_KEYWORD)
 PG_KEYWORD("show", SHOW, UNRESERVED_KEYWORD)
 PG_KEYWORD("similar", SIMILAR, TYPE_FUNC_NAME_KEYWORD)
diff --git a/src/include/parser/parse_agg.h b/src/include/parser/parse_agg.h
index 3f55ec7..f0607fb 100644
--- a/src/include/parser/parse_agg.h
+++ b/src/include/parser/parse_agg.h
@@ -18,11 +18,16 @@
 extern void transformAggregateCall(ParseState *pstate, Aggref *agg,
 					   List *args, List *aggorder,
 					   bool agg_distinct);
+
+extern Node *transformGroupingExpr(ParseState *pstate, Grouping *g);
+
 extern void transformWindowFuncCall(ParseState *pstate, WindowFunc *wfunc,
 						WindowDef *windef);
 
 extern void parseCheckAggregates(ParseState *pstate, Query *qry);
 
+extern List *expand_grouping_sets(List *groupingSets, int limit);
+
 extern int	get_aggregate_argtypes(Aggref *aggref, Oid *inputTypes);
 
 extern Oid resolve_aggregate_transtype(Oid aggfuncid,
diff --git a/src/include/parser/parse_clause.h b/src/include/parser/parse_clause.h
index e9e7cdc..58d88f0 100644
--- a/src/include/parser/parse_clause.h
+++ b/src/include/parser/parse_clause.h
@@ -27,6 +27,7 @@ extern Node *transformWhereClause(ParseState *pstate, Node *clause,
 extern Node *transformLimitClause(ParseState *pstate, Node *clause,
 					 ParseExprKind exprKind, const char *constructName);
 extern List *transformGroupClause(ParseState *pstate, List *grouplist,
+								  List **groupingSets,
 					 List **targetlist, List *sortClause,
 					 ParseExprKind exprKind, bool useSQL99);
 extern List *transformSortClause(ParseState *pstate, List *orderlist,
diff --git a/src/include/utils/selfuncs.h b/src/include/utils/selfuncs.h
index 0f662ec..9d9c9b3 100644
--- a/src/include/utils/selfuncs.h
+++ b/src/include/utils/selfuncs.h
@@ -185,7 +185,7 @@ extern void mergejoinscansel(PlannerInfo *root, Node *clause,
 				 Selectivity *rightstart, Selectivity *rightend);
 
 extern double estimate_num_groups(PlannerInfo *root, List *groupExprs,
-					double input_rows);
+								  double input_rows, List **pgset);
 
 extern Selectivity estimate_hash_bucketsize(PlannerInfo *root, Node *hashkey,
 						 double nbuckets);
diff --git a/src/test/regress/expected/groupingsets.out b/src/test/regress/expected/groupingsets.out
new file mode 100644
index 0000000..2d121c7
--- /dev/null
+++ b/src/test/regress/expected/groupingsets.out
@@ -0,0 +1,361 @@
+--
+-- grouping sets
+--
+-- test data sources
+create temp view gstest1(a,b,v)
+  as values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),
+            (2,3,15),
+            (3,3,16),(3,4,17),
+            (4,1,18),(4,1,19);
+create temp table gstest2 (a integer, b integer, c integer, d integer,
+                           e integer, f integer, g integer, h integer);
+copy gstest2 from stdin;
+create temp table gstest_empty (a integer, b integer, v integer);
+create function gstest_data(v integer, out a integer, out b integer)
+  returns setof record
+  as $f$
+    begin
+      return query select v, i from generate_series(1,3) i;
+    end;
+  $f$ language plpgsql;
+-- basic functionality
+-- simple rollup with multiple plain aggregates, with and without ordering
+-- (and with ordering differing from grouping)
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b);
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 | 1 |        0 |  21 |     2 |  11
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 1 |   |        1 |  60 |     5 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 2 |   |        1 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+ 3 |   |        1 |  33 |     2 |  17
+ 4 | 1 |        0 |  37 |     2 |  19
+ 4 |   |        1 |  37 |     2 |  19
+   |   |        3 | 145 |    10 |  19
+(12 rows)
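The grouping column in the expected output above is a bit mask, as the spec defines GROUPING(): the leftmost argument maps to the most significant bit, and a bit is 1 exactly when that column is not part of the grouping set that produced the row (i.e. it was rolled up to NULL). A small Python model of that computation:

```python
# GROUPING(args...) as a bit mask over the current grouping set:
# bit = 1 when the column is absent from the set (rolled up).
def grouping_value(args, grouped_cols):
    val = 0
    for col in args:                      # leftmost arg = high bit
        val = (val << 1) | (0 if col in grouped_cols else 1)
    return val
```

This reproduces the three values seen above for `grouping(a,b)` under `ROLLUP(a,b)`: 0 for the (a,b) rows, 1 for the per-a subtotals, and 3 for the grand total.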
+
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by a,b;
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 | 1 |        0 |  21 |     2 |  11
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 1 |   |        1 |  60 |     5 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 2 |   |        1 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+ 3 |   |        1 |  33 |     2 |  17
+ 4 | 1 |        0 |  37 |     2 |  19
+ 4 |   |        1 |  37 |     2 |  19
+   |   |        3 | 145 |    10 |  19
+(12 rows)
+
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by b desc, a;
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 |   |        1 |  60 |     5 |  14
+ 2 |   |        1 |  15 |     1 |  15
+ 3 |   |        1 |  33 |     2 |  17
+ 4 |   |        1 |  37 |     2 |  19
+   |   |        3 | 145 |    10 |  19
+ 3 | 4 |        0 |  17 |     1 |  17
+ 1 | 3 |        0 |  14 |     1 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 1 |        0 |  21 |     2 |  11
+ 4 | 1 |        0 |  37 |     2 |  19
+(12 rows)
+
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by coalesce(a,0)+coalesce(b,0);
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+   |   |        3 | 145 |    10 |  19
+ 1 |   |        1 |  60 |     5 |  14
+ 1 | 1 |        0 |  21 |     2 |  11
+ 2 |   |        1 |  15 |     1 |  15
+ 3 |   |        1 |  33 |     2 |  17
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 4 |   |        1 |  37 |     2 |  19
+ 4 | 1 |        0 |  37 |     2 |  19
+ 2 | 3 |        0 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+(12 rows)
+
+-- various types of ordered aggs
+select a, b, grouping(a,b),
+       array_agg(v order by v),
+       string_agg(v::text, ':' order by v desc),
+       percentile_disc(0.5) within group (order by v),
+       rank(1,2,12) within group (order by a,b,v)
+  from gstest1 group by rollup (a,b) order by a,b;
+ a | b | grouping |            array_agg            |          string_agg           | percentile_disc | rank 
+---+---+----------+---------------------------------+-------------------------------+-----------------+------
+ 1 | 1 |        0 | {10,11}                         | 11:10                         |              10 |    3
+ 1 | 2 |        0 | {12,13}                         | 13:12                         |              12 |    1
+ 1 | 3 |        0 | {14}                            | 14                            |              14 |    1
+ 1 |   |        1 | {10,11,12,13,14}                | 14:13:12:11:10                |              12 |    3
+ 2 | 3 |        0 | {15}                            | 15                            |              15 |    1
+ 2 |   |        1 | {15}                            | 15                            |              15 |    1
+ 3 | 3 |        0 | {16}                            | 16                            |              16 |    1
+ 3 | 4 |        0 | {17}                            | 17                            |              17 |    1
+ 3 |   |        1 | {16,17}                         | 17:16                         |              16 |    1
+ 4 | 1 |        0 | {18,19}                         | 19:18                         |              18 |    1
+ 4 |   |        1 | {18,19}                         | 19:18                         |              18 |    1
+   |   |        3 | {10,11,12,13,14,15,16,17,18,19} | 19:18:17:16:15:14:13:12:11:10 |              14 |    3
+(12 rows)
+
+-- nesting with window functions
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by rollup (a,b) order by rsum, a, b;
+ a | b | sum | rsum 
+---+---+-----+------
+ 1 | 1 |   8 |    8
+ 1 | 2 |   2 |   10
+ 1 |   |  10 |   20
+ 2 | 2 |   2 |   22
+ 2 |   |   2 |   24
+   |   |  12 |   36
+(6 rows)
+
+-- empty input: first is 0 rows, second 1, third 3 etc.
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),a);
+ a | b | sum | count 
+---+---+-----+-------
+(0 rows)
+
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),());
+ a | b | sum | count 
+---+---+-----+-------
+   |   |     |     0
+(1 row)
+
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),(),(),());
+ a | b | sum | count 
+---+---+-----+-------
+   |   |     |     0
+   |   |     |     0
+   |   |     |     0
+(3 rows)
+
+select sum(v), count(*) from gstest_empty group by grouping sets ((),(),());
+ sum | count 
+-----+-------
+     |     0
+     |     0
+     |     0
+(3 rows)
+
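(The empty-input cases above follow a simple rule, sketched here in Python for illustration only: with zero input rows, only the zero-column grouping sets produce output, one row each.)

```python
def rows_for_empty_input(grouping_sets):
    # With no input rows, a grouping set that groups by at least one
    # column yields no output rows, while each empty set () yields
    # exactly one row of NULL aggregates with count(*) = 0.
    return sum(1 for s in grouping_sets if len(s) == 0)

print(rows_for_empty_input([('a', 'b'), ('a',)]))      # 0
print(rows_for_empty_input([('a', 'b'), ()]))          # 1
print(rows_for_empty_input([('a', 'b'), (), (), ()]))  # 3
print(rows_for_empty_input([(), (), ()]))              # 3
</imports>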
+-- empty input with joins tests some important code paths
+select t1.a, t2.b, sum(t1.v), count(*) from gstest_empty t1, gstest_empty t2
+ group by grouping sets ((t1.a,t2.b),());
+ a | b | sum | count 
+---+---+-----+-------
+   |   |     |     0
+(1 row)
+
+-- simple joins, var resolution, GROUPING on join vars
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1, gstest2 t2
+ group by grouping sets ((t1.a, t2.b), ());
+ a | b | grouping | sum  | max 
+---+---+----------+------+-----
+ 1 | 1 |        0 |  420 |   1
+ 1 | 2 |        0 |  120 |   2
+ 2 | 1 |        0 |  105 |   1
+ 2 | 2 |        0 |   30 |   2
+ 3 | 1 |        0 |  231 |   1
+ 3 | 2 |        0 |   66 |   2
+ 4 | 1 |        0 |  259 |   1
+ 4 | 2 |        0 |   74 |   2
+   |   |        3 | 1305 |   2
+(9 rows)
+
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1 join gstest2 t2 on (t1.a=t2.a)
+ group by grouping sets ((t1.a, t2.b), ());
+ a | b | grouping | sum | max 
+---+---+----------+-----+-----
+ 1 | 1 |        0 | 420 |   1
+ 1 | 2 |        0 |  60 |   1
+ 2 | 2 |        0 |  15 |   2
+   |   |        3 | 495 |   2
+(4 rows)
+
+select a, b, grouping(a, b), sum(t1.v), max(t2.c)
+  from gstest1 t1 join gstest2 t2 using (a,b)
+ group by grouping sets ((a, b), ());
+ a | b | grouping | sum | max 
+---+---+----------+-----+-----
+ 1 | 1 |        0 | 147 |   2
+ 1 | 2 |        0 |  25 |   2
+   |   |        3 | 172 |   2
+(3 rows)
+
+-- simple rescan tests
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by rollup (a,b);
+ a | b | sum 
+---+---+-----
+ 1 | 1 |   1
+ 1 | 2 |   1
+ 1 | 3 |   1
+ 1 |   |   3
+ 2 | 1 |   2
+ 2 | 2 |   2
+ 2 | 3 |   2
+ 2 |   |   6
+   |   |   9
+(9 rows)
+
+select *
+  from (values (1),(2)) v(x),
+       lateral (select a, b, sum(v.x) from gstest_data(v.x) group by rollup (a,b)) s;
+ERROR:  aggregate functions are not allowed in FROM clause of their own query level
+LINE 3:        lateral (select a, b, sum(v.x) from gstest_data(v.x) ...
+                                     ^
+-- min max optimisation should still work with GROUP BY ()
+explain (costs off)
+  select min(unique1) from tenk1 GROUP BY ();
+                         QUERY PLAN                         
+------------------------------------------------------------
+ Result
+   InitPlan 1 (returns $0)
+     ->  Limit
+           ->  Index Only Scan using tenk1_unique1 on tenk1
+                 Index Cond: (unique1 IS NOT NULL)
+(5 rows)
+
+-- Views with GROUPING SET queries
+CREATE VIEW gstest_view AS select a, b, grouping(a,b), sum(c), count(*), max(c)
+  from gstest2 group by rollup ((a,b,c),(c,d));
+NOTICE:  view "gstest_view" will be a temporary view
+select pg_get_viewdef('gstest_view'::regclass, true);
+                                pg_get_viewdef                                 
+-------------------------------------------------------------------------------
+  SELECT gstest2.a,                                                           +
+     gstest2.b,                                                               +
+     GROUPING(gstest2.a, gstest2.b) AS "grouping",                            +
+     sum(gstest2.c) AS sum,                                                   +
+     count(*) AS count,                                                       +
+     max(gstest2.c) AS max                                                    +
+    FROM gstest2                                                              +
+   GROUP BY ROLLUP((gstest2.a, gstest2.b, gstest2.c), (gstest2.c, gstest2.d));
+(1 row)
+
+-- Nested queries with 3 or more levels of nesting
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+ grouping 
+----------
+        0
+        0
+        0
+(3 rows)
+
+select(select (select grouping(e,f) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+ grouping 
+----------
+        0
+        1
+        3
+(3 rows)
+
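(The 0/1/3 values above illustrate how GROUPING() builds its result: one bit per argument, leftmost argument most significant, with a bit set when that column is aggregated over. A Python sketch, illustrative only:)

```python
def grouping(grouped_cols, args):
    # GROUPING(x, y, ...) returns an integer with one bit per argument,
    # leftmost argument in the most significant position; a bit is 1
    # when that column is absent from the current grouping set.
    g = 0
    for col in args:
        g = (g << 1) | (0 if col in grouped_cols else 1)
    return g

# ROLLUP(e,f) produces the grouping sets (e,f), (e), ():
print([grouping(s, ('e', 'f')) for s in [{'e', 'f'}, {'e'}, set()]])
# → [0, 1, 3]
```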
+select(select (select grouping(c) from (values (1)) v2(c) GROUP BY c) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+ grouping 
+----------
+        0
+        0
+        0
+(3 rows)
+
+-- Combinations of operations
+select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
+ a | b 
+---+---
+ 1 | 2
+ 2 | 3
+(2 rows)
+
+-- Agg level check. This query should error out.
+select (select grouping(a,b) from gstest2) from gstest2 group by a,b;
+ERROR:  Arguments to GROUPING must be grouping expressions of the associated query level
+LINE 1: select (select grouping(a,b) from gstest2) from gstest2 grou...
+                                ^
+-- Nested queries
+select a, b, sum(c), count(*) from gstest2 group by grouping sets (rollup(a,b),a);
+ a | b | sum | count 
+---+---+-----+-------
+ 1 | 1 |   8 |     7
+ 1 | 2 |   2 |     1
+ 1 |   |  10 |     8
+ 1 |   |  10 |     8
+ 2 | 2 |   2 |     1
+ 2 |   |   2 |     1
+ 2 |   |   2 |     1
+   |   |  12 |     9
+(8 rows)
+
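(The duplicated a-subtotal rows above are expected: expanding grouping sets (rollup(a,b),a) yields the (a) set twice, and the spec requires duplicates to be preserved. A sketch of the expansion, Python, not part of the patch:)

```python
def expand_rollup(cols):
    # rollup(a,b) expands to the grouping sets (a,b), (a), ().
    return [tuple(cols[:i]) for i in range(len(cols), -1, -1)]

sets = expand_rollup(('a', 'b')) + [('a',)]
print(sets)  # [('a', 'b'), ('a',), (), ('a',)]
# ('a',) occurs twice, hence the duplicated subtotal rows above.
```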
+-- HAVING queries
+select ten, sum(distinct four) from onek a
+group by grouping sets((ten,four),(ten))
+having exists (select 1 from onek b where sum(distinct a.four) = b.four);
+ ten | sum 
+-----+-----
+   0 |   0
+   0 |   2
+   0 |   2
+   1 |   1
+   1 |   3
+   2 |   0
+   2 |   2
+   2 |   2
+   3 |   1
+   3 |   3
+   4 |   0
+   4 |   2
+   4 |   2
+   5 |   1
+   5 |   3
+   6 |   0
+   6 |   2
+   6 |   2
+   7 |   1
+   7 |   3
+   8 |   0
+   8 |   2
+   8 |   2
+   9 |   1
+   9 |   3
+(25 rows)
+
+-- FILTER queries
+select ten, sum(distinct four) filter (where four::text ~ '123') from onek a
+group by rollup(ten);
+ ten | sum 
+-----+-----
+   0 |    
+   1 |    
+   2 |    
+   3 |    
+   4 |    
+   5 |    
+   6 |    
+   7 |    
+   8 |    
+   9 |    
+     |    
+(11 rows)
+
+-- end
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index c0416f4..b15119e 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -83,7 +83,7 @@ test: select_into select_distinct select_distinct_on select_implicit select_havi
 # ----------
 # Another group of parallel tests
 # ----------
-test: privileges security_label collate matview lock replica_identity
+test: privileges security_label collate matview lock replica_identity groupingsets
 
 # ----------
 # Another group of parallel tests
diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule
index 16a1905..5e64468 100644
--- a/src/test/regress/serial_schedule
+++ b/src/test/regress/serial_schedule
@@ -84,6 +84,7 @@ test: union
 test: case
 test: join
 test: aggregates
+test: groupingsets
 test: transactions
 ignore: random
 test: random
diff --git a/src/test/regress/sql/groupingsets.sql b/src/test/regress/sql/groupingsets.sql
new file mode 100644
index 0000000..bc571ff
--- /dev/null
+++ b/src/test/regress/sql/groupingsets.sql
@@ -0,0 +1,128 @@
+--
+-- grouping sets
+--
+
+-- test data sources
+
+create temp view gstest1(a,b,v)
+  as values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),
+            (2,3,15),
+            (3,3,16),(3,4,17),
+            (4,1,18),(4,1,19);
+
+create temp table gstest2 (a integer, b integer, c integer, d integer,
+                           e integer, f integer, g integer, h integer);
+copy gstest2 from stdin;
+1	1	1	1	1	1	1	1
+1	1	1	1	1	1	1	2
+1	1	1	1	1	1	2	2
+1	1	1	1	1	2	2	2
+1	1	1	1	2	2	2	2
+1	1	1	2	2	2	2	2
+1	1	2	2	2	2	2	2
+1	2	2	2	2	2	2	2
+2	2	2	2	2	2	2	2
+\.
+
+create temp table gstest_empty (a integer, b integer, v integer);
+
+create function gstest_data(v integer, out a integer, out b integer)
+  returns setof record
+  as $f$
+    begin
+      return query select v, i from generate_series(1,3) i;
+    end;
+  $f$ language plpgsql;
+
+-- basic functionality
+
+-- simple rollup with multiple plain aggregates, with and without ordering
+-- (and with ordering differing from grouping)
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b);
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by a,b;
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by b desc, a;
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by coalesce(a,0)+coalesce(b,0);
+
+-- various types of ordered aggs
+select a, b, grouping(a,b),
+       array_agg(v order by v),
+       string_agg(v::text, ':' order by v desc),
+       percentile_disc(0.5) within group (order by v),
+       rank(1,2,12) within group (order by a,b,v)
+  from gstest1 group by rollup (a,b) order by a,b;
+
+-- nesting with window functions
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by rollup (a,b) order by rsum, a, b;
+
+-- empty input: first is 0 rows, second 1, third 3 etc.
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),a);
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),());
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),(),(),());
+select sum(v), count(*) from gstest_empty group by grouping sets ((),(),());
+
+-- empty input with joins tests some important code paths
+select t1.a, t2.b, sum(t1.v), count(*) from gstest_empty t1, gstest_empty t2
+ group by grouping sets ((t1.a,t2.b),());
+
+-- simple joins, var resolution, GROUPING on join vars
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1, gstest2 t2
+ group by grouping sets ((t1.a, t2.b), ());
+
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1 join gstest2 t2 on (t1.a=t2.a)
+ group by grouping sets ((t1.a, t2.b), ());
+
+select a, b, grouping(a, b), sum(t1.v), max(t2.c)
+  from gstest1 t1 join gstest2 t2 using (a,b)
+ group by grouping sets ((a, b), ());
+
+-- simple rescan tests
+
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by rollup (a,b);
+
+select *
+  from (values (1),(2)) v(x),
+       lateral (select a, b, sum(v.x) from gstest_data(v.x) group by rollup (a,b)) s;
+
+-- min max optimisation should still work with GROUP BY ()
+explain (costs off)
+  select min(unique1) from tenk1 GROUP BY ();
+
+-- Views with GROUPING SET queries
+CREATE VIEW gstest_view AS select a, b, grouping(a,b), sum(c), count(*), max(c)
+  from gstest2 group by rollup ((a,b,c),(c,d));
+
+select pg_get_viewdef('gstest_view'::regclass, true);
+
+-- Nested queries with 3 or more levels of nesting
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+select(select (select grouping(e,f) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+select(select (select grouping(c) from (values (1)) v2(c) GROUP BY c) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+
+-- Combinations of operations
+select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
+
+-- Agg level check. This query should error out.
+select (select grouping(a,b) from gstest2) from gstest2 group by a,b;
+
+-- Nested queries
+select a, b, sum(c), count(*) from gstest2 group by grouping sets (rollup(a,b),a);
+
+-- HAVING queries
+select ten, sum(distinct four) from onek a
+group by grouping sets((ten,four),(ten))
+having exists (select 1 from onek b where sum(distinct a.four) = b.four);
+
+-- FILTER queries
+select ten, sum(distinct four) filter (where four::text ~ '123') from onek a
+group by rollup(ten);
+
+-- end
Attachment: gsp2.patch (text/x-patch)
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 479ae7e..aff1a92 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -960,6 +960,10 @@ ExplainNode(PlanState *planstate, List *ancestors,
 					pname = "GroupAggregate";
 					strategy = "Sorted";
 					break;
+				case AGG_CHAINED:
+					pname = "ChainAggregate";
+					strategy = "Chained";
+					break;
 				case AGG_HASHED:
 					pname = "HashAggregate";
 					strategy = "Hashed";
diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c
index ad8a3d0..0ac2e70 100644
--- a/src/backend/executor/execUtils.c
+++ b/src/backend/executor/execUtils.c
@@ -151,6 +151,7 @@ CreateExecutorState(void)
 	estate->es_epqTupleSet = NULL;
 	estate->es_epqScanDone = NULL;
 
+	estate->agg_chain_head = NULL;
 	/*
 	 * Return the executor state structure
 	 */
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index beecd36..48567b9 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -326,6 +326,7 @@ static void build_hash_table(AggState *aggstate);
 static AggHashEntry lookup_hash_entry(AggState *aggstate,
 				  TupleTableSlot *inputslot);
 static TupleTableSlot *agg_retrieve_direct(AggState *aggstate);
+static TupleTableSlot *agg_retrieve_chained(AggState *aggstate);
 static void agg_fill_hash_table(AggState *aggstate);
 static TupleTableSlot *agg_retrieve_hash_table(AggState *aggstate);
 static Datum GetAggInitVal(Datum textInitVal, Oid transtype);
@@ -1119,6 +1120,8 @@ lookup_hash_entry(AggState *aggstate, TupleTableSlot *inputslot)
 TupleTableSlot *
 ExecAgg(AggState *node)
 {
+	TupleTableSlot *result;
+
 	/*
 	 * Check to see if we're still projecting out tuples from a previous agg
 	 * tuple (because there is a function-returning-set in the projection
@@ -1126,7 +1129,6 @@ ExecAgg(AggState *node)
 	 */
 	if (node->ss.ps.ps_TupFromTlist)
 	{
-		TupleTableSlot *result;
 		ExprDoneCond isDone;
 
 		result = ExecProject(node->ss.ps.ps_ProjInfo, &isDone);
@@ -1137,22 +1139,45 @@ ExecAgg(AggState *node)
 	}
 
 	/*
-	 * Exit if nothing left to do.  (We must do the ps_TupFromTlist check
-	 * first, because in some cases agg_done gets set before we emit the final
-	 * aggregate tuple, and we have to finish running SRFs for it.)
+	 * We must do the ps_TupFromTlist check before checking agg_done, because
+	 * in some cases agg_done gets set before we emit the final aggregate
+	 * tuple, and we have to finish running SRFs for it.
 	 */
-	if (node->agg_done)
-		return NULL;
 
-	/* Dispatch based on strategy */
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+	if (!node->agg_done)
 	{
-		if (!node->table_filled)
-			agg_fill_hash_table(node);
-		return agg_retrieve_hash_table(node);
+		/* Dispatch based on strategy */
+		switch (((Agg *) node->ss.ps.plan)->aggstrategy)
+		{
+			case AGG_HASHED:
+				if (!node->table_filled)
+					agg_fill_hash_table(node);
+				result = agg_retrieve_hash_table(node);
+				break;
+			case AGG_CHAINED:
+				result = agg_retrieve_chained(node);
+				break;
+			default:
+				result = agg_retrieve_direct(node);
+				break;
+		}
+
+		if (!TupIsNull(result))
+			return result;
 	}
-	else
-		return agg_retrieve_direct(node);
+
+	if (!node->chain_done)
+	{
+		Assert(node->chain_tuplestore);
+		result = node->ss.ps.ps_ResultTupleSlot;
+		ExecClearTuple(result);
+		if (tuplestore_gettupleslot(node->chain_tuplestore,
+									true, false, result))
+			return result;
+		node->chain_done = true;
+	}
+
+	return NULL;
 }
 
 /*
@@ -1473,6 +1498,161 @@ agg_retrieve_direct(AggState *aggstate)
 	return NULL;
 }
 
+
+/*
+ * ExecAgg for chained case (pullthrough mode)
+ */
+static TupleTableSlot *
+agg_retrieve_chained(AggState *aggstate)
+{
+	Agg		   *node = (Agg *) aggstate->ss.ps.plan;
+	ExprContext *econtext = aggstate->ss.ps.ps_ExprContext;
+	ExprContext *tmpcontext = aggstate->tmpcontext;
+	Datum	   *aggvalues = econtext->ecxt_aggvalues;
+	bool	   *aggnulls = econtext->ecxt_aggnulls;
+	AggStatePerAgg peragg = aggstate->peragg;
+	AggStatePerGroup pergroup = aggstate->pergroup;
+	TupleTableSlot *outerslot;
+	TupleTableSlot *firstSlot = aggstate->ss.ss_ScanTupleSlot;
+	int			   aggno;
+	int            numGroupingSets = Max(aggstate->numsets, 1);
+	int            currentSet = 0;
+
+	/*
+	 * The invariants here are:
+	 *
+	 *  - when called, we've already projected every result that
+	 * might have been generated by previous rows, and if this is not
+	 * the first row, then grp_firsttuple has the representative input
+	 * row.
+	 *
+	 *  - we must pull the outer plan exactly once and return that tuple. If
+	 * the outer plan ends, we project whatever needs projecting.
+	 */
+
+	outerslot = ExecProcNode(outerPlanState(aggstate));
+
+	/*
+	 * If this is the first row and it's empty, nothing to do.
+	 */
+
+	if (TupIsNull(firstSlot) && TupIsNull(outerslot))
+	{
+		aggstate->agg_done = true;
+		return outerslot;
+	}
+
+	/*
+	 * See if we need to project anything. (We don't need to worry about
+	 * grouping sets of size 0; the planner doesn't give us those.)
+	 */
+
+	econtext->ecxt_outertuple = firstSlot;
+
+	while (!TupIsNull(firstSlot)
+		   && (TupIsNull(outerslot)
+			   || !execTuplesMatch(firstSlot,
+								   outerslot,
+								   aggstate->gset_lengths[currentSet],
+								   node->grpColIdx,
+								   aggstate->eqfunctions,
+								   tmpcontext->ecxt_per_tuple_memory)))
+	{
+		aggstate->current_set = aggstate->projected_set = currentSet;
+
+		for (aggno = 0; aggno < aggstate->numaggs; aggno++)
+		{
+			AggStatePerAgg peraggstate = &peragg[aggno];
+			AggStatePerGroup pergroupstate;
+
+			pergroupstate = &pergroup[aggno + (currentSet * (aggstate->numaggs))];
+
+			if (peraggstate->numSortCols > 0)
+			{
+				if (peraggstate->numInputs == 1)
+					process_ordered_aggregate_single(aggstate,
+													 peraggstate,
+													 pergroupstate);
+				else
+					process_ordered_aggregate_multi(aggstate,
+													peraggstate,
+													pergroupstate);
+			}
+
+			finalize_aggregate(aggstate, peraggstate, pergroupstate,
+							   &aggvalues[aggno], &aggnulls[aggno]);
+		}
+
+		econtext->grouped_cols = aggstate->grouped_cols[currentSet];
+
+		/*
+		 * Check the qual (HAVING clause); if the group does not match, ignore
+		 * it.
+		 */
+		if (ExecQual(aggstate->ss.ps.qual, econtext, false))
+		{
+			/*
+			 * Form a projection tuple using the aggregate results
+			 * and the representative input tuple.
+			 */
+			TupleTableSlot *result;
+			ExprDoneCond isDone;
+
+			do
+			{
+				result = ExecProject(aggstate->ss.ps.ps_ProjInfo, &isDone);
+
+				if (isDone != ExprEndResult)
+				{
+					tuplestore_puttupleslot(aggstate->chain_tuplestore,
+											result);
+				}
+			}
+			while (isDone == ExprMultipleResult);
+		}
+		else
+			InstrCountFiltered1(aggstate, 1);
+
+		ReScanExprContext(tmpcontext);
+		ReScanExprContext(econtext);
+		ReScanExprContext(aggstate->aggcontext[currentSet]);
+		MemoryContextDeleteChildren(aggstate->aggcontext[currentSet]->ecxt_per_tuple_memory);
+		if (++currentSet >= numGroupingSets)
+			break;
+	}
+
+	if (TupIsNull(outerslot))
+	{
+		aggstate->agg_done = true;
+		return NULL;
+	}
+
+	/*
+	 * If this is the first tuple, store it and initialize everything.
+	 * Otherwise re-init any aggregates we projected above.
+	 */
+
+	if (TupIsNull(firstSlot))
+	{
+		ExecCopySlot(firstSlot, outerslot);
+		initialize_aggregates(aggstate, peragg, pergroup, numGroupingSets);
+	}
+	else if (currentSet > 0)
+	{
+		ExecCopySlot(firstSlot, outerslot);
+		initialize_aggregates(aggstate, peragg, pergroup, currentSet);
+	}
+
+	tmpcontext->ecxt_outertuple = outerslot;
+
+	advance_aggregates(aggstate, pergroup);
+
+	/* Reset per-input-tuple context after each tuple */
+	ResetExprContext(tmpcontext);
+
+	return outerslot;
+}
+
 /*
  * ExecAgg for hashed case: phase 1, read input and build hash table
  */
@@ -1640,6 +1820,7 @@ AggState *
 ExecInitAgg(Agg *node, EState *estate, int eflags)
 {
 	AggState   *aggstate;
+	AggState   *save_chain_head = NULL;
 	AggStatePerAgg peragg;
 	Plan	   *outerPlan;
 	ExprContext *econtext;
@@ -1672,9 +1853,14 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	aggstate->curperagg = NULL;
 	aggstate->agg_done = false;
 	aggstate->input_done = false;
+	aggstate->chain_done = true;
 	aggstate->pergroup = NULL;
 	aggstate->grp_firstTuple = NULL;
 	aggstate->hashtable = NULL;
+	aggstate->chain_depth = 0;
+	aggstate->chain_rescan = 0;
+	aggstate->chain_head = NULL;
+	aggstate->chain_tuplestore = NULL;
 
 	if (node->groupingSets)
 	{
@@ -1734,6 +1920,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	ExecInitResultTupleSlot(estate, &aggstate->ss.ps);
 	aggstate->hashslot = ExecInitExtraTupleSlot(estate);
 
+
 	/*
 	 * initialize child expressions
 	 *
@@ -1743,12 +1930,40 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	 * that is true, we don't need to worry about evaluating the aggs in any
 	 * particular order.
 	 */
-	aggstate->ss.ps.targetlist = (List *)
-		ExecInitExpr((Expr *) node->plan.targetlist,
-					 (PlanState *) aggstate);
-	aggstate->ss.ps.qual = (List *)
-		ExecInitExpr((Expr *) node->plan.qual,
-					 (PlanState *) aggstate);
+	if (node->aggstrategy == AGG_CHAINED)
+	{
+		Assert(estate->agg_chain_head);
+
+		aggstate->chain_head = estate->agg_chain_head;
+		aggstate->chain_head->chain_depth++;
+
+		/*
+		 * Snarf the real targetlist and qual from the chain head node
+		 */
+		aggstate->ss.ps.targetlist = (List *)
+			ExecInitExpr((Expr *) aggstate->chain_head->ss.ps.plan->targetlist,
+						 (PlanState *) aggstate);
+		aggstate->ss.ps.qual = (List *)
+			ExecInitExpr((Expr *) aggstate->chain_head->ss.ps.plan->qual,
+						 (PlanState *) aggstate);
+	}
+	else
+	{
+		aggstate->ss.ps.targetlist = (List *)
+			ExecInitExpr((Expr *) node->plan.targetlist,
+						 (PlanState *) aggstate);
+		aggstate->ss.ps.qual = (List *)
+			ExecInitExpr((Expr *) node->plan.qual,
+						 (PlanState *) aggstate);
+	}
+
+	if (node->chain_head)
+	{
+		save_chain_head = estate->agg_chain_head;
+		estate->agg_chain_head = aggstate;
+		aggstate->chain_tuplestore = tuplestore_begin_heap(false, false, work_mem);
+		aggstate->chain_done = false;
+	}
 
 	/*
 	 * initialize child nodes
@@ -1761,6 +1976,11 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	outerPlan = outerPlan(node);
 	outerPlanState(aggstate) = ExecInitNode(outerPlan, estate, eflags);
 
+	if (node->chain_head)
+	{
+		estate->agg_chain_head = save_chain_head;
+	}
+
 	/*
 	 * initialize source tuple type.
 	 */
@@ -1769,8 +1989,35 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	/*
 	 * Initialize result tuple type and projection info.
 	 */
-	ExecAssignResultTypeFromTL(&aggstate->ss.ps);
-	ExecAssignProjectionInfo(&aggstate->ss.ps, NULL);
+	if (node->aggstrategy == AGG_CHAINED)
+	{
+		PlanState  *head_ps = &aggstate->chain_head->ss.ps;
+		bool		hasoid;
+
+		/*
+		 * We must calculate this the same way that the chain head does,
+		 * regardless of intermediate nodes, for consistency
+		 */
+		if (!ExecContextForcesOids(head_ps, &hasoid))
+			hasoid = false;
+
+		ExecAssignResultType(&aggstate->ss.ps, ExecGetScanType(&aggstate->ss));
+		ExecSetSlotDescriptor(aggstate->hashslot,
+							  ExecTypeFromTL(head_ps->plan->targetlist, hasoid));
+		aggstate->ss.ps.ps_ProjInfo =
+			ExecBuildProjectionInfo(aggstate->ss.ps.targetlist,
+									aggstate->ss.ps.ps_ExprContext,
+									aggstate->hashslot,
+									NULL);
+
+		aggstate->chain_tuplestore = aggstate->chain_head->chain_tuplestore;
+		Assert(aggstate->chain_tuplestore);
+	}
+	else
+	{
+		ExecAssignResultTypeFromTL(&aggstate->ss.ps);
+		ExecAssignProjectionInfo(&aggstate->ss.ps, NULL);
+	}
 
 	aggstate->ss.ps.ps_TupFromTlist = false;
 
@@ -2225,6 +2472,9 @@ ExecEndAgg(AggState *node)
 	for (i = 0; i < numGroupingSets; ++i)
 		ReScanExprContext(node->aggcontext[i]);
 
+	if (node->chain_tuplestore && !node->chain_head)
+		tuplestore_end(node->chain_tuplestore);
+
 	/*
 	 * We don't actually free any ExprContexts here (see comment in
 	 * ExecFreeExprContext), just unlinking the output one from the plan node
@@ -2339,11 +2589,54 @@ ExecReScanAgg(AggState *node)
 	}
 
 	/*
-	 * if chgParam of subnode is not null then plan will be re-scanned by
-	 * first ExecProcNode.
+	 * If we're in a chain, let the chain head know whether we
+	 * rescanned. (This count is meaningless if the rescan happens as a
+	 * result of chgParam, but the chain head only consults it when
+	 * rescanning explicitly, i.e. when chgParam is empty.)
+	 */
+
+	if (aggnode->aggstrategy == AGG_CHAINED)
+		node->chain_head->chain_rescan++;
+
+	/*
+	 * If we're a chain head, we reset the tuplestore if parameters changed,
+	 * and let subplans repopulate it.
+	 *
+	 * If we're a chain head and the subplan parameters did NOT change, then
+	 * whether we need to reset the tuplestore depends on whether anything
+	 * (specifically the Sort nodes) protects the child ChainAggs from rescan.
+	 * Since this is hard to know in advance, we have the ChainAggs signal us
+	 * as to whether the reset is needed. (We assume that either all children
+	 * in the chain are protected or none are, since all Sort nodes in the
+	 * chain should have the same flags. If this changes, it would probably be
+	 * necessary to add a signalling param to force child rescan.)
 	 */
-	if (node->ss.ps.lefttree->chgParam == NULL)
+	if (aggnode->chain_head)
+	{
+		if (node->ss.ps.lefttree->chgParam)
+			tuplestore_clear(node->chain_tuplestore);
+		else
+		{
+			node->chain_rescan = 0;
+
+			ExecReScan(node->ss.ps.lefttree);
+
+			if (node->chain_rescan == node->chain_depth)
+				tuplestore_clear(node->chain_tuplestore);
+			else if (node->chain_rescan == 0)
+				tuplestore_rescan(node->chain_tuplestore);
+			else
+				elog(ERROR, "chained aggregate rescan depth error");
+		}
+		node->chain_done = false;
+	}
+	else if (node->ss.ps.lefttree->chgParam == NULL)
+	{
 		ExecReScan(node->ss.ps.lefttree);
+	}
 }
 
 
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 8ce6411..612d611 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -772,6 +772,7 @@ _copyAgg(const Agg *from)
 	CopyPlanFields((const Plan *) from, (Plan *) newnode);
 
 	COPY_SCALAR_FIELD(aggstrategy);
+	COPY_SCALAR_FIELD(chain_head);
 	COPY_SCALAR_FIELD(numCols);
 	if (from->numCols > 0)
 	{
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 6e4efb4..279d8b9 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -632,6 +632,7 @@ _outAgg(StringInfo str, const Agg *node)
 	_outPlanInfo(str, (const Plan *) node);
 
 	WRITE_ENUM_FIELD(aggstrategy, AggStrategy);
+	WRITE_BOOL_FIELD(chain_head);
 	WRITE_INT_FIELD(numCols);
 
 	appendStringInfoString(str, " :grpColIdx");
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index 1a47f0f..96ea58f 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -1016,6 +1016,7 @@ create_unique_plan(PlannerInfo *root, UniquePath *best_path)
 								 groupColIdx,
 								 groupOperators,
 								 NIL,
+								 false,
 								 numGroups,
 								 subplan);
 	}
@@ -4266,7 +4267,7 @@ Agg *
 make_agg(PlannerInfo *root, List *tlist, List *qual,
 		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
 		 int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators,
-		 List *groupingSets,
+		 List *groupingSets, bool chain_head,
 		 long numGroups,
 		 Plan *lefttree)
 {
@@ -4276,6 +4277,7 @@ make_agg(PlannerInfo *root, List *tlist, List *qual,
 	QualCost	qual_cost;
 
 	node->aggstrategy = aggstrategy;
+	node->chain_head = chain_head;
 	node->numCols = numGroupCols;
 	node->grpColIdx = grpColIdx;
 	node->grpOperators = grpOperators;
@@ -4320,8 +4322,21 @@ make_agg(PlannerInfo *root, List *tlist, List *qual,
 	}
 	add_tlist_costs_to_plan(root, plan, tlist);
 
-	plan->qual = qual;
-	plan->targetlist = tlist;
+	if (aggstrategy == AGG_CHAINED)
+	{
+		Assert(!chain_head);
+		plan->plan_rows = lefttree->plan_rows;
+		plan->plan_width = lefttree->plan_width;
+
+		/* supplied tlist is ignored, this is dummy */
+		plan->targetlist = lefttree->targetlist;
+		plan->qual = NULL;
+	}
+	else
+	{
+		plan->qual = qual;
+		plan->targetlist = tlist;
+	}
 	plan->lefttree = lefttree;
 	plan->righttree = NULL;
 
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index f53cc0a..99cdea6 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -16,6 +16,7 @@
 #include "postgres.h"
 
 #include <limits.h>
+#include <math.h>
 
 #include "access/htup_details.h"
 #include "executor/executor.h"
@@ -67,6 +68,7 @@ typedef struct
 {
 	List	   *tlist;			/* preprocessed query targetlist */
 	List	   *activeWindows;	/* active windows, if any */
+	List	   *groupClause;	/* overrides parse->groupClause */
 } standard_qp_extra;
 
 /* Local functions */
@@ -80,7 +82,8 @@ static double preprocess_limit(PlannerInfo *root,
 				 int64 *offset_est, int64 *count_est);
 static bool limit_needed(Query *parse);
 static List *preprocess_groupclause(PlannerInfo *root, List *force);
-static List *extract_rollup_sets(List *groupingSets, List *sortclause, List **remainder);
+static List *extract_rollup_sets(List *groupingSets);
+static List *reorder_grouping_sets(List *groupingSets, List *sortclause);
 static void standard_qp_callback(PlannerInfo *root, void *extra);
 static bool choose_hashed_grouping(PlannerInfo *root,
 					   double tuple_fraction, double limit_tuples,
@@ -1180,11 +1183,6 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		List	   *sub_tlist;
 		AttrNumber *groupColIdx = NULL;
 		bool		need_tlist_eval = true;
-		standard_qp_extra qp_extra;
-		RelOptInfo *final_rel;
-		Path	   *cheapest_path;
-		Path	   *sorted_path;
-		Path	   *best_path;
 		long		numGroups = 0;
 		AggClauseCosts agg_costs;
 		int			numGroupCols;
@@ -1194,7 +1192,14 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		WindowFuncLists *wflists = NULL;
 		List	   *activeWindows = NIL;
 		int			maxref = 0;
-		int		   *refmap = NULL;
+		List	   *refmaps = NIL;
+		List	   *rollup_lists = NIL;
+		List	   *rollup_groupclauses = NIL;
+		standard_qp_extra qp_extra;
+		RelOptInfo *final_rel;
+		Path	   *cheapest_path;
+		Path	   *sorted_path;
+		Path	   *best_path;
 
 		MemSet(&agg_costs, 0, sizeof(AggClauseCosts));
 
@@ -1205,33 +1210,9 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		if (parse->groupingSets)
 			parse->groupingSets = expand_grouping_sets(parse->groupingSets, -1);
 
-		if (parse->groupingSets)
+		if (parse->groupClause)
 		{
 			ListCell   *lc;
-			ListCell   *lc2;
-			int			ref = 0;
-			List	   *remaining_sets = NIL;
-			List	   *usable_sets = extract_rollup_sets(parse->groupingSets,
-														  parse->sortClause,
-														  &remaining_sets);
-
-			/*
-			 * TODO - if the grouping set list can't be handled as one rollup...
-			 */
-
-			if (remaining_sets != NIL)
-				elog(ERROR, "not implemented yet");
-
-			parse->groupingSets = usable_sets;
-
-			if (parse->groupClause)
-				preprocess_groupclause(root, linitial(parse->groupingSets));
-
-			/*
-			 * Now that we've pinned down an order for the groupClause for this
-			 * list of grouping sets, remap the entries in the grouping sets
-			 * from sortgrouprefs to plain indices into the groupClause.
-			 */
 
 			foreach(lc, parse->groupClause)
 			{
@@ -1239,29 +1220,59 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 				if (gc->tleSortGroupRef > maxref)
 					maxref = gc->tleSortGroupRef;
 			}
+		}
 
-			refmap = palloc0(sizeof(int) * (maxref + 1));
+		if (parse->groupingSets)
+		{
+			ListCell   *lc;
+			ListCell   *lc2;
+			ListCell   *lc_set;
+			List	   *sets = extract_rollup_sets(parse->groupingSets);
 
-			foreach(lc, parse->groupClause)
+			foreach(lc_set, sets)
 			{
-				SortGroupClause *gc = lfirst(lc);
-				refmap[gc->tleSortGroupRef] = ++ref;
-			}
+				List   *current_sets = reorder_grouping_sets(lfirst(lc_set),
+													(list_length(sets) == 1
+													 ? parse->sortClause
+													 : NIL));
+				List   *groupclause = preprocess_groupclause(root, linitial(current_sets));
+				int		ref = 0;
+				int	   *refmap;
 
-			foreach(lc, usable_sets)
-			{
-				foreach(lc2, (List *) lfirst(lc))
+				/*
+				 * Now that we've pinned down an order for the groupClause for this
+				 * list of grouping sets, remap the entries in the grouping sets
+				 * from sortgrouprefs to plain indices into the groupClause.
+				 */
+
+				refmap = palloc0(sizeof(int) * (maxref + 1));
+
+				foreach(lc, groupclause)
 				{
-					Assert(refmap[lfirst_int(lc2)] > 0);
-					lfirst_int(lc2) = refmap[lfirst_int(lc2)] - 1;
+					SortGroupClause *gc = lfirst(lc);
+					refmap[gc->tleSortGroupRef] = ++ref;
+				}
+
+				foreach(lc, current_sets)
+				{
+					foreach(lc2, (List *) lfirst(lc))
+					{
+						Assert(refmap[lfirst_int(lc2)] > 0);
+						lfirst_int(lc2) = refmap[lfirst_int(lc2)] - 1;
+					}
 				}
+
+				rollup_lists = lcons(current_sets, rollup_lists);
+				rollup_groupclauses = lcons(groupclause, rollup_groupclauses);
+				refmaps = lcons(refmap, refmaps);
 			}
 		}
 		else
 		{
 			/* Preprocess GROUP BY clause, if any */
 			if (parse->groupClause)
-				preprocess_groupclause(root, NIL);
+				parse->groupClause = preprocess_groupclause(root, NIL);
+			rollup_groupclauses = list_make1(parse->groupClause);
 		}
 
 		numGroupCols = list_length(parse->groupClause);
@@ -1325,9 +1336,6 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 			preprocess_minmax_aggregates(root, tlist);
 		}
 
-		if (refmap)
-			pfree(refmap);
-
 		/* Make tuple_fraction accessible to lower-level routines */
 		root->tuple_fraction = tuple_fraction;
 
@@ -1350,6 +1358,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		/* Set up data needed by standard_qp_callback */
 		qp_extra.tlist = tlist;
 		qp_extra.activeWindows = activeWindows;
+		qp_extra.groupClause = linitial(rollup_groupclauses);
 
 		/*
 		 * Generate the best unsorted and presorted paths for this Query (but
@@ -1376,6 +1385,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		 * to describe the fraction of the underlying un-aggregated tuples
 		 * that will be fetched.
 		 */
+
 		dNumGroups = 1;			/* in case not grouping */
 
 		if (parse->groupClause)
@@ -1411,6 +1421,9 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 			if (tuple_fraction >= 1.0)
 				tuple_fraction /= dNumGroups;
 
+			if (list_length(rollup_lists) > 1)
+				tuple_fraction = 0.0;
+
 			/*
 			 * If both GROUP BY and ORDER BY are specified, we will need two
 			 * levels of sort --- and, therefore, certainly need to read all
@@ -1434,6 +1447,8 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 			 * set to 1).
 			 */
 			tuple_fraction = 0.0;
+			if (parse->groupingSets)
+				dNumGroups = list_length(parse->groupingSets);
 		}
 		else if (parse->distinctClause)
 		{
@@ -1614,7 +1629,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 
 			/* Detect if we'll need an explicit sort for grouping */
 			if (parse->groupClause && !use_hashed_grouping &&
-			  !pathkeys_contained_in(root->group_pathkeys, current_pathkeys))
+				!pathkeys_contained_in(root->group_pathkeys, current_pathkeys))
 			{
 				need_sort_for_grouping = true;
 
@@ -1689,8 +1704,9 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 												&agg_costs,
 												numGroupCols,
 												groupColIdx,
-									extract_grouping_ops(parse->groupClause),
+												extract_grouping_ops(parse->groupClause),
 												NIL,
+												false,
 												numGroups,
 												result_plan);
 				/* Hashed aggregation produces randomly-ordered results */
@@ -1698,45 +1714,94 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 			}
 			else if (parse->hasAggs || (parse->groupingSets && parse->groupClause))
 			{
-				/* Plain aggregate plan --- sort if needed */
-				AggStrategy aggstrategy;
+				bool		is_chained = false;
 
-				if (parse->groupClause)
+				/*
+				 * If we need multiple grouping nodes, start stacking them up;
+				 * all except the last are chained.
+				 */
+
+				do
 				{
-					if (need_sort_for_grouping)
+					List	   *groupClause = linitial(rollup_groupclauses);
+					List	   *gsets = rollup_lists ? linitial(rollup_lists) : NIL;
+					int		   *refmap = refmaps ? linitial(refmaps) : NULL;
+					AttrNumber *new_grpColIdx = groupColIdx;
+					ListCell   *lc;
+					int			i;
+					AggStrategy aggstrategy = AGG_CHAINED;
+
+					if (groupClause)
 					{
-						result_plan = (Plan *)
-							make_sort_from_groupcols(root,
-													 parse->groupClause,
-													 groupColIdx,
-													 result_plan);
-						current_pathkeys = root->group_pathkeys;
+						/* need to remap groupColIdx */
+
+						if (gsets)
+						{
+							Assert(refmap);
+
+							new_grpColIdx = palloc0(sizeof(AttrNumber) * list_length(linitial(gsets)));
+
+							i = 0;
+							foreach(lc, parse->groupClause)
+							{
+								int j = refmap[((SortGroupClause *)lfirst(lc))->tleSortGroupRef];
+								if (j > 0)
+									new_grpColIdx[j - 1] = groupColIdx[i];
+								++i;
+							}
+						}
+
+						if (need_sort_for_grouping)
+						{
+							result_plan = (Plan *)
+								make_sort_from_groupcols(root,
+														 groupClause,
+														 new_grpColIdx,
+														 result_plan);
+						}
+						else
+							need_sort_for_grouping = true;
+
+						if (list_length(rollup_groupclauses) == 1)
+						{
+							aggstrategy = AGG_SORTED;
+							if (!is_chained)
+								current_pathkeys = root->group_pathkeys;
+						}
+						else
+							current_pathkeys = NIL;
+					}
+					else
+					{
+						aggstrategy = AGG_PLAIN;
+						current_pathkeys = NIL;
 					}
-					aggstrategy = AGG_SORTED;
 
-					/*
-					 * The AGG node will not change the sort ordering of its
-					 * groups, so current_pathkeys describes the result too.
-					 */
-				}
-				else
-				{
-					aggstrategy = AGG_PLAIN;
-					/* Result will have no sort order */
-					current_pathkeys = NIL;
+					result_plan = (Plan *) make_agg(root,
+													tlist,
+													(List *) parse->havingQual,
+													aggstrategy,
+													&agg_costs,
+													gsets ? list_length(linitial(gsets)) : numGroupCols,
+													new_grpColIdx,
+													extract_grouping_ops(groupClause),
+													gsets,
+													is_chained && (aggstrategy != AGG_CHAINED),
+													numGroups,
+													result_plan);
+
+					is_chained = true;
+
+					if (refmap)
+						pfree(refmap);
+					if (rollup_lists)
+						rollup_lists = list_delete_first(rollup_lists);
+					if (refmaps)
+						refmaps = list_delete_first(refmaps);
+
+					rollup_groupclauses = list_delete_first(rollup_groupclauses);
 				}
-
-				result_plan = (Plan *) make_agg(root,
-												tlist,
-												(List *) parse->havingQual,
-												aggstrategy,
-												&agg_costs,
-												numGroupCols,
-												groupColIdx,
-									extract_grouping_ops(parse->groupClause),
-												parse->groupingSets,
-												numGroups,
-												result_plan);
+				while (rollup_groupclauses);
 			}
 			else if (parse->groupClause)
 			{
@@ -2031,6 +2096,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 													result_plan->targetlist),
 								 extract_grouping_ops(parse->distinctClause),
 											NIL,
+											false,
 											numDistinctRows,
 											result_plan);
 			/* Hashed aggregation produces randomly-ordered results */
@@ -2752,64 +2818,394 @@ preprocess_groupclause(PlannerInfo *root, List *force)
 
 
 /*
- * Extract a list of grouping sets that can be implemented using a single
- * rollup-type aggregate pass. The order of elements in each returned set is
- * modified to ensure proper prefix relationships; the sets are returned in
- * decreasing order of size. (The input must also be in descending order of
- * size.)
+ * We want to produce the minimum possible number of lists here to avoid
+ * excess sorts. Fortunately, there is an algorithm for this: the problem of
+ * finding a minimal partition of a poset into chains (which is what we need,
+ * taking the list of grouping sets as a poset ordered by set inclusion) can
+ * be mapped to the problem of finding a maximum-cardinality matching on a
+ * bipartite graph, which is solvable in polynomial time with a worst case of
+ * O(n^2.5) and usually much better. Since our N is at most 4096, we need not
+ * consider fallbacks to heuristic or approximate methods.
+ * (Planning time for a 12-d cube is under half a second on my modest system
+ * even with optimization off and assertions on.)
  *
- * If we're passed in a sortclause, we follow its order of columns to the
- * extent possible, to minimize the chance that we add unnecessary sorts.
+ * We use the Hopcroft-Karp algorithm for the graph matching; it seems to work
+ * well enough for our purposes.
+ *
+ * This implementation uses the same indices for elements of U and V (the two
+ * halves of the graph) because in our case they are always the same size, and
+ * we always know whether an index represents a u or a v. Index 0 is reserved
+ * for the NIL node.
+ */
+
+struct hk_state
+{
+	int			graph_size;		/* size of half the graph plus NIL node */
+	int			matching;
+	short	  **adjacency;		/* adjacency[u] = [n, v1,v2,v3,...,vn] */
+	short	   *pair_uv;		/* pair_uv[u] -> v */
+	short	   *pair_vu;		/* pair_vu[v] -> u */
+	float	   *distance;		/* distance[u], float so we can have +inf */
+	short	   *queue;			/* queue storage for breadth search */
+};
+
+static bool
+hk_breadth_search(struct hk_state *state)
+{
+	int			gsize = state->graph_size;
+	short	   *queue = state->queue;
+	float	   *distance = state->distance;
+	int			qhead = 0;		/* we never enqueue any node more than once */
+	int			qtail = 0;		/* so don't have to worry about wrapping */
+	int			u;
+
+	distance[0] = INFINITY;
+
+	for (u = 1; u < gsize; ++u)
+	{
+		if (state->pair_uv[u] == 0)
+		{
+			distance[u] = 0;
+			queue[qhead++] = u;
+		}
+		else
+			distance[u] = INFINITY;
+	}
+
+	while (qtail < qhead)
+	{
+		u = queue[qtail++];
+
+		if (distance[u] < distance[0])
+		{
+			short  *u_adj = state->adjacency[u];
+			int		i = u_adj ? u_adj[0] : 0;
+
+			for (; i > 0; --i)
+			{
+				int	u_next = state->pair_vu[u_adj[i]];
+
+				if (isinf(distance[u_next]))
+				{
+					distance[u_next] = 1 + distance[u];
+					queue[qhead++] = u_next;
+					Assert(qhead <= gsize+1);
+				}
+			}
+		}
+	}
+
+	return !isinf(distance[0]);
+}
+
+static bool
+hk_depth_search(struct hk_state *state, int u, int depth)
+{
+	float	   *distance = state->distance;
+	short	   *pair_uv = state->pair_uv;
+	short	   *pair_vu = state->pair_vu;
+	short	   *u_adj = state->adjacency[u];
+	int			i = u_adj ? u_adj[0] : 0;
+
+	if (u == 0)
+		return true;
+
+	if ((depth % 8) == 0)
+		check_stack_depth();
+
+	for (; i > 0; --i)
+	{
+		int		v = u_adj[i];
+
+		if (distance[pair_vu[v]] == distance[u] + 1)
+		{
+			if (hk_depth_search(state, pair_vu[v], depth+1))
+			{
+				pair_vu[v] = u;
+				pair_uv[u] = v;
+				return true;
+			}
+		}
+	}
+
+	distance[u] = INFINITY;
+	return false;
+}
+
+static struct hk_state *
+hk_match(int graph_size, short **adjacency)
+{
+	struct hk_state *state = palloc(sizeof(struct hk_state));
+
+	state->graph_size = graph_size;
+	state->matching = 0;
+	state->adjacency = adjacency;
+	state->pair_uv = palloc0(graph_size * sizeof(short));
+	state->pair_vu = palloc0(graph_size * sizeof(short));
+	state->distance = palloc(graph_size * sizeof(float));
+	state->queue = palloc((graph_size + 2) * sizeof(short));
+
+	while (hk_breadth_search(state))
+	{
+		int		u;
+
+		for (u = 1; u < graph_size; ++u)
+			if (state->pair_uv[u] == 0)
+				if (hk_depth_search(state, u, 1))
+					state->matching++;
+
+		CHECK_FOR_INTERRUPTS();		/* just in case */
+	}
+
+	return state;
+}
+
+static void
+hk_free(struct hk_state *state)
+{
+	/* adjacency matrix is treated as owned by the caller */
+	pfree(state->pair_uv);
+	pfree(state->pair_vu);
+	pfree(state->distance);
+	pfree(state->queue);
+	pfree(state);
+}
+
+/*
+ * Extract lists of grouping sets that can be implemented using a single
+ * rollup-type aggregate pass each. Returns a list of lists of grouping sets.
  *
- * Sets that can't be accomodated within a rollup that includes the first
- * (and therefore largest) grouping set in the input are added to the
- * remainder list.
+ * Input must be sorted with smallest sets first. Result has each sublist
+ * sorted with smallest sets first.
  */
 
 static List *
-extract_rollup_sets(List *groupingSets, List *sortclause, List **remainder)
+extract_rollup_sets(List *groupingSets)
 {
-	ListCell   *lc;
-	ListCell   *lc2;
-	List	   *previous = linitial(groupingSets);
-	List	   *tmp_result = list_make1(previous);
+	int			num_sets_raw = list_length(groupingSets);
+	int			num_empty = 0;
+	int			num_sets = 0;		/* distinct sets */
+	int			num_chains = 0;
 	List	   *result = NIL;
+	List	  **results;
+	List	  **orig_sets;
+	Bitmapset **set_masks;
+	int		   *chains;
+	short	  **adjacency;
+	short	   *adjacency_buf;
+	struct hk_state *state;
+	int			i;
+	int			j;
+	int			j_size;
+	ListCell   *lc1 = list_head(groupingSets);
+	ListCell   *lc;
 
-	for_each_cell(lc, lnext(list_head(groupingSets)))
+	/*
+	 * Start by stripping out empty sets.  The algorithm doesn't require this,
+	 * but the planner currently needs all empty sets to be returned in the
+	 * first list, so we strip them here and add them back after.
+	 */
+
+	while (lc1 && lfirst(lc1) == NIL)
 	{
-		List   *candidate = lfirst(lc);
-		bool	ok = true;
+		++num_empty;
+		lc1 = lnext(lc1);
+	}
+
+	/* bail out now if it turns out that all we had were empty sets. */
+
+	if (!lc1)
+		return list_make1(groupingSets);
+
+	/*
+	 * We don't strictly need to remove duplicate sets here, but if we
+	 * don't, they tend to become scattered through the result, which is
+	 * a bit confusing (and irritating if we ever decide to optimize them
+	 * out). So we remove them here and add them back after.
+	 *
+	 * For each non-duplicate set, we fill in the following:
+	 *
+	 * orig_sets[i] = list of the original set lists
+	 * set_masks[i] = bitmapset for testing inclusion
+	 * adjacency[i] = array [n, v1, v2, ... vn] of adjacency indices
+	 *
+	 * chains[i] will be the result group this set is assigned to.
+	 *
+	 * We index all of these from 1 rather than 0 because it is convenient
+	 * to leave 0 free for the NIL node in the graph algorithm.
+	 */
+
+	orig_sets = palloc0((num_sets_raw + 1) * sizeof(List*));
+	set_masks = palloc0((num_sets_raw + 1) * sizeof(Bitmapset *));
+	adjacency = palloc0((num_sets_raw + 1) * sizeof(short *));
+	adjacency_buf = palloc((num_sets_raw + 1) * sizeof(short));
+
+	j_size = 0;
+	j = 0;
+	i = 1;
+
+	for_each_cell(lc, lc1)
+	{
+		List	   *candidate = lfirst(lc);
+		Bitmapset  *candidate_set = NULL;
+		ListCell   *lc2;
+		int			dup_of = 0;
 
 		foreach(lc2, candidate)
 		{
-			int ref = lfirst_int(lc2);
-			if (!list_member_int(previous, ref))
+			candidate_set = bms_add_member(candidate_set, lfirst_int(lc2));
+		}
+
+		/* we can only be a dup if we're the same length as a previous set */
+		if (j_size == list_length(candidate))
+		{
+			int		k;
+			for (k = j; k < i; ++k)
 			{
-				ok = false;
-				break;
+				if (bms_equal(set_masks[k], candidate_set))
+				{
+					dup_of = k;
+					break;
+				}
 			}
 		}
+		else if (j_size < list_length(candidate))
+		{
+			j_size = list_length(candidate);
+			j = i;
+		}
 
-		if (ok)
+		if (dup_of > 0)
+		{
+			orig_sets[dup_of] = lappend(orig_sets[dup_of], candidate);
+			bms_free(candidate_set);
+		}
+		else
 		{
-			tmp_result = lcons(candidate, tmp_result);
-			previous = candidate;
+			int		k;
+			int		n_adj = 0;
+
+			orig_sets[i] = list_make1(candidate);
+			set_masks[i] = candidate_set;
+
+			/* fill in adjacency list; no need to compare equal-size sets */
+
+			for (k = j - 1; k > 0; --k)
+			{
+				if (bms_is_subset(set_masks[k], candidate_set))
+					adjacency_buf[++n_adj] = k;
+			}
+
+			if (n_adj > 0)
+			{
+				adjacency_buf[0] = n_adj;
+				adjacency[i] = palloc((n_adj + 1) * sizeof(short));
+				memcpy(adjacency[i], adjacency_buf, (n_adj + 1) * sizeof(short));
+			}
+			else
+				adjacency[i] = NULL;
+
+			++i;
 		}
+	}
+
+	num_sets = i - 1;
+
+	/*
+	 * Apply the matching algorithm to do the work.
+	 */
+
+	state = hk_match(num_sets + 1, adjacency);
+
+	/*
+	 * Now, the state->pair* fields have the info we need to assign sets to
+	 * chains. Two sets (u,v) belong to the same chain if pair_uv[u] = v or
+	 * pair_vu[v] = u (both will be true, but we check both so that we can do
+	 * it in one pass).
+	 */
+
+	chains = palloc0((num_sets + 1) * sizeof(int));
+
+	for (i = 1; i <= num_sets; ++i)
+	{
+		int u = state->pair_vu[i];
+		int v = state->pair_uv[i];
+
+		if (u > 0 && u < i)
+			chains[i] = chains[u];
+		else if (v > 0 && v < i)
+			chains[i] = chains[v];
 		else
-			*remainder = lappend(*remainder, candidate);
+			chains[i] = ++num_chains;
 	}
 
+	/* build result lists. */
+
+	results = palloc0((num_chains + 1) * sizeof(List*));
+
+	for (i = 1; i <= num_sets; ++i)
+	{
+		int c = chains[i];
+
+		Assert(c > 0);
+
+		results[c] = list_concat(results[c], orig_sets[i]);
+	}
+
+	/* push any empty sets back on the first list. */
+
+	while (num_empty-- > 0)
+		results[1] = lcons(NIL, results[1]);
+
+	/* make result list */
+
+	for (i = 1; i <= num_chains; ++i)
+		result = lappend(result, results[i]);
+
 	/*
-	 * reorder the list elements so that shorter sets are strict
-	 * prefixes of longer ones, and if we ever have a choice, try
-	 * and follow the sortclause if there is one. (We're trying
-	 * here to ensure that GROUPING SETS ((a,b),(b)) ORDER BY b,a
-	 * gets implemented in one pass.)
+	 * Free all the things.
+	 *
+	 * (This is over-fussy for small sets but for large sets we could have tied
+	 * up a nontrivial amount of memory.)
 	 */
 
-	previous = NIL;
+	hk_free(state);
+	pfree(results);
+	pfree(chains);
+	for (i = 1; i <= num_sets; ++i)
+		if (adjacency[i])
+			pfree(adjacency[i]);
+	pfree(adjacency);
+	pfree(adjacency_buf);
+	pfree(orig_sets);
+	for (i = 1; i <= num_sets; ++i)
+		bms_free(set_masks[i]);
+	pfree(set_masks);
+
+	return result;
+}
+
+/*
+ * Reorder the elements of a list of grouping sets such that they have correct
+ * prefix relationships.
+ *
+ * The input must be ordered with smallest sets first; the result is returned
+ * with largest sets first.
+ *
+ * If we're passed in a sortclause, we follow its order of columns to the
+ * extent possible, to minimize the chance that we add unnecessary sorts.
+ * (We're trying here to ensure that GROUPING SETS ((a,b,c),(c)) ORDER BY c,b,a
+ * gets implemented in one pass.)
+ */
+static List *
+reorder_grouping_sets(List *groupingsets, List *sortclause)
+{
+	ListCell   *lc;
+	ListCell   *lc2;
+	List	   *previous = NIL;
+	List	   *result = NIL;
 
-	foreach(lc, tmp_result)
+	foreach(lc, groupingsets)
 	{
 		List   *candidate = lfirst(lc);
 		List   *new_elems = list_difference_int(candidate, previous);
@@ -2827,6 +3223,7 @@ extract_rollup_sets(List *groupingSets, List *sortclause, List **remainder)
 				}
 				else
 				{
+					/* diverged from the sortclause; give up on it */
 					sortclause = NIL;
 					break;
 				}
@@ -2843,7 +3240,6 @@ extract_rollup_sets(List *groupingSets, List *sortclause, List **remainder)
 	}
 
 	list_free(previous);
-	list_free(tmp_result);
 
 	return result;
 }
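As an aside for reviewers, the reorder_grouping_sets logic in the hunks above can be sketched standalone. The Python below is a hypothetical illustration only, not part of the patch (the function name and the list-of-lists representation are invented): it orders the columns of one rollup chain so that each smaller set becomes a literal prefix of the next, consuming the ORDER BY column order for as long as it stays consistent, then giving up on it.

```python
# Hypothetical sketch (not part of the patch): order the columns of a chain
# of grouping sets so each smaller set is a literal prefix of the next,
# following a sort clause's column order while it remains consistent.

def reorder_chain(chain, sortclause):
    """chain: lists of column refs, smallest set first; returns largest first."""
    result, previous = [], []
    sortclause = list(sortclause)
    for candidate in chain:
        new_elems = [c for c in candidate if c not in previous]
        # consume the sort clause while its next column is among new_elems
        while sortclause and new_elems:
            col = sortclause[0]
            if col in new_elems:
                previous.append(col)
                new_elems.remove(col)
                sortclause.pop(0)
            else:
                sortclause = []          # diverged; give up on it
        previous.extend(new_elems)
        result.insert(0, list(previous))  # largest sets end up first
    return result

# GROUPING SETS ((a,b,c),(c)) ORDER BY c,b,a -> one sorted pass:
print(reorder_chain([['c'], ['a', 'b', 'c']], ['c', 'b', 'a']))
# [['c', 'b', 'a'], ['c']]
```

Since ['c'] is a prefix of ['c', 'b', 'a'], both sets (and the ORDER BY) are satisfied by a single sort, which is the point of the comment in the patch.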
@@ -2864,11 +3260,11 @@ standard_qp_callback(PlannerInfo *root, void *extra)
 	 * sortClause is certainly sort-able, but GROUP BY and DISTINCT might not
 	 * be, in which case we just leave their pathkeys empty.
 	 */
-	if (parse->groupClause &&
-		grouping_is_sortable(parse->groupClause))
+	if (qp_extra->groupClause &&
+		grouping_is_sortable(qp_extra->groupClause))
 		root->group_pathkeys =
 			make_pathkeys_for_sortclauses(root,
-										  parse->groupClause,
+										  qp_extra->groupClause,
 										  tlist);
 	else
 		root->group_pathkeys = NIL;
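The chain decomposition performed by extract_rollup_sets above can also be illustrated outside the planner. This is a hypothetical Python sketch, not part of the patch (all names are invented): it partitions grouping sets into the minimum number of rollup chains by finding a maximum matching on the strict-subset relation, per Dilworth's theorem, but uses a simple augmenting-path matching (Kuhn's algorithm) in place of Hopcroft-Karp; the asymptotics differ, the resulting chain count does not.

```python
# Hypothetical sketch (not part of the patch): partition grouping sets into
# the minimum number of rollup chains by maximum bipartite matching on the
# strict-subset relation (Dilworth). Uses simple augmenting paths (Kuhn)
# rather than Hopcroft-Karp for brevity.

def min_chain_partition(sets):
    sets = sorted(sets, key=len)          # smallest first, as in the patch
    n = len(sets)
    # adj[u] lists the v's with sets[v] a strict subset of sets[u]
    adj = [[v for v in range(n) if sets[v] < sets[u]] for u in range(n)]
    match_vu = [-1] * n                   # match_vu[v] = u paired with v

    def augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if match_vu[v] == -1 or augment(match_vu[v], seen):
                match_vu[v] = u
                return True
        return False

    for u in range(n):
        augment(u, set())

    # invert the matching, then follow matched edges downward to build chains
    match_uv = [-1] * n
    for v, u in enumerate(match_vu):
        if u != -1:
            match_uv[u] = v
    chains, seen = [], [False] * n
    for u in range(n - 1, -1, -1):        # start from the largest sets
        if seen[u]:
            continue
        chain = []
        while u != -1 and not seen[u]:
            seen[u] = True
            chain.append(sets[u])
            u = match_uv[u]
        chains.append(chain)
    return chains

chains = min_chain_partition([frozenset(), frozenset('a'),
                              frozenset('b'), frozenset('ab')])
print(len(chains))   # 2 chains, e.g. [{a,b},{a},{}] and [{b}]
```

With GROUPING SETS ((a,b),(a),(b),()), one chain covers (a,b) → (a) → () and a second covers (b), so two sorted rollup passes suffice instead of four.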
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index 346c84d..2be5f29 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -655,8 +655,16 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
 			}
 			break;
 		case T_Agg:
-			set_upper_references(root, plan, rtoffset);
-			set_group_vars(root, (Agg *) plan);
+			if (((Agg *) plan)->aggstrategy == AGG_CHAINED)
+			{
+				/* chained agg does not evaluate tlist */
+				set_dummy_tlist_references(plan, rtoffset);
+			}
+			else
+			{
+				set_upper_references(root, plan, rtoffset);
+				set_group_vars(root, (Agg *) plan);
+			}
 			break;
 		case T_Group:
 			set_upper_references(root, plan, rtoffset);
@@ -1288,21 +1296,30 @@ fix_scan_expr_walker(Node *node, fix_scan_expr_context *context)
  *    Modify any Var references in the target list of a non-trivial
  *    (i.e. contains grouping sets) Agg node to use GroupedVar instead,
  *    which will conditionally replace them with nulls at runtime.
+ *    Also fill in the cols list of any GROUPING() node.
  */
 static void
 set_group_vars(PlannerInfo *root, Agg *agg)
 {
 	set_group_vars_context context;
-	int i;
-	Bitmapset *cols = NULL;
+	AttrNumber *groupColIdx = root->groupColIdx;
+	int			numCols = list_length(root->parse->groupClause);
+	int 		i;
+	Bitmapset  *cols = NULL;
 
 	if (!agg->groupingSets)
 		return;
 
+	if (!groupColIdx)
+	{
+		Assert(numCols == agg->numCols);
+		groupColIdx = agg->grpColIdx;
+	}
+
 	context.root = root;
 
-	for (i = 0; i < agg->numCols; ++i)
-		cols = bms_add_member(cols, agg->grpColIdx[i]);
+	for (i = 0; i < numCols; ++i)
+		cols = bms_add_member(cols, groupColIdx[i]);
 
 	context.groupedcols = cols;
 
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index e0a2ca7..e5befe3 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -79,7 +79,8 @@ static Node *process_sublinks_mutator(Node *node,
 static Bitmapset *finalize_plan(PlannerInfo *root,
 			  Plan *plan,
 			  Bitmapset *valid_params,
-			  Bitmapset *scan_params);
+			  Bitmapset *scan_params,
+			  Agg *agg_chain_head);
 static bool finalize_primnode(Node *node, finalize_primnode_context *context);
 
 
@@ -2091,7 +2092,7 @@ SS_finalize_plan(PlannerInfo *root, Plan *plan, bool attach_initplans)
 	/*
 	 * Now recurse through plan tree.
 	 */
-	(void) finalize_plan(root, plan, valid_params, NULL);
+	(void) finalize_plan(root, plan, valid_params, NULL, NULL);
 
 	bms_free(valid_params);
 
@@ -2142,7 +2143,7 @@ SS_finalize_plan(PlannerInfo *root, Plan *plan, bool attach_initplans)
  */
 static Bitmapset *
 finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
-			  Bitmapset *scan_params)
+			  Bitmapset *scan_params, Agg *agg_chain_head)
 {
 	finalize_primnode_context context;
 	int			locally_added_param;
@@ -2351,7 +2352,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2367,7 +2369,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2383,7 +2386,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2399,7 +2403,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2415,7 +2420,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2482,8 +2488,30 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 							  &context);
 			break;
 
-		case T_Hash:
 		case T_Agg:
+			{
+				Agg	   *agg = (Agg *) plan;
+
+				if (agg->aggstrategy == AGG_CHAINED)
+				{
+					Assert(agg_chain_head);
+
+					/*
+					 * Our real tlist and qual are the ones in the chain head,
+					 * not the local ones, which are dummies for passthrough.
+					 * Fortunately we can call finalize_primnode more than
+					 * once.
+					 */
+
+					finalize_primnode((Node *) agg_chain_head->plan.targetlist, &context);
+					finalize_primnode((Node *) agg_chain_head->plan.qual, &context);
+				}
+				else if (agg->chain_head)
+					agg_chain_head = agg;
+			}
+			break;
+
+		case T_Hash:
 		case T_Material:
 		case T_Sort:
 		case T_Unique:
@@ -2500,7 +2528,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 	child_params = finalize_plan(root,
 								 plan->lefttree,
 								 valid_params,
-								 scan_params);
+								 scan_params,
+								 agg_chain_head);
 	context.paramids = bms_add_members(context.paramids, child_params);
 
 	if (nestloop_params)
@@ -2509,7 +2538,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 		child_params = finalize_plan(root,
 									 plan->righttree,
 									 bms_union(nestloop_params, valid_params),
-									 scan_params);
+									 scan_params,
+									 agg_chain_head);
 		/* ... and they don't count as parameters used at my level */
 		child_params = bms_difference(child_params, nestloop_params);
 		bms_free(nestloop_params);
@@ -2520,7 +2550,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 		child_params = finalize_plan(root,
 									 plan->righttree,
 									 valid_params,
-									 scan_params);
+									 scan_params,
+									 agg_chain_head);
 	}
 	context.paramids = bms_add_members(context.paramids, child_params);
 
diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c
index 3c71d7f..ce35226 100644
--- a/src/backend/optimizer/prep/prepunion.c
+++ b/src/backend/optimizer/prep/prepunion.c
@@ -774,6 +774,7 @@ make_union_unique(SetOperationStmt *op, Plan *plan,
 													   plan->targetlist),
 								 extract_grouping_ops(groupList),
 								 NIL,
+								 false,
 								 numGroups,
 								 plan);
 		/* Hashed aggregation produces randomly-ordered results */
diff --git a/src/backend/parser/parse_agg.c b/src/backend/parser/parse_agg.c
index 1c2aca1..d8e35c8 100644
--- a/src/backend/parser/parse_agg.c
+++ b/src/backend/parser/parse_agg.c
@@ -965,11 +965,11 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 		 * The intersection will often be empty, so help things along by
 		 * seeding the intersect with the smallest set.
 		 */
-		gset_common = llast(gsets);
+		gset_common = linitial(gsets);
 
 		if (gset_common)
 		{
-			foreach(l, gsets)
+			for_each_cell(l, lnext(list_head(gsets)))
 			{
 				gset_common = list_intersection_int(gset_common, lfirst(l));
 				if (!gset_common)
@@ -1610,16 +1610,16 @@ expand_groupingset_node(GroupingSet *gs)
 }
 
 static int
-cmp_list_len_desc(const void *a, const void *b)
+cmp_list_len_asc(const void *a, const void *b)
 {
 	int la = list_length(*(List*const*)a);
 	int lb = list_length(*(List*const*)b);
-	return (la > lb) ? -1 : (la == lb) ? 0 : 1;
+	return (la > lb) ? 1 : (la == lb) ? 0 : -1;
 }
 
 /*
  * Expand a groupingSets clause to a flat list of grouping sets.
- * The returned list is sorted by length, longest sets first.
+ * The returned list is sorted by length, shortest sets first.
  *
  * This is mainly for the planner, but we use it here too to do
  * some consistency checks.
@@ -1695,7 +1695,7 @@ expand_grouping_sets(List *groupingSets, int limit)
 			*ptr++ = lfirst(lc);
 		}
 
-		qsort(buf, result_len, sizeof(List*), cmp_list_len_desc);
+		qsort(buf, result_len, sizeof(List*), cmp_list_len_asc);
 
 		result = NIL;
 		ptr = buf;
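The parse_agg.c hunks above flip expand_grouping_sets to sort the expanded sets shortest-first, which is what lets parseCheckAggregates seed the common-column intersection with linitial(gsets), the smallest set. A hypothetical standalone sketch of the expansion and ordering (not part of the patch; function names invented):

```python
# Hypothetical sketch (not part of the patch): expand CUBE/ROLLUP shorthand
# into a flat list of grouping sets, sorted shortest set first, as the
# patched expand_grouping_sets now does.
from itertools import combinations

def expand_rollup(cols):
    # ROLLUP(a,b) -> (a,b), (a), ()
    return [list(cols[:i]) for i in range(len(cols), -1, -1)]

def expand_cube(cols):
    # CUBE(a,b) -> every subset of the columns
    return [list(c) for i in range(len(cols) + 1)
            for c in combinations(cols, i)]

def expand_grouping_sets(sets):
    return sorted(sets, key=len)          # shortest first

print(expand_grouping_sets(expand_cube(['a', 'b'])))
# [[], ['a'], ['b'], ['a', 'b']]
```

Because the sort is stable and ascending by length, the empty set (if any) always lands first, matching the assumption in extract_rollup_sets that empty sets can be stripped off the front.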
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index ee1fe74..cbc7b0c 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -409,6 +409,11 @@ typedef struct EState
 	HeapTuple  *es_epqTuple;	/* array of EPQ substitute tuples */
 	bool	   *es_epqTupleSet; /* true if EPQ tuple is provided */
 	bool	   *es_epqScanDone; /* true if EPQ tuple has been fetched */
+
+	/*
+	 * This is for linking chained aggregate nodes
+	 */
+	struct AggState	   *agg_chain_head;
 } EState;
 
 
@@ -1729,6 +1734,7 @@ typedef struct AggState
 	AggStatePerAgg curperagg;	/* identifies currently active aggregate */
 	bool        input_done;     /* indicates end of input */
 	bool		agg_done;		/* indicates completion of Agg scan */
+	bool		chain_done;		/* indicates completion of chained fetch */
 	int			projected_set;	/* The last projected grouping set */
 	int			current_set;	/* The current grouping set being evaluated */
 	Bitmapset **grouped_cols;   /* column groupings for rollup */
@@ -1742,6 +1748,10 @@ typedef struct AggState
 	List	   *hash_needed;	/* list of columns needed in hash table */
 	bool		table_filled;	/* hash table filled yet? */
 	TupleHashIterator hashiter; /* for iterating through hash table */
+	int			chain_depth;	/* number of chained child nodes */
+	int			chain_rescan;	/* rescan indicator */
+	struct AggState	*chain_head;
+	Tuplestorestate *chain_tuplestore;
 } AggState;
 
 /* ----------------
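The executor-side fields added above (chain_head, chain_tuplestore, and so on) exist to support evaluating grouping sets over one sorted input stream. The sketch below is a hypothetical illustration only, not the executor code: it shows the single-pass rollup idea for one chain, keeping one running aggregate per grouping set and emitting a result row whenever that set's grouping columns change. The real executor instead stacks chained Agg nodes (AGG_CHAINED) sharing one sorted input; this sketch covers only the single-rollup case handled by AGG_SORTED.

```python
# Hypothetical sketch (not part of the patch): evaluate a rollup chain of
# grouping sets in one pass over input sorted on the grouping columns,
# keeping one running sum per grouping set.

def rollup_pass(rows, sets, agg_col):
    """sets: column-name tuples, most specific first, each a
    prefix-superset of the next, matching the input sort order."""
    out = []
    sums = [0] * len(sets)
    keys = [None] * len(sets)
    for row in rows:
        for i, cols in enumerate(sets):
            key = tuple(row[c] for c in cols)
            if keys[i] is not None and key != keys[i]:
                # this set's group boundary: emit and reset
                out.append((dict(zip(cols, keys[i])), sums[i]))
                sums[i] = 0
            keys[i] = key
            sums[i] += row[agg_col]
    for i, cols in enumerate(sets):       # flush the final groups
        if keys[i] is not None:
            out.append((dict(zip(cols, keys[i])), sums[i]))
    return out

rows = [{'a': 1, 'b': 1, 'v': 10}, {'a': 1, 'b': 2, 'v': 20},
        {'a': 2, 'b': 1, 'v': 30}]
for group, total in rollup_pass(rows, [('a', 'b'), ('a',), ()], 'v'):
    print(group, total)
```

This is why any set of grouping sets reducible to simple columns plus one ROLLUP needs only a single sort of the input, as described at the top of the thread.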
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index 077ae9f..d558ff8 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -620,6 +620,7 @@ typedef enum AggStrategy
 {
 	AGG_PLAIN,					/* simple agg across all input rows */
 	AGG_SORTED,					/* grouped agg, input must be sorted */
+	AGG_CHAINED,				/* chained agg, input must be sorted */
 	AGG_HASHED					/* grouped agg, use internal hashtable */
 } AggStrategy;
 
@@ -627,6 +628,7 @@ typedef struct Agg
 {
 	Plan		plan;
 	AggStrategy aggstrategy;
+	bool		chain_head;		/* is this node the head of an agg chain? */
 	int			numCols;		/* number of grouping columns */
 	AttrNumber *grpColIdx;		/* their indexes in the target list */
 	Oid		   *grpOperators;	/* equality operators to compare with */
diff --git a/src/include/optimizer/planmain.h b/src/include/optimizer/planmain.h
index 64f3aa3..20b7493 100644
--- a/src/include/optimizer/planmain.h
+++ b/src/include/optimizer/planmain.h
@@ -59,6 +59,7 @@ extern Agg *make_agg(PlannerInfo *root, List *tlist, List *qual,
 		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
 		 int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators,
 		 List *groupingSets,
+		 bool chain_head,
 		 long numGroups,
 		 Plan *lefttree);
 extern WindowAgg *make_windowagg(PlannerInfo *root, List *tlist,
diff --git a/src/test/regress/expected/groupingsets.out b/src/test/regress/expected/groupingsets.out
index 2d121c7..e5d6c78 100644
--- a/src/test/regress/expected/groupingsets.out
+++ b/src/test/regress/expected/groupingsets.out
@@ -281,6 +281,29 @@ select(select (select grouping(c) from (values (1)) v2(c) GROUP BY c) from (valu
 (3 rows)
 
 -- Combinations of operations
+select a, b, c, d from gstest2 group by rollup(a,b),grouping sets(c,d);
+ a | b | c | d 
+---+---+---+---
+ 1 | 1 | 1 |  
+ 1 |   | 1 |  
+   |   | 1 |  
+ 1 | 1 | 2 |  
+ 1 | 2 | 2 |  
+ 1 |   | 2 |  
+ 2 | 2 | 2 |  
+ 2 |   | 2 |  
+   |   | 2 |  
+ 1 | 1 |   | 1
+ 1 |   |   | 1
+   |   |   | 1
+ 1 | 1 |   | 2
+ 1 | 2 |   | 2
+ 1 |   |   | 2
+ 2 | 2 |   | 2
+ 2 |   |   | 2
+   |   |   | 2
+(18 rows)
+
 select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
  a | b 
 ---+---
@@ -288,6 +311,101 @@ select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
  2 | 3
 (2 rows)
 
+-- Tests for chained aggregates
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by grouping sets ((a,b),(a+1,b+1),(a+2,b+2));
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 | 1 |        0 |  21 |     2 |  11
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+ 4 | 1 |        0 |  37 |     2 |  19
+   |   |        3 |  21 |     2 |  11
+   |   |        3 |  25 |     2 |  13
+   |   |        3 |  14 |     1 |  14
+   |   |        3 |  15 |     1 |  15
+   |   |        3 |  16 |     1 |  16
+   |   |        3 |  17 |     1 |  17
+   |   |        3 |  37 |     2 |  19
+   |   |        3 |  21 |     2 |  11
+   |   |        3 |  25 |     2 |  13
+   |   |        3 |  14 |     1 |  14
+   |   |        3 |  15 |     1 |  15
+   |   |        3 |  16 |     1 |  16
+   |   |        3 |  17 |     1 |  17
+   |   |        3 |  37 |     2 |  19
+(21 rows)
+
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP((e+1),(f+1));
+ grouping 
+----------
+        0
+        0
+        0
+(3 rows)
+
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY CUBE((e+1),(f+1)) ORDER BY (e+1),(f+1);
+ grouping 
+----------
+        0
+        0
+        0
+        0
+(4 rows)
+
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by cube (a,b) order by rsum, a, b;
+ a | b | sum | rsum 
+---+---+-----+------
+ 1 | 1 |   8 |    8
+ 1 | 2 |   2 |   10
+ 1 |   |  10 |   20
+ 2 | 2 |   2 |   22
+ 2 |   |   2 |   24
+   | 1 |   8 |   32
+   | 2 |   4 |   36
+   |   |  12 |   48
+(8 rows)
+
+select a, b, sum(c) from (values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),(2,3,15),(3,3,16),(3,4,17),(4,1,18),(4,1,19)) v(a,b,c) group by rollup (a,b);
+ a | b | sum 
+---+---+-----
+ 1 | 1 |  21
+ 1 | 2 |  25
+ 1 | 3 |  14
+ 1 |   |  60
+ 2 | 3 |  15
+ 2 |   |  15
+ 3 | 3 |  16
+ 3 | 4 |  17
+ 3 |   |  33
+ 4 | 1 |  37
+ 4 |   |  37
+   |   | 145
+(12 rows)
+
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by cube (a,b) order by a,b;
+ a | b | sum 
+---+---+-----
+ 1 | 1 |   1
+ 1 | 2 |   1
+ 1 | 3 |   1
+ 1 |   |   3
+ 2 | 1 |   2
+ 2 | 2 |   2
+ 2 | 3 |   2
+ 2 |   |   6
+   | 1 |   3
+   | 2 |   3
+   | 3 |   3
+   |   |   9
+(12 rows)
+
 -- Agg level check. This query should error out.
 select (select grouping(a,b) from gstest2) from gstest2 group by a,b;
 ERROR:  Arguments to GROUPING must be grouping expressions of the associated query level
@@ -358,4 +476,87 @@ group by rollup(ten);
      |    
 (11 rows)
 
+-- More rescan tests
+select * from (values (1),(2)) v(a) left join lateral (select v.a, four, ten, count(*) from onek group by cube(four,ten)) s on true order by v.a,four,ten;
+ a | a | four | ten | count 
+---+---+------+-----+-------
+ 1 | 1 |    0 |   0 |    50
+ 1 | 1 |    0 |   2 |    50
+ 1 | 1 |    0 |   4 |    50
+ 1 | 1 |    0 |   6 |    50
+ 1 | 1 |    0 |   8 |    50
+ 1 | 1 |    0 |     |   250
+ 1 | 1 |    1 |   1 |    50
+ 1 | 1 |    1 |   3 |    50
+ 1 | 1 |    1 |   5 |    50
+ 1 | 1 |    1 |   7 |    50
+ 1 | 1 |    1 |   9 |    50
+ 1 | 1 |    1 |     |   250
+ 1 | 1 |    2 |   0 |    50
+ 1 | 1 |    2 |   2 |    50
+ 1 | 1 |    2 |   4 |    50
+ 1 | 1 |    2 |   6 |    50
+ 1 | 1 |    2 |   8 |    50
+ 1 | 1 |    2 |     |   250
+ 1 | 1 |    3 |   1 |    50
+ 1 | 1 |    3 |   3 |    50
+ 1 | 1 |    3 |   5 |    50
+ 1 | 1 |    3 |   7 |    50
+ 1 | 1 |    3 |   9 |    50
+ 1 | 1 |    3 |     |   250
+ 1 | 1 |      |   0 |   100
+ 1 | 1 |      |   1 |   100
+ 1 | 1 |      |   2 |   100
+ 1 | 1 |      |   3 |   100
+ 1 | 1 |      |   4 |   100
+ 1 | 1 |      |   5 |   100
+ 1 | 1 |      |   6 |   100
+ 1 | 1 |      |   7 |   100
+ 1 | 1 |      |   8 |   100
+ 1 | 1 |      |   9 |   100
+ 1 | 1 |      |     |  1000
+ 2 | 2 |    0 |   0 |    50
+ 2 | 2 |    0 |   2 |    50
+ 2 | 2 |    0 |   4 |    50
+ 2 | 2 |    0 |   6 |    50
+ 2 | 2 |    0 |   8 |    50
+ 2 | 2 |    0 |     |   250
+ 2 | 2 |    1 |   1 |    50
+ 2 | 2 |    1 |   3 |    50
+ 2 | 2 |    1 |   5 |    50
+ 2 | 2 |    1 |   7 |    50
+ 2 | 2 |    1 |   9 |    50
+ 2 | 2 |    1 |     |   250
+ 2 | 2 |    2 |   0 |    50
+ 2 | 2 |    2 |   2 |    50
+ 2 | 2 |    2 |   4 |    50
+ 2 | 2 |    2 |   6 |    50
+ 2 | 2 |    2 |   8 |    50
+ 2 | 2 |    2 |     |   250
+ 2 | 2 |    3 |   1 |    50
+ 2 | 2 |    3 |   3 |    50
+ 2 | 2 |    3 |   5 |    50
+ 2 | 2 |    3 |   7 |    50
+ 2 | 2 |    3 |   9 |    50
+ 2 | 2 |    3 |     |   250
+ 2 | 2 |      |   0 |   100
+ 2 | 2 |      |   1 |   100
+ 2 | 2 |      |   2 |   100
+ 2 | 2 |      |   3 |   100
+ 2 | 2 |      |   4 |   100
+ 2 | 2 |      |   5 |   100
+ 2 | 2 |      |   6 |   100
+ 2 | 2 |      |   7 |   100
+ 2 | 2 |      |   8 |   100
+ 2 | 2 |      |   9 |   100
+ 2 | 2 |      |     |  1000
+(70 rows)
+
+select array(select row(v.a,s1.*) from (select two,four, count(*) from onek group by cube(two,four) order by two,four) s1) from (values (1),(2)) v(a);
+                                                                        array                                                                         
+------------------------------------------------------------------------------------------------------------------------------------------------------
+ {"(1,0,0,250)","(1,0,2,250)","(1,0,,500)","(1,1,1,250)","(1,1,3,250)","(1,1,,500)","(1,,0,250)","(1,,1,250)","(1,,2,250)","(1,,3,250)","(1,,,1000)"}
+ {"(2,0,0,250)","(2,0,2,250)","(2,0,,500)","(2,1,1,250)","(2,1,3,250)","(2,1,,500)","(2,,0,250)","(2,,1,250)","(2,,2,250)","(2,,3,250)","(2,,,1000)"}
+(2 rows)
+
 -- end
diff --git a/src/test/regress/sql/groupingsets.sql b/src/test/regress/sql/groupingsets.sql
index bc571ff..5f32c4a 100644
--- a/src/test/regress/sql/groupingsets.sql
+++ b/src/test/regress/sql/groupingsets.sql
@@ -108,8 +108,22 @@ select(select (select grouping(e,f) from (values (1)) v2(c)) from (values (1,2))
 select(select (select grouping(c) from (values (1)) v2(c) GROUP BY c) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
 
 -- Combinations of operations
+select a, b, c, d from gstest2 group by rollup(a,b),grouping sets(c,d);
 select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
 
+-- Tests for chained aggregates
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by grouping sets ((a,b),(a+1,b+1),(a+2,b+2));
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP((e+1),(f+1));
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY CUBE((e+1),(f+1)) ORDER BY (e+1),(f+1);
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by cube (a,b) order by rsum, a, b;
+select a, b, sum(c) from (values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),(2,3,15),(3,3,16),(3,4,17),(4,1,18),(4,1,19)) v(a,b,c) group by rollup (a,b);
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by cube (a,b) order by a,b;
+
+
 -- Agg level check. This query should error out.
 select (select grouping(a,b) from gstest2) from gstest2 group by a,b;
 
@@ -125,4 +139,8 @@ having exists (select 1 from onek b where sum(distinct a.four) = b.four);
 select ten, sum(distinct four) filter (where four::text ~ '123') from onek a
 group by rollup(ten);
 
+-- More rescan tests
+select * from (values (1),(2)) v(a) left join lateral (select v.a, four, ten, count(*) from onek group by cube(four,ten)) s on true order by v.a,four,ten;
+select array(select row(v.a,s1.*) from (select two,four, count(*) from onek group by cube(two,four) order by two,four) s1) from (values (1),(2)) v(a);
+
 -- end
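The GROUPING() bitmask behavior exercised by the new regression tests can be modeled outside the executor. The sketch below is illustrative only (Python rather than the patch's C, and `grouping_value` is an invented helper name); it follows the rule stated in the docs that the rightmost GROUPING() argument maps to the least-significant bit:

```python
def grouping_value(args, grouping_set):
    """Model of GROUPING(args...): for each argument, emit bit 0 if the
    expression is part of the current grouping set, bit 1 if it is not.
    The leftmost argument contributes the most-significant bit."""
    value = 0
    for expr in args:
        value = (value << 1) | (0 if expr in grouping_set else 1)
    return value

# ROLLUP(make, model) generates the sets (make, model), (make,), ():
assert grouping_value(["make", "model"], {"make", "model"}) == 0
assert grouping_value(["make", "model"], {"make"}) == 1
assert grouping_value(["make", "model"], set()) == 3
```

This matches the 0 / 1 / 3 values shown for the ROLLUP example in the attached documentation patch.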
gsp-doc.patch (text/x-patch)
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 7195df8..655587e 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -12006,7 +12006,9 @@ NULL baz</literallayout>(3 rows)</entry>
    <xref linkend="functions-aggregate-statistics-table">.
    The built-in ordered-set aggregate functions
    are listed in <xref linkend="functions-orderedset-table"> and
-   <xref linkend="functions-hypothetical-table">.
+   <xref linkend="functions-hypothetical-table">.  Grouping operations,
+   which are closely related to aggregate functions, are listed in
+   <xref linkend="functions-grouping-table">.
    The special syntax considerations for aggregate
    functions are explained in <xref linkend="syntax-aggregates">.
    Consult <xref linkend="tutorial-agg"> for additional introductory
@@ -13052,6 +13054,72 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab;
    to the rule specified in the <literal>ORDER BY</> clause.
   </para>
 
+  <table id="functions-grouping-table">
+   <title>Grouping Operations</title>
+
+   <tgroup cols="3">
+    <thead>
+     <row>
+      <entry>Function</entry>
+      <entry>Return Type</entry>
+      <entry>Description</entry>
+     </row>
+    </thead>
+
+    <tbody>
+
+     <row>
+      <entry>
+       <indexterm>
+        <primary>GROUPING</primary>
+       </indexterm>
+       <function>GROUPING(<replaceable class="parameter">args...</replaceable>)</function>
+      </entry>
+      <entry>
+       <type>integer</type>
+      </entry>
+      <entry>
+       Integer bitmask indicating which arguments are not included in the
+       current grouping set
+      </entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+
+   <para>
+    Grouping operations are used in conjunction with grouping sets (see
+    <xref linkend="queries-grouping-sets">) to distinguish result rows.  The
+    arguments to the <literal>GROUPING</> operation are not actually evaluated,
+    but they must exactly match expressions given in the <literal>GROUP BY</>
+    clause of the current query level.  Bits are assigned with the rightmost
+    argument being the least-significant bit; each bit is 0 if the corresponding
+    expression is included in the grouping criteria of the grouping set generating
+    the result row, and 1 if it is not.  For example:
+<screen>
+<prompt>=&gt;</> <userinput>SELECT * FROM items_sold;</>
+ make  | model | sales
+-------+-------+-------
+ Foo   | GT    |  10
+ Foo   | Tour  |  20
+ Bar   | City  |  15
+ Bar   | Sport |  5
+(4 rows)
+
+<prompt>=&gt;</> <userinput>SELECT make, model, GROUPING(make,model), sum(sales) FROM items_sold GROUP BY ROLLUP(make,model);</>
+ make  | model | grouping | sum
+-------+-------+----------+-----
+ Foo   | GT    |        0 | 10
+ Foo   | Tour  |        0 | 20
+ Bar   | City  |        0 | 15
+ Bar   | Sport |        0 | 5
+ Foo   |       |        1 | 30
+ Bar   |       |        1 | 20
+       |       |        3 | 50
+(7 rows)
+</screen>
+   </para>
+
  </sect1>
 
  <sect1 id="functions-window">
diff --git a/doc/src/sgml/queries.sgml b/doc/src/sgml/queries.sgml
index 9bf3136..1ff920f 100644
--- a/doc/src/sgml/queries.sgml
+++ b/doc/src/sgml/queries.sgml
@@ -1141,6 +1141,184 @@ SELECT product_id, p.name, (sum(s.units) * (p.price - p.cost)) AS profit
    </para>
   </sect2>
 
+  <sect2 id="queries-grouping-sets">
+   <title><literal>GROUPING SETS</>, <literal>CUBE</>, and <literal>ROLLUP</></title>
+
+   <indexterm zone="queries-grouping-sets">
+    <primary>GROUPING SETS</primary>
+   </indexterm>
+   <indexterm zone="queries-grouping-sets">
+    <primary>CUBE</primary>
+   </indexterm>
+   <indexterm zone="queries-grouping-sets">
+    <primary>ROLLUP</primary>
+   </indexterm>
+   <indexterm zone="queries-grouping-sets">
+    <primary>grouping sets</primary>
+   </indexterm>
+
+   <para>
+    More complex grouping operations than those described above are possible
+    using the concept of <firstterm>grouping sets</>.  The data selected by
+    the <literal>FROM</> and <literal>WHERE</> clauses is grouped separately
+    by each specified grouping set, aggregates are computed for each group
+    just as for simple <literal>GROUP BY</> clauses, and the results are returned.
+    For example:
+<screen>
+<prompt>=&gt;</> <userinput>SELECT * FROM items_sold;</>
+ brand | size | sales
+-------+------+-------
+ Foo   | L    |  10
+ Foo   | M    |  20
+ Bar   | M    |  15
+ Bar   | L    |  5
+(4 rows)
+
+<prompt>=&gt;</> <userinput>SELECT brand, size, sum(sales) FROM items_sold GROUP BY GROUPING SETS ((brand), (size), ());</>
+ brand | size | sum
+-------+------+-----
+ Foo   |      |  30
+ Bar   |      |  20
+       | L    |  15
+       | M    |  35
+       |      |  50
+(5 rows)
+</screen>
+   </para>
+
+   <para>
+    Each sublist of <literal>GROUPING SETS</> may specify zero or more columns
+    or expressions and is interpreted the same way as though it were directly
+    in the <literal>GROUP BY</> clause.  An empty grouping set means that all
+    rows are aggregated down to a single group (which is output even if no
+    input rows were present), as described above for the case of aggregate
+    functions with no <literal>GROUP BY</> clause.
+   </para>
+
+   <para>
+    References to the grouping columns or expressions are replaced
+    by <literal>NULL</> values in result rows for grouping sets in which those
+    columns do not appear.  To distinguish which grouping a particular output
+    row resulted from, see <xref linkend="functions-grouping-table">.
+   </para>
+
+   <para>
+    A shorthand notation is provided for specifying two common types of grouping set.
+    A clause of the form
+<programlisting>
+ROLLUP ( <replaceable>e1</>, <replaceable>e2</>, <replaceable>e3</>, ... )
+</programlisting>
+    represents the given list of expressions and all prefixes of the list including
+    the empty list; thus it is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( <replaceable>e1</>, <replaceable>e2</>, <replaceable>e3</>, ... ),
+    ...
+    ( <replaceable>e1</>, <replaceable>e2</> ),
+    ( <replaceable>e1</> ),
+    ( )
+)
+</programlisting>
+    This is commonly used for analysis over hierarchical data; e.g. total
+    salary by department, division, and company-wide total.
+   </para>
+
+   <para>
+    A clause of the form
+<programlisting>
+CUBE ( <replaceable>e1</>, <replaceable>e2</>, ... )
+</programlisting>
+    represents the given list and all of its possible subsets (i.e. the power
+    set).  Thus
+<programlisting>
+CUBE ( a, b, c )
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( a, b, c ),
+    ( a, b    ),
+    ( a,    c ),
+    ( a       ),
+    (    b, c ),
+    (    b    ),
+    (       c ),
+    (         )
+)
+</programlisting>
+   </para>
+
+   <para>
+    The individual elements of a <literal>CUBE</> or <literal>ROLLUP</>
+    clause may be either individual expressions, or sub-lists of elements in
+    parentheses.  In the latter case, the sub-lists are treated as single
+    units for the purposes of generating the individual grouping sets.
+    For example:
+<programlisting>
+CUBE ( (a,b), (c,d) )
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( a, b, c, d ),
+    ( a, b       ),
+    (       c, d ),
+    (            )
+)
+</programlisting>
+    and
+<programlisting>
+ROLLUP ( a, (b,c), d )
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( a, b, c, d ),
+    ( a, b, c    ),
+    ( a          ),
+    (            )
+)
+</programlisting>
+   </para>
+
+   <para>
+    The <literal>CUBE</> and <literal>ROLLUP</> constructs can be used either
+    directly in the <literal>GROUP BY</> clause, or nested inside a
+    <literal>GROUPING SETS</> clause.  If one <literal>GROUPING SETS</> clause
+    is nested inside another, the effect is the same as if all the elements of
+    the inner clause had been written directly in the outer clause.
+   </para>
+
+   <para>
+    If multiple grouping items are specified in a single <literal>GROUP BY</>
+    clause, then the final list of grouping sets is the cross product of the
+    individual items.  For example:
+<programlisting>
+GROUP BY a, CUBE(b,c), GROUPING SETS ((d), (e))
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUP BY GROUPING SETS (
+  (a,b,c,d), (a,b,c,e),
+  (a,b,d),   (a,b,e),
+  (a,c,d),   (a,c,e),
+  (a,d),     (a,e)
+)
+</programlisting>
+   </para>
+
+  <note>
+   <para>
+    The construct <literal>(a,b)</> is normally recognized in expressions as
+    a <link linkend="sql-syntax-row-constructors">row constructor</link>.
+    Within the <literal>GROUP BY</> clause, this does not apply at the top
+    levels of expressions, and <literal>(a,b)</> is parsed as a list of
+    expressions as described above.  If for some reason you <emphasis>need</>
+    a row constructor in a grouping expression, use <literal>ROW(a,b)</>.
+   </para>
+  </note>
+  </sect2>
+
   <sect2 id="queries-window">
    <title>Window Function Processing</title>
 
diff --git a/doc/src/sgml/ref/select.sgml b/doc/src/sgml/ref/select.sgml
index 940d1aa..7d10dbe 100644
--- a/doc/src/sgml/ref/select.sgml
+++ b/doc/src/sgml/ref/select.sgml
@@ -37,7 +37,7 @@ SELECT [ ALL | DISTINCT [ ON ( <replaceable class="parameter">expression</replac
     [ * | <replaceable class="parameter">expression</replaceable> [ [ AS ] <replaceable class="parameter">output_name</replaceable> ] [, ...] ]
     [ FROM <replaceable class="parameter">from_item</replaceable> [, ...] ]
     [ WHERE <replaceable class="parameter">condition</replaceable> ]
-    [ GROUP BY <replaceable class="parameter">expression</replaceable> [, ...] ]
+    [ GROUP BY <replaceable class="parameter">grouping_element</replaceable> [, ...] ]
     [ HAVING <replaceable class="parameter">condition</replaceable> [, ...] ]
     [ WINDOW <replaceable class="parameter">window_name</replaceable> AS ( <replaceable class="parameter">window_definition</replaceable> ) [, ...] ]
     [ { UNION | INTERSECT | EXCEPT } [ ALL | DISTINCT ] <replaceable class="parameter">select</replaceable> ]
@@ -60,6 +60,15 @@ SELECT [ ALL | DISTINCT [ ON ( <replaceable class="parameter">expression</replac
                 [ WITH ORDINALITY ] [ [ AS ] <replaceable class="parameter">alias</replaceable> [ ( <replaceable class="parameter">column_alias</replaceable> [, ...] ) ] ]
     <replaceable class="parameter">from_item</replaceable> [ NATURAL ] <replaceable class="parameter">join_type</replaceable> <replaceable class="parameter">from_item</replaceable> [ ON <replaceable class="parameter">join_condition</replaceable> | USING ( <replaceable class="parameter">join_column</replaceable> [, ...] ) ]
 
+<phrase>and <replaceable class="parameter">grouping_element</replaceable> can be one of:</phrase>
+
+    ( )
+    <replaceable class="parameter">expression</replaceable>
+    ( <replaceable class="parameter">expression</replaceable> [, ...] )
+    ROLLUP ( { <replaceable class="parameter">expression</replaceable> | ( <replaceable class="parameter">expression</replaceable> [, ...] ) } [, ...] )
+    CUBE ( { <replaceable class="parameter">expression</replaceable> | ( <replaceable class="parameter">expression</replaceable> [, ...] ) } [, ...] )
+    GROUPING SETS ( <replaceable class="parameter">grouping_element</replaceable> [, ...] )
+
 <phrase>and <replaceable class="parameter">with_query</replaceable> is:</phrase>
 
     <replaceable class="parameter">with_query_name</replaceable> [ ( <replaceable class="parameter">column_name</replaceable> [, ...] ) ] AS ( <replaceable class="parameter">select</replaceable> | <replaceable class="parameter">values</replaceable> | <replaceable class="parameter">insert</replaceable> | <replaceable class="parameter">update</replaceable> | <replaceable class="parameter">delete</replaceable> )
@@ -619,23 +628,35 @@ WHERE <replaceable class="parameter">condition</replaceable>
    <para>
     The optional <literal>GROUP BY</literal> clause has the general form
 <synopsis>
-GROUP BY <replaceable class="parameter">expression</replaceable> [, ...]
+GROUP BY <replaceable class="parameter">grouping_element</replaceable> [, ...]
 </synopsis>
    </para>
 
    <para>
     <literal>GROUP BY</literal> will condense into a single row all
     selected rows that share the same values for the grouped
-    expressions.  <replaceable
-    class="parameter">expression</replaceable> can be an input column
-    name, or the name or ordinal number of an output column
-    (<command>SELECT</command> list item), or an arbitrary
+    expressions.  An <replaceable
+    class="parameter">expression</replaceable> used inside a
+    <replaceable class="parameter">grouping_element</replaceable>
+    can be an input column name, or the name or ordinal number of an
+    output column (<command>SELECT</command> list item), or an arbitrary
     expression formed from input-column values.  In case of ambiguity,
     a <literal>GROUP BY</literal> name will be interpreted as an
     input-column name rather than an output column name.
    </para>
 
    <para>
+    If any of <literal>GROUPING SETS</>, <literal>ROLLUP</> or
+    <literal>CUBE</> are present as grouping elements, then the
+    <literal>GROUP BY</> clause as a whole defines some number of
+    independent <replaceable>grouping sets</>.  The effect of this is
+    equivalent to constructing a <literal>UNION ALL</> between
+    subqueries with the individual grouping sets as their
+    <literal>GROUP BY</> clauses.  For further details on the handling
+    of grouping sets see <xref linkend="queries-grouping-sets">.
+   </para>
+
+   <para>
     Aggregate functions, if any are used, are computed across all rows
     making up each group, producing a separate value for each group
     (whereas without <literal>GROUP BY</literal>, an aggregate
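The expansion rules that gsp-doc.patch documents (ROLLUP as all prefixes of the list, CUBE as the power set, and multiple grouping items combining as a cross product) can be sketched as a small model. This is illustrative Python, not the planner's C code, and the helper names are invented for exposition:

```python
from itertools import chain, combinations, product

def rollup(exprs):
    """ROLLUP(e1, ..., en): every prefix of the list, longest first,
    down to the empty grouping set."""
    return [tuple(exprs[:i]) for i in range(len(exprs), -1, -1)]

def cube(exprs):
    """CUBE(e1, ..., en): the power set of the expression list."""
    return [tuple(s) for s in chain.from_iterable(
        combinations(exprs, n) for n in range(len(exprs), -1, -1))]

def cross(*items):
    """Multiple grouping items in one GROUP BY: the final list of
    grouping sets is the cross product of the individual items."""
    return [tuple(chain.from_iterable(sets)) for sets in product(*items)]

assert rollup(["a", "b"]) == [("a", "b"), ("a",), ()]
assert len(cube(["a", "b", "c"])) == 8  # 2^3 subsets
# GROUP BY a, CUBE(b,c) is equivalent to four grouping sets, each with a:
assert cross([("a",)], cube(["b", "c"])) == [
    ("a", "b", "c"), ("a", "b"), ("a", "c"), ("a",)]
```

The last assertion mirrors the cross-product example in queries.sgml, with the GROUPING SETS ((d),(e)) factor omitted for brevity.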
gsp-contrib.patch (text/x-patch)
diff --git a/contrib/cube/cube--1.0.sql b/contrib/cube/cube--1.0.sql
index 0307811..1b563cc 100644
--- a/contrib/cube/cube--1.0.sql
+++ b/contrib/cube/cube--1.0.sql
@@ -1,36 +1,36 @@
 /* contrib/cube/cube--1.0.sql */
 
 -- complain if script is sourced in psql, rather than via CREATE EXTENSION
-\echo Use "CREATE EXTENSION cube" to load this file. \quit
+\echo Use 'CREATE EXTENSION "cube"' to load this file. \quit
 
 -- Create the user-defined type for N-dimensional boxes
 
 CREATE FUNCTION cube_in(cstring)
-RETURNS cube
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(float8[], float8[]) RETURNS cube
+CREATE FUNCTION "cube"(float8[], float8[]) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_a_f8_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(float8[]) RETURNS cube
+CREATE FUNCTION "cube"(float8[]) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_a_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_out(cube)
+CREATE FUNCTION cube_out("cube")
 RETURNS cstring
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE TYPE cube (
+CREATE TYPE "cube" (
 	INTERNALLENGTH = variable,
 	INPUT = cube_in,
 	OUTPUT = cube_out,
 	ALIGNMENT = double
 );
 
-COMMENT ON TYPE cube IS 'multi-dimensional cube ''(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)''';
+COMMENT ON TYPE "cube" IS 'multi-dimensional cube ''(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)''';
 
 --
 -- External C-functions for R-tree methods
@@ -38,89 +38,89 @@ COMMENT ON TYPE cube IS 'multi-dimensional cube ''(FLOAT-1, FLOAT-2, ..., FLOAT-
 
 -- Comparison methods
 
-CREATE FUNCTION cube_eq(cube, cube)
+CREATE FUNCTION cube_eq("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_eq(cube, cube) IS 'same as';
+COMMENT ON FUNCTION cube_eq("cube", "cube") IS 'same as';
 
-CREATE FUNCTION cube_ne(cube, cube)
+CREATE FUNCTION cube_ne("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_ne(cube, cube) IS 'different';
+COMMENT ON FUNCTION cube_ne("cube", "cube") IS 'different';
 
-CREATE FUNCTION cube_lt(cube, cube)
+CREATE FUNCTION cube_lt("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_lt(cube, cube) IS 'lower than';
+COMMENT ON FUNCTION cube_lt("cube", "cube") IS 'lower than';
 
-CREATE FUNCTION cube_gt(cube, cube)
+CREATE FUNCTION cube_gt("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_gt(cube, cube) IS 'greater than';
+COMMENT ON FUNCTION cube_gt("cube", "cube") IS 'greater than';
 
-CREATE FUNCTION cube_le(cube, cube)
+CREATE FUNCTION cube_le("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_le(cube, cube) IS 'lower than or equal to';
+COMMENT ON FUNCTION cube_le("cube", "cube") IS 'lower than or equal to';
 
-CREATE FUNCTION cube_ge(cube, cube)
+CREATE FUNCTION cube_ge("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_ge(cube, cube) IS 'greater than or equal to';
+COMMENT ON FUNCTION cube_ge("cube", "cube") IS 'greater than or equal to';
 
-CREATE FUNCTION cube_cmp(cube, cube)
+CREATE FUNCTION cube_cmp("cube", "cube")
 RETURNS int4
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_cmp(cube, cube) IS 'btree comparison function';
+COMMENT ON FUNCTION cube_cmp("cube", "cube") IS 'btree comparison function';
 
-CREATE FUNCTION cube_contains(cube, cube)
+CREATE FUNCTION cube_contains("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_contains(cube, cube) IS 'contains';
+COMMENT ON FUNCTION cube_contains("cube", "cube") IS 'contains';
 
-CREATE FUNCTION cube_contained(cube, cube)
+CREATE FUNCTION cube_contained("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_contained(cube, cube) IS 'contained in';
+COMMENT ON FUNCTION cube_contained("cube", "cube") IS 'contained in';
 
-CREATE FUNCTION cube_overlap(cube, cube)
+CREATE FUNCTION cube_overlap("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_overlap(cube, cube) IS 'overlaps';
+COMMENT ON FUNCTION cube_overlap("cube", "cube") IS 'overlaps';
 
 -- support routines for indexing
 
-CREATE FUNCTION cube_union(cube, cube)
-RETURNS cube
+CREATE FUNCTION cube_union("cube", "cube")
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_inter(cube, cube)
-RETURNS cube
+CREATE FUNCTION cube_inter("cube", "cube")
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_size(cube)
+CREATE FUNCTION cube_size("cube")
 RETURNS float8
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
@@ -128,62 +128,62 @@ LANGUAGE C IMMUTABLE STRICT;
 
 -- Misc N-dimensional functions
 
-CREATE FUNCTION cube_subset(cube, int4[])
-RETURNS cube
+CREATE FUNCTION cube_subset("cube", int4[])
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
 -- proximity routines
 
-CREATE FUNCTION cube_distance(cube, cube)
+CREATE FUNCTION cube_distance("cube", "cube")
 RETURNS float8
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
 -- Extracting elements functions
 
-CREATE FUNCTION cube_dim(cube)
+CREATE FUNCTION cube_dim("cube")
 RETURNS int4
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_ll_coord(cube, int4)
+CREATE FUNCTION cube_ll_coord("cube", int4)
 RETURNS float8
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_ur_coord(cube, int4)
+CREATE FUNCTION cube_ur_coord("cube", int4)
 RETURNS float8
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(float8) RETURNS cube
+CREATE FUNCTION "cube"(float8) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(float8, float8) RETURNS cube
+CREATE FUNCTION "cube"(float8, float8) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_f8_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(cube, float8) RETURNS cube
+CREATE FUNCTION "cube"("cube", float8) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_c_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(cube, float8, float8) RETURNS cube
+CREATE FUNCTION "cube"("cube", float8, float8) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_c_f8_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
--- Test if cube is also a point
+-- Test if "cube" is also a point
 
-CREATE FUNCTION cube_is_point(cube)
+CREATE FUNCTION cube_is_point("cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
--- Increasing the size of a cube by a radius in at least n dimensions
+-- Increasing the size of a "cube" by a radius in at least n dimensions
 
-CREATE FUNCTION cube_enlarge(cube, float8, int4)
-RETURNS cube
+CREATE FUNCTION cube_enlarge("cube", float8, int4)
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
@@ -192,76 +192,76 @@ LANGUAGE C IMMUTABLE STRICT;
 --
 
 CREATE OPERATOR < (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_lt,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_lt,
 	COMMUTATOR = '>', NEGATOR = '>=',
 	RESTRICT = scalarltsel, JOIN = scalarltjoinsel
 );
 
 CREATE OPERATOR > (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_gt,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_gt,
 	COMMUTATOR = '<', NEGATOR = '<=',
 	RESTRICT = scalargtsel, JOIN = scalargtjoinsel
 );
 
 CREATE OPERATOR <= (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_le,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_le,
 	COMMUTATOR = '>=', NEGATOR = '>',
 	RESTRICT = scalarltsel, JOIN = scalarltjoinsel
 );
 
 CREATE OPERATOR >= (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_ge,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_ge,
 	COMMUTATOR = '<=', NEGATOR = '<',
 	RESTRICT = scalargtsel, JOIN = scalargtjoinsel
 );
 
 CREATE OPERATOR && (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_overlap,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_overlap,
 	COMMUTATOR = '&&',
 	RESTRICT = areasel, JOIN = areajoinsel
 );
 
 CREATE OPERATOR = (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_eq,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_eq,
 	COMMUTATOR = '=', NEGATOR = '<>',
 	RESTRICT = eqsel, JOIN = eqjoinsel,
 	MERGES
 );
 
 CREATE OPERATOR <> (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_ne,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_ne,
 	COMMUTATOR = '<>', NEGATOR = '=',
 	RESTRICT = neqsel, JOIN = neqjoinsel
 );
 
 CREATE OPERATOR @> (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_contains,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_contains,
 	COMMUTATOR = '<@',
 	RESTRICT = contsel, JOIN = contjoinsel
 );
 
 CREATE OPERATOR <@ (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_contained,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_contained,
 	COMMUTATOR = '@>',
 	RESTRICT = contsel, JOIN = contjoinsel
 );
 
 -- these are obsolete/deprecated:
 CREATE OPERATOR @ (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_contains,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_contains,
 	COMMUTATOR = '~',
 	RESTRICT = contsel, JOIN = contjoinsel
 );
 
 CREATE OPERATOR ~ (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_contained,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_contained,
 	COMMUTATOR = '@',
 	RESTRICT = contsel, JOIN = contjoinsel
 );
 
 
 -- define the GiST support methods
-CREATE FUNCTION g_cube_consistent(internal,cube,int,oid,internal)
+CREATE FUNCTION g_cube_consistent(internal,"cube",int,oid,internal)
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
@@ -287,11 +287,11 @@ AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
 CREATE FUNCTION g_cube_union(internal, internal)
-RETURNS cube
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION g_cube_same(cube, cube, internal)
+CREATE FUNCTION g_cube_same("cube", "cube", internal)
 RETURNS internal
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
@@ -300,26 +300,26 @@ LANGUAGE C IMMUTABLE STRICT;
 -- Create the operator classes for indexing
 
 CREATE OPERATOR CLASS cube_ops
-    DEFAULT FOR TYPE cube USING btree AS
+    DEFAULT FOR TYPE "cube" USING btree AS
         OPERATOR        1       < ,
         OPERATOR        2       <= ,
         OPERATOR        3       = ,
         OPERATOR        4       >= ,
         OPERATOR        5       > ,
-        FUNCTION        1       cube_cmp(cube, cube);
+        FUNCTION        1       cube_cmp("cube", "cube");
 
 CREATE OPERATOR CLASS gist_cube_ops
-    DEFAULT FOR TYPE cube USING gist AS
+    DEFAULT FOR TYPE "cube" USING gist AS
 	OPERATOR	3	&& ,
 	OPERATOR	6	= ,
 	OPERATOR	7	@> ,
 	OPERATOR	8	<@ ,
 	OPERATOR	13	@ ,
 	OPERATOR	14	~ ,
-	FUNCTION	1	g_cube_consistent (internal, cube, int, oid, internal),
+	FUNCTION	1	g_cube_consistent (internal, "cube", int, oid, internal),
 	FUNCTION	2	g_cube_union (internal, internal),
 	FUNCTION	3	g_cube_compress (internal),
 	FUNCTION	4	g_cube_decompress (internal),
 	FUNCTION	5	g_cube_penalty (internal, internal, internal),
 	FUNCTION	6	g_cube_picksplit (internal, internal),
-	FUNCTION	7	g_cube_same (cube, cube, internal);
+	FUNCTION	7	g_cube_same ("cube", "cube", internal);
diff --git a/contrib/cube/cube--unpackaged--1.0.sql b/contrib/cube/cube--unpackaged--1.0.sql
index 1065512..acacb61 100644
--- a/contrib/cube/cube--unpackaged--1.0.sql
+++ b/contrib/cube/cube--unpackaged--1.0.sql
@@ -1,56 +1,56 @@
 /* contrib/cube/cube--unpackaged--1.0.sql */
 
 -- complain if script is sourced in psql, rather than via CREATE EXTENSION
-\echo Use "CREATE EXTENSION cube FROM unpackaged" to load this file. \quit
+\echo Use 'CREATE EXTENSION "cube" FROM unpackaged' to load this file. \quit
 
-ALTER EXTENSION cube ADD type cube;
-ALTER EXTENSION cube ADD function cube_in(cstring);
-ALTER EXTENSION cube ADD function cube(double precision[],double precision[]);
-ALTER EXTENSION cube ADD function cube(double precision[]);
-ALTER EXTENSION cube ADD function cube_out(cube);
-ALTER EXTENSION cube ADD function cube_eq(cube,cube);
-ALTER EXTENSION cube ADD function cube_ne(cube,cube);
-ALTER EXTENSION cube ADD function cube_lt(cube,cube);
-ALTER EXTENSION cube ADD function cube_gt(cube,cube);
-ALTER EXTENSION cube ADD function cube_le(cube,cube);
-ALTER EXTENSION cube ADD function cube_ge(cube,cube);
-ALTER EXTENSION cube ADD function cube_cmp(cube,cube);
-ALTER EXTENSION cube ADD function cube_contains(cube,cube);
-ALTER EXTENSION cube ADD function cube_contained(cube,cube);
-ALTER EXTENSION cube ADD function cube_overlap(cube,cube);
-ALTER EXTENSION cube ADD function cube_union(cube,cube);
-ALTER EXTENSION cube ADD function cube_inter(cube,cube);
-ALTER EXTENSION cube ADD function cube_size(cube);
-ALTER EXTENSION cube ADD function cube_subset(cube,integer[]);
-ALTER EXTENSION cube ADD function cube_distance(cube,cube);
-ALTER EXTENSION cube ADD function cube_dim(cube);
-ALTER EXTENSION cube ADD function cube_ll_coord(cube,integer);
-ALTER EXTENSION cube ADD function cube_ur_coord(cube,integer);
-ALTER EXTENSION cube ADD function cube(double precision);
-ALTER EXTENSION cube ADD function cube(double precision,double precision);
-ALTER EXTENSION cube ADD function cube(cube,double precision);
-ALTER EXTENSION cube ADD function cube(cube,double precision,double precision);
-ALTER EXTENSION cube ADD function cube_is_point(cube);
-ALTER EXTENSION cube ADD function cube_enlarge(cube,double precision,integer);
-ALTER EXTENSION cube ADD operator >(cube,cube);
-ALTER EXTENSION cube ADD operator >=(cube,cube);
-ALTER EXTENSION cube ADD operator <(cube,cube);
-ALTER EXTENSION cube ADD operator <=(cube,cube);
-ALTER EXTENSION cube ADD operator &&(cube,cube);
-ALTER EXTENSION cube ADD operator <>(cube,cube);
-ALTER EXTENSION cube ADD operator =(cube,cube);
-ALTER EXTENSION cube ADD operator <@(cube,cube);
-ALTER EXTENSION cube ADD operator @>(cube,cube);
-ALTER EXTENSION cube ADD operator ~(cube,cube);
-ALTER EXTENSION cube ADD operator @(cube,cube);
-ALTER EXTENSION cube ADD function g_cube_consistent(internal,cube,integer,oid,internal);
-ALTER EXTENSION cube ADD function g_cube_compress(internal);
-ALTER EXTENSION cube ADD function g_cube_decompress(internal);
-ALTER EXTENSION cube ADD function g_cube_penalty(internal,internal,internal);
-ALTER EXTENSION cube ADD function g_cube_picksplit(internal,internal);
-ALTER EXTENSION cube ADD function g_cube_union(internal,internal);
-ALTER EXTENSION cube ADD function g_cube_same(cube,cube,internal);
-ALTER EXTENSION cube ADD operator family cube_ops using btree;
-ALTER EXTENSION cube ADD operator class cube_ops using btree;
-ALTER EXTENSION cube ADD operator family gist_cube_ops using gist;
-ALTER EXTENSION cube ADD operator class gist_cube_ops using gist;
+ALTER EXTENSION "cube" ADD type "cube";
+ALTER EXTENSION "cube" ADD function cube_in(cstring);
+ALTER EXTENSION "cube" ADD function "cube"(double precision[],double precision[]);
+ALTER EXTENSION "cube" ADD function "cube"(double precision[]);
+ALTER EXTENSION "cube" ADD function cube_out("cube");
+ALTER EXTENSION "cube" ADD function cube_eq("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_ne("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_lt("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_gt("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_le("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_ge("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_cmp("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_contains("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_contained("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_overlap("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_union("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_inter("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_size("cube");
+ALTER EXTENSION "cube" ADD function cube_subset("cube",integer[]);
+ALTER EXTENSION "cube" ADD function cube_distance("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_dim("cube");
+ALTER EXTENSION "cube" ADD function cube_ll_coord("cube",integer);
+ALTER EXTENSION "cube" ADD function cube_ur_coord("cube",integer);
+ALTER EXTENSION "cube" ADD function "cube"(double precision);
+ALTER EXTENSION "cube" ADD function "cube"(double precision,double precision);
+ALTER EXTENSION "cube" ADD function "cube"("cube",double precision);
+ALTER EXTENSION "cube" ADD function "cube"("cube",double precision,double precision);
+ALTER EXTENSION "cube" ADD function cube_is_point("cube");
+ALTER EXTENSION "cube" ADD function cube_enlarge("cube",double precision,integer);
+ALTER EXTENSION "cube" ADD operator >("cube","cube");
+ALTER EXTENSION "cube" ADD operator >=("cube","cube");
+ALTER EXTENSION "cube" ADD operator <("cube","cube");
+ALTER EXTENSION "cube" ADD operator <=("cube","cube");
+ALTER EXTENSION "cube" ADD operator &&("cube","cube");
+ALTER EXTENSION "cube" ADD operator <>("cube","cube");
+ALTER EXTENSION "cube" ADD operator =("cube","cube");
+ALTER EXTENSION "cube" ADD operator <@("cube","cube");
+ALTER EXTENSION "cube" ADD operator @>("cube","cube");
+ALTER EXTENSION "cube" ADD operator ~("cube","cube");
+ALTER EXTENSION "cube" ADD operator @("cube","cube");
+ALTER EXTENSION "cube" ADD function g_cube_consistent(internal,"cube",integer,oid,internal);
+ALTER EXTENSION "cube" ADD function g_cube_compress(internal);
+ALTER EXTENSION "cube" ADD function g_cube_decompress(internal);
+ALTER EXTENSION "cube" ADD function g_cube_penalty(internal,internal,internal);
+ALTER EXTENSION "cube" ADD function g_cube_picksplit(internal,internal);
+ALTER EXTENSION "cube" ADD function g_cube_union(internal,internal);
+ALTER EXTENSION "cube" ADD function g_cube_same("cube","cube",internal);
+ALTER EXTENSION "cube" ADD operator family cube_ops using btree;
+ALTER EXTENSION "cube" ADD operator class cube_ops using btree;
+ALTER EXTENSION "cube" ADD operator family gist_cube_ops using gist;
+ALTER EXTENSION "cube" ADD operator class gist_cube_ops using gist;
diff --git a/contrib/cube/expected/cube.out b/contrib/cube/expected/cube.out
index ca9555e..9422218 100644
--- a/contrib/cube/expected/cube.out
+++ b/contrib/cube/expected/cube.out
@@ -1,552 +1,552 @@
 --
 --  Test cube datatype
 --
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 --
 -- testing the input and output functions
 --
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1'::cube AS cube;
+SELECT '-1'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1.'::cube AS cube;
+SELECT '1.'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.'::cube AS cube;
+SELECT '-1.'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '.1'::cube AS cube;
+SELECT '.1'::"cube" AS "cube";
  cube  
 -------
  (0.1)
 (1 row)
 
-SELECT '-.1'::cube AS cube;
+SELECT '-.1'::"cube" AS "cube";
   cube  
 --------
  (-0.1)
 (1 row)
 
-SELECT '1.0'::cube AS cube;
+SELECT '1.0'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.0'::cube AS cube;
+SELECT '-1.0'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1e27'::cube AS cube;
+SELECT '1e27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1e27'::cube AS cube;
+SELECT '-1e27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1.0e27'::cube AS cube;
+SELECT '1.0e27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1.0e27'::cube AS cube;
+SELECT '-1.0e27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1e+27'::cube AS cube;
+SELECT '1e+27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1e+27'::cube AS cube;
+SELECT '-1e+27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1.0e+27'::cube AS cube;
+SELECT '1.0e+27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1.0e+27'::cube AS cube;
+SELECT '-1.0e+27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1e-7'::cube AS cube;
+SELECT '1e-7'::"cube" AS "cube";
   cube   
 ---------
  (1e-07)
 (1 row)
 
-SELECT '-1e-7'::cube AS cube;
+SELECT '-1e-7'::"cube" AS "cube";
    cube   
 ----------
  (-1e-07)
 (1 row)
 
-SELECT '1.0e-7'::cube AS cube;
+SELECT '1.0e-7'::"cube" AS "cube";
   cube   
 ---------
  (1e-07)
 (1 row)
 
-SELECT '-1.0e-7'::cube AS cube;
+SELECT '-1.0e-7'::"cube" AS "cube";
    cube   
 ----------
  (-1e-07)
 (1 row)
 
-SELECT '1e-700'::cube AS cube;
+SELECT '1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '-1e-700'::cube AS cube;
+SELECT '-1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '1234567890123456'::cube AS cube;
+SELECT '1234567890123456'::"cube" AS "cube";
           cube          
 ------------------------
  (1.23456789012346e+15)
 (1 row)
 
-SELECT '+1234567890123456'::cube AS cube;
+SELECT '+1234567890123456'::"cube" AS "cube";
           cube          
 ------------------------
  (1.23456789012346e+15)
 (1 row)
 
-SELECT '-1234567890123456'::cube AS cube;
+SELECT '-1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (-1.23456789012346e+15)
 (1 row)
 
-SELECT '.1234567890123456'::cube AS cube;
+SELECT '.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '+.1234567890123456'::cube AS cube;
+SELECT '+.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '-.1234567890123456'::"cube" AS "cube";
          cube         
 ----------------------
  (-0.123456789012346)
 (1 row)
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '(1,2)'::cube AS cube;
+SELECT '(1,2)'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '1,2,3,4,5'::cube AS cube;
+SELECT '1,2,3,4,5'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '(0),(1)'::cube AS cube;
+SELECT '(0),(1)'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '[(0),(0)]'::cube AS cube;
+SELECT '[(0),(0)]'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '[(0),(1)]'::cube AS cube;
+SELECT '[(0),(1)]'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
+SELECT ''::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT ''::cube AS cube;
+LINE 1: SELECT ''::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT 'ABC'::cube AS cube;
+SELECT 'ABC'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT 'ABC'::cube AS cube;
+LINE 1: SELECT 'ABC'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "A"
-SELECT '()'::cube AS cube;
+SELECT '()'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '()'::cube AS cube;
+LINE 1: SELECT '()'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[]'::cube AS cube;
+SELECT '[]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[]'::cube AS cube;
+LINE 1: SELECT '[]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[()]'::cube AS cube;
+SELECT '[()]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[()]'::cube AS cube;
+LINE 1: SELECT '[()]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[(1)]'::cube AS cube;
+SELECT '[(1)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1)]'::cube AS cube;
+LINE 1: SELECT '[(1)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),]'::cube AS cube;
+SELECT '[(1),]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),]'::cube AS cube;
+LINE 1: SELECT '[(1),]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),2]'::cube AS cube;
+SELECT '[(1),2]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),2]'::cube AS cube;
+LINE 1: SELECT '[(1),2]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "2"
-SELECT '[(1),(2),(3)]'::cube AS cube;
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2),(3)]'::cube AS cube;
+LINE 1: SELECT '[(1),(2),(3)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '1,'::cube AS cube;
+SELECT '1,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,'::cube AS cube;
+LINE 1: SELECT '1,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,2,'::cube AS cube;
+SELECT '1,2,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2,'::cube AS cube;
+LINE 1: SELECT '1,2,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,,2'::cube AS cube;
+SELECT '1,,2'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,,2'::cube AS cube;
+LINE 1: SELECT '1,,2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,)'::cube AS cube;
+SELECT '(1,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,)'::cube AS cube;
+LINE 1: SELECT '(1,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,2,)'::cube AS cube;
+SELECT '(1,2,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,)'::cube AS cube;
+LINE 1: SELECT '(1,2,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,,2)'::cube AS cube;
+SELECT '(1,,2)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,,2)'::cube AS cube;
+LINE 1: SELECT '(1,,2)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2)],'::cube AS cube;
+LINE 1: SELECT '[(1),(2)],'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2,3),(2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2),(1,2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1),(2),'::cube AS cube; -- 2
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
 ERROR:  bad cube representation
-LINE 1: SELECT '(1),(2),'::cube AS cube;
+LINE 1: SELECT '(1),(2),'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3),(2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2,3),(2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2),(1,2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2),(1,2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)ab'::cube AS cube;
+LINE 1: SELECT '(1,2,3)ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2,3)a'::cube AS cube; -- 5
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)a'::cube AS cube;
+LINE 1: SELECT '(1,2,3)a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2)('::cube AS cube; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2)('::cube AS cube;
+LINE 1: SELECT '(1,2)('::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "("
-SELECT '1,2ab'::cube AS cube; -- 6
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2ab'::cube AS cube;
+LINE 1: SELECT '1,2ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1 e7'::cube AS cube; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1 e7'::cube AS cube;
+LINE 1: SELECT '1 e7'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "e"
-SELECT '1,2a'::cube AS cube; -- 7
+SELECT '1,2a'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2a'::cube AS cube;
+LINE 1: SELECT '1,2a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1..2'::cube AS cube;
+LINE 1: SELECT '1..2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ".2"
 --
 -- Testing building cubes from float8 values
 --
-SELECT cube(0::float8);
+SELECT "cube"(0::float8);
  cube 
 ------
  (0)
 (1 row)
 
-SELECT cube(1::float8);
+SELECT "cube"(1::float8);
  cube 
 ------
  (1)
 (1 row)
 
-SELECT cube(1,2);
+SELECT "cube"(1,2);
   cube   
 ---------
  (1),(2)
 (1 row)
 
-SELECT cube(cube(1,2),3);
+SELECT "cube"("cube"(1,2),3);
      cube      
 ---------------
  (1, 3),(2, 3)
 (1 row)
 
-SELECT cube(cube(1,2),3,4);
+SELECT "cube"("cube"(1,2),3,4);
      cube      
 ---------------
  (1, 3),(2, 4)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 5)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 6)
 (1 row)
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
  cube 
 ------
  (0)
 (1 row)
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
         cube         
 ---------------------
  (0, 1, 2),(3, 4, 5)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
 ERROR:  UR and LL arrays must be of same length
-SELECT cube(NULL::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
  cube 
 ------
  
 (1 row)
 
-SELECT cube('{0,1,2}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
         cube_subset        
 ---------------------------
  (5, 3, 1, 1),(8, 7, 6, 6)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
  cube_subset  
 --------------
  (5, 3, 1, 1)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
      cube     
 --------------
  (5, 6, 7, 8)
 (1 row)
 
-SELECT cube(1.37); -- cube_f8
+SELECT "cube"(1.37); -- cube_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(cube(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(1, 24)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 24)
@@ -555,12 +555,12 @@ SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
 DETAIL:  A cube cannot have more than 100 dimensions.
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
@@ -570,37 +570,37 @@ DETAIL:  A cube cannot have more than 100 dimensions.
 --
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -609,97 +609,97 @@ SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1'::cube   < '2'::cube AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1,1'::cube > '1,2'::cube AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,1'::cube < '1,2'::cube AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -707,235 +707,235 @@ SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contains" (the left operand is the cube that entirely encloses the
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
@@ -943,77 +943,77 @@ SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
  cube_distance 
 ---------------
              4
 (1 row)
 
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
  cube_distance 
 ---------------
            0.5
 (1 row)
 
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
  cube_distance 
 ---------------
              0
 (1 row)
 
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
  cube_distance 
 ---------------
            190
 (1 row)
 
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
   cube_distance   
 ------------------
  140.762210837994
 (1 row)
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
+SELECT "cube"('(1,1.2)'::text);
    cube   
 ----------
  (1, 1.2)
 (1 row)
 
-SELECT cube(NULL);
+SELECT "cube"(NULL);
  cube 
 ------
  
 (1 row)
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
+SELECT cube_dim('(0)'::"cube");
  cube_dim 
 ----------
         1
 (1 row)
 
-SELECT cube_dim('(0,0)'::cube);
+SELECT cube_dim('(0,0)'::"cube");
  cube_dim 
 ----------
         2
 (1 row)
 
-SELECT cube_dim('(0,0,0)'::cube);
+SELECT cube_dim('(0,0,0)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
  cube_dim 
 ----------
         5
@@ -1021,55 +1021,55 @@ SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
 
 -- Test of cube_ll_coord function (retrieves LL coodinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ll_coord 
 ---------------
             -1
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ll_coord 
 ---------------
             -2
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
  cube_ll_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
  cube_ll_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
  cube_ll_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
  cube_ll_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
@@ -1077,55 +1077,55 @@ SELECT cube_ll_coord('(42,137)'::cube, 3);
 
 -- Test of cube_ur_coord function (retrieves UR coodinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
  cube_ur_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
  cube_ur_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
@@ -1133,37 +1133,37 @@ SELECT cube_ur_coord('(42,137)'::cube, 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
+SELECT cube_is_point('(0)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
  cube_is_point 
 ---------------
  f
@@ -1171,121 +1171,121 @@ SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 2);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
  cube_enlarge 
 --------------
  (-2),(2)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 2);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-1, -1),(1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
          cube_enlarge          
 -------------------------------
  (-3, -1, -1, -1),(3, 1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 2);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-4, -3),(3, 8)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
    cube_enlarge   
 ------------------
  (-6, -5),(5, 10)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
   cube_enlarge   
 -----------------
  (-2, -1),(1, 6)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
     cube_enlarge     
 ---------------------
  (-0.5, 1),(-0.5, 4)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
@@ -1293,31 +1293,31 @@ SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
       cube_union      
 ----------------------
  (1, 2, 0),(8, 9, 10)
 (1 row)
 
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
         cube_union         
 ---------------------------
  (1, 2, 0, 0),(4, 2, 0, 0)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
   cube_union   
 ---------------
  (1, 2),(4, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
  cube_union 
 ------------
  (1, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
  cube_union 
 ------------
  (1, 2, 0)
@@ -1325,43 +1325,43 @@ SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
    cube_inter    
 -----------------
  (3, 4),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
   cube_inter   
 ---------------
  (3, 4),(6, 5)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
     cube_inter     
 -------------------
  (13, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
     cube_inter    
 ------------------
  (3, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
  cube_inter 
 ------------
  (10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
  cube_inter 
 ------------
  (1, 2, 3)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
      cube_inter      
 ---------------------
  (5, 6, 3),(1, 2, 3)
@@ -1369,13 +1369,13 @@ SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
  cube_size 
 -----------
         88
 (1 row)
 
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(42,137)'::"cube");
  cube_size 
 -----------
          0
@@ -1383,7 +1383,7 @@ SELECT cube_size('(42,137)'::cube);
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 \copy test_cube from 'data/test_cube.data'
 CREATE INDEX test_cube_ix ON test_cube USING gist (c);
 SELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' ORDER BY c;
diff --git a/contrib/cube/expected/cube_1.out b/contrib/cube/expected/cube_1.out
index c07d61d..4f47c54 100644
--- a/contrib/cube/expected/cube_1.out
+++ b/contrib/cube/expected/cube_1.out
@@ -1,552 +1,552 @@
 --
 --  Test cube datatype
 --
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 --
 -- testing the input and output functions
 --
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1'::cube AS cube;
+SELECT '-1'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1.'::cube AS cube;
+SELECT '1.'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.'::cube AS cube;
+SELECT '-1.'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '.1'::cube AS cube;
+SELECT '.1'::"cube" AS "cube";
  cube  
 -------
  (0.1)
 (1 row)
 
-SELECT '-.1'::cube AS cube;
+SELECT '-.1'::"cube" AS "cube";
   cube  
 --------
  (-0.1)
 (1 row)
 
-SELECT '1.0'::cube AS cube;
+SELECT '1.0'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.0'::cube AS cube;
+SELECT '-1.0'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1e27'::cube AS cube;
+SELECT '1e27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1e27'::cube AS cube;
+SELECT '-1e27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1.0e27'::cube AS cube;
+SELECT '1.0e27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1.0e27'::cube AS cube;
+SELECT '-1.0e27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1e+27'::cube AS cube;
+SELECT '1e+27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1e+27'::cube AS cube;
+SELECT '-1e+27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1.0e+27'::cube AS cube;
+SELECT '1.0e+27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1.0e+27'::cube AS cube;
+SELECT '-1.0e+27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1e-7'::cube AS cube;
+SELECT '1e-7'::"cube" AS "cube";
   cube   
 ---------
  (1e-07)
 (1 row)
 
-SELECT '-1e-7'::cube AS cube;
+SELECT '-1e-7'::"cube" AS "cube";
    cube   
 ----------
  (-1e-07)
 (1 row)
 
-SELECT '1.0e-7'::cube AS cube;
+SELECT '1.0e-7'::"cube" AS "cube";
   cube   
 ---------
  (1e-07)
 (1 row)
 
-SELECT '-1.0e-7'::cube AS cube;
+SELECT '-1.0e-7'::"cube" AS "cube";
    cube   
 ----------
  (-1e-07)
 (1 row)
 
-SELECT '1e-700'::cube AS cube;
+SELECT '1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '-1e-700'::cube AS cube;
+SELECT '-1e-700'::"cube" AS "cube";
  cube 
 ------
  (-0)
 (1 row)
 
-SELECT '1234567890123456'::cube AS cube;
+SELECT '1234567890123456'::"cube" AS "cube";
           cube          
 ------------------------
  (1.23456789012346e+15)
 (1 row)
 
-SELECT '+1234567890123456'::cube AS cube;
+SELECT '+1234567890123456'::"cube" AS "cube";
           cube          
 ------------------------
  (1.23456789012346e+15)
 (1 row)
 
-SELECT '-1234567890123456'::cube AS cube;
+SELECT '-1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (-1.23456789012346e+15)
 (1 row)
 
-SELECT '.1234567890123456'::cube AS cube;
+SELECT '.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '+.1234567890123456'::cube AS cube;
+SELECT '+.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '-.1234567890123456'::"cube" AS "cube";
          cube         
 ----------------------
  (-0.123456789012346)
 (1 row)
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '(1,2)'::cube AS cube;
+SELECT '(1,2)'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '1,2,3,4,5'::cube AS cube;
+SELECT '1,2,3,4,5'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '(0),(1)'::cube AS cube;
+SELECT '(0),(1)'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '[(0),(0)]'::cube AS cube;
+SELECT '[(0),(0)]'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '[(0),(1)]'::cube AS cube;
+SELECT '[(0),(1)]'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
+SELECT ''::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT ''::cube AS cube;
+LINE 1: SELECT ''::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT 'ABC'::cube AS cube;
+SELECT 'ABC'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT 'ABC'::cube AS cube;
+LINE 1: SELECT 'ABC'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "A"
-SELECT '()'::cube AS cube;
+SELECT '()'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '()'::cube AS cube;
+LINE 1: SELECT '()'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[]'::cube AS cube;
+SELECT '[]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[]'::cube AS cube;
+LINE 1: SELECT '[]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[()]'::cube AS cube;
+SELECT '[()]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[()]'::cube AS cube;
+LINE 1: SELECT '[()]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[(1)]'::cube AS cube;
+SELECT '[(1)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1)]'::cube AS cube;
+LINE 1: SELECT '[(1)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),]'::cube AS cube;
+SELECT '[(1),]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),]'::cube AS cube;
+LINE 1: SELECT '[(1),]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),2]'::cube AS cube;
+SELECT '[(1),2]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),2]'::cube AS cube;
+LINE 1: SELECT '[(1),2]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "2"
-SELECT '[(1),(2),(3)]'::cube AS cube;
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2),(3)]'::cube AS cube;
+LINE 1: SELECT '[(1),(2),(3)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '1,'::cube AS cube;
+SELECT '1,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,'::cube AS cube;
+LINE 1: SELECT '1,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,2,'::cube AS cube;
+SELECT '1,2,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2,'::cube AS cube;
+LINE 1: SELECT '1,2,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,,2'::cube AS cube;
+SELECT '1,,2'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,,2'::cube AS cube;
+LINE 1: SELECT '1,,2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,)'::cube AS cube;
+SELECT '(1,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,)'::cube AS cube;
+LINE 1: SELECT '(1,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,2,)'::cube AS cube;
+SELECT '(1,2,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,)'::cube AS cube;
+LINE 1: SELECT '(1,2,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,,2)'::cube AS cube;
+SELECT '(1,,2)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,,2)'::cube AS cube;
+LINE 1: SELECT '(1,,2)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2)],'::cube AS cube;
+LINE 1: SELECT '[(1),(2)],'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2,3),(2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2),(1,2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1),(2),'::cube AS cube; -- 2
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
 ERROR:  bad cube representation
-LINE 1: SELECT '(1),(2),'::cube AS cube;
+LINE 1: SELECT '(1),(2),'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3),(2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2,3),(2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2),(1,2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2),(1,2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)ab'::cube AS cube;
+LINE 1: SELECT '(1,2,3)ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2,3)a'::cube AS cube; -- 5
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)a'::cube AS cube;
+LINE 1: SELECT '(1,2,3)a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2)('::cube AS cube; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2)('::cube AS cube;
+LINE 1: SELECT '(1,2)('::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "("
-SELECT '1,2ab'::cube AS cube; -- 6
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2ab'::cube AS cube;
+LINE 1: SELECT '1,2ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1 e7'::cube AS cube; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1 e7'::cube AS cube;
+LINE 1: SELECT '1 e7'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "e"
-SELECT '1,2a'::cube AS cube; -- 7
+SELECT '1,2a'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2a'::cube AS cube;
+LINE 1: SELECT '1,2a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1..2'::cube AS cube;
+LINE 1: SELECT '1..2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ".2"
 --
 -- Testing building cubes from float8 values
 --
-SELECT cube(0::float8);
+SELECT "cube"(0::float8);
  cube 
 ------
  (0)
 (1 row)
 
-SELECT cube(1::float8);
+SELECT "cube"(1::float8);
  cube 
 ------
  (1)
 (1 row)
 
-SELECT cube(1,2);
+SELECT "cube"(1,2);
   cube   
 ---------
  (1),(2)
 (1 row)
 
-SELECT cube(cube(1,2),3);
+SELECT "cube"("cube"(1,2),3);
      cube      
 ---------------
  (1, 3),(2, 3)
 (1 row)
 
-SELECT cube(cube(1,2),3,4);
+SELECT "cube"("cube"(1,2),3,4);
      cube      
 ---------------
  (1, 3),(2, 4)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 5)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 6)
 (1 row)
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
  cube 
 ------
  (0)
 (1 row)
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
         cube         
 ---------------------
  (0, 1, 2),(3, 4, 5)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
 ERROR:  UR and LL arrays must be of same length
-SELECT cube(NULL::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
  cube 
 ------
  
 (1 row)
 
-SELECT cube('{0,1,2}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
         cube_subset        
 ---------------------------
  (5, 3, 1, 1),(8, 7, 6, 6)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
  cube_subset  
 --------------
  (5, 3, 1, 1)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
      cube     
 --------------
  (5, 6, 7, 8)
 (1 row)
 
-SELECT cube(1.37); -- cube_f8
+SELECT "cube"(1.37); -- cube_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(cube(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(1, 24)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 24)
@@ -555,12 +555,12 @@ SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
 DETAIL:  A cube cannot have more than 100 dimensions.
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
@@ -570,37 +570,37 @@ DETAIL:  A cube cannot have more than 100 dimensions.
 --
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -609,97 +609,97 @@ SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1'::cube   < '2'::cube AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1,1'::cube > '1,2'::cube AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,1'::cube < '1,2'::cube AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -707,235 +707,235 @@ SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contains" (the left operand is the cube that entirely encloses the
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
@@ -943,77 +943,77 @@ SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
  cube_distance 
 ---------------
              4
 (1 row)
 
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
  cube_distance 
 ---------------
            0.5
 (1 row)
 
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
  cube_distance 
 ---------------
              0
 (1 row)
 
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
  cube_distance 
 ---------------
            190
 (1 row)
 
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
   cube_distance   
 ------------------
  140.762210837994
 (1 row)
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
+SELECT "cube"('(1,1.2)'::text);
    cube   
 ----------
  (1, 1.2)
 (1 row)
 
-SELECT cube(NULL);
+SELECT "cube"(NULL);
  cube 
 ------
  
 (1 row)
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
+SELECT cube_dim('(0)'::"cube");
  cube_dim 
 ----------
         1
 (1 row)
 
-SELECT cube_dim('(0,0)'::cube);
+SELECT cube_dim('(0,0)'::"cube");
  cube_dim 
 ----------
         2
 (1 row)
 
-SELECT cube_dim('(0,0,0)'::cube);
+SELECT cube_dim('(0,0,0)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
  cube_dim 
 ----------
         5
@@ -1021,55 +1021,55 @@ SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
 
 -- Test of cube_ll_coord function (retrieves LL coodinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ll_coord 
 ---------------
             -1
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ll_coord 
 ---------------
             -2
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
  cube_ll_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
  cube_ll_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
  cube_ll_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
  cube_ll_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
@@ -1077,55 +1077,55 @@ SELECT cube_ll_coord('(42,137)'::cube, 3);
 
 -- Test of cube_ur_coord function (retrieves UR coodinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
  cube_ur_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
  cube_ur_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
@@ -1133,37 +1133,37 @@ SELECT cube_ur_coord('(42,137)'::cube, 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
+SELECT cube_is_point('(0)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
  cube_is_point 
 ---------------
  f
@@ -1171,121 +1171,121 @@ SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 2);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
  cube_enlarge 
 --------------
  (-2),(2)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 2);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-1, -1),(1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
          cube_enlarge          
 -------------------------------
  (-3, -1, -1, -1),(3, 1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 2);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-4, -3),(3, 8)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
    cube_enlarge   
 ------------------
  (-6, -5),(5, 10)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
   cube_enlarge   
 -----------------
  (-2, -1),(1, 6)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
     cube_enlarge     
 ---------------------
  (-0.5, 1),(-0.5, 4)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
@@ -1293,31 +1293,31 @@ SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
       cube_union      
 ----------------------
  (1, 2, 0),(8, 9, 10)
 (1 row)
 
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
         cube_union         
 ---------------------------
  (1, 2, 0, 0),(4, 2, 0, 0)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
   cube_union   
 ---------------
  (1, 2),(4, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
  cube_union 
 ------------
  (1, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
  cube_union 
 ------------
  (1, 2, 0)
@@ -1325,43 +1325,43 @@ SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
    cube_inter    
 -----------------
  (3, 4),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
   cube_inter   
 ---------------
  (3, 4),(6, 5)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
     cube_inter     
 -------------------
  (13, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
     cube_inter    
 ------------------
  (3, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
  cube_inter 
 ------------
  (10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
  cube_inter 
 ------------
  (1, 2, 3)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
      cube_inter      
 ---------------------
  (5, 6, 3),(1, 2, 3)
@@ -1369,13 +1369,13 @@ SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
  cube_size 
 -----------
         88
 (1 row)
 
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(42,137)'::"cube");
  cube_size 
 -----------
          0
@@ -1383,7 +1383,7 @@ SELECT cube_size('(42,137)'::cube);
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 \copy test_cube from 'data/test_cube.data'
 CREATE INDEX test_cube_ix ON test_cube USING gist (c);
 SELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' ORDER BY c;
diff --git a/contrib/cube/expected/cube_2.out b/contrib/cube/expected/cube_2.out
index 3767d0e..747e9ba 100644
--- a/contrib/cube/expected/cube_2.out
+++ b/contrib/cube/expected/cube_2.out
@@ -1,552 +1,552 @@
 --
 --  Test cube datatype
 --
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 --
 -- testing the input and output functions
 --
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1'::cube AS cube;
+SELECT '-1'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1.'::cube AS cube;
+SELECT '1.'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.'::cube AS cube;
+SELECT '-1.'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '.1'::cube AS cube;
+SELECT '.1'::"cube" AS "cube";
  cube  
 -------
  (0.1)
 (1 row)
 
-SELECT '-.1'::cube AS cube;
+SELECT '-.1'::"cube" AS "cube";
   cube  
 --------
  (-0.1)
 (1 row)
 
-SELECT '1.0'::cube AS cube;
+SELECT '1.0'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.0'::cube AS cube;
+SELECT '-1.0'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1e27'::cube AS cube;
+SELECT '1e27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1e27'::cube AS cube;
+SELECT '-1e27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1.0e27'::cube AS cube;
+SELECT '1.0e27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1.0e27'::cube AS cube;
+SELECT '-1.0e27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1e+27'::cube AS cube;
+SELECT '1e+27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1e+27'::cube AS cube;
+SELECT '-1e+27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1.0e+27'::cube AS cube;
+SELECT '1.0e+27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1.0e+27'::cube AS cube;
+SELECT '-1.0e+27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1e-7'::cube AS cube;
+SELECT '1e-7'::"cube" AS "cube";
    cube   
 ----------
  (1e-007)
 (1 row)
 
-SELECT '-1e-7'::cube AS cube;
+SELECT '-1e-7'::"cube" AS "cube";
    cube    
 -----------
  (-1e-007)
 (1 row)
 
-SELECT '1.0e-7'::cube AS cube;
+SELECT '1.0e-7'::"cube" AS "cube";
    cube   
 ----------
  (1e-007)
 (1 row)
 
-SELECT '-1.0e-7'::cube AS cube;
+SELECT '-1.0e-7'::"cube" AS "cube";
    cube    
 -----------
  (-1e-007)
 (1 row)
 
-SELECT '1e-700'::cube AS cube;
+SELECT '1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '-1e-700'::cube AS cube;
+SELECT '-1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '1234567890123456'::cube AS cube;
+SELECT '1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (1.23456789012346e+015)
 (1 row)
 
-SELECT '+1234567890123456'::cube AS cube;
+SELECT '+1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (1.23456789012346e+015)
 (1 row)
 
-SELECT '-1234567890123456'::cube AS cube;
+SELECT '-1234567890123456'::"cube" AS "cube";
            cube           
 --------------------------
  (-1.23456789012346e+015)
 (1 row)
 
-SELECT '.1234567890123456'::cube AS cube;
+SELECT '.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '+.1234567890123456'::cube AS cube;
+SELECT '+.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '-.1234567890123456'::"cube" AS "cube";
          cube         
 ----------------------
  (-0.123456789012346)
 (1 row)
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '(1,2)'::cube AS cube;
+SELECT '(1,2)'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '1,2,3,4,5'::cube AS cube;
+SELECT '1,2,3,4,5'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '(0),(1)'::cube AS cube;
+SELECT '(0),(1)'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '[(0),(0)]'::cube AS cube;
+SELECT '[(0),(0)]'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '[(0),(1)]'::cube AS cube;
+SELECT '[(0),(1)]'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
+SELECT ''::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT ''::cube AS cube;
+LINE 1: SELECT ''::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT 'ABC'::cube AS cube;
+SELECT 'ABC'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT 'ABC'::cube AS cube;
+LINE 1: SELECT 'ABC'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "A"
-SELECT '()'::cube AS cube;
+SELECT '()'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '()'::cube AS cube;
+LINE 1: SELECT '()'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[]'::cube AS cube;
+SELECT '[]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[]'::cube AS cube;
+LINE 1: SELECT '[]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[()]'::cube AS cube;
+SELECT '[()]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[()]'::cube AS cube;
+LINE 1: SELECT '[()]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[(1)]'::cube AS cube;
+SELECT '[(1)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1)]'::cube AS cube;
+LINE 1: SELECT '[(1)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),]'::cube AS cube;
+SELECT '[(1),]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),]'::cube AS cube;
+LINE 1: SELECT '[(1),]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),2]'::cube AS cube;
+SELECT '[(1),2]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),2]'::cube AS cube;
+LINE 1: SELECT '[(1),2]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "2"
-SELECT '[(1),(2),(3)]'::cube AS cube;
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2),(3)]'::cube AS cube;
+LINE 1: SELECT '[(1),(2),(3)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '1,'::cube AS cube;
+SELECT '1,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,'::cube AS cube;
+LINE 1: SELECT '1,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,2,'::cube AS cube;
+SELECT '1,2,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2,'::cube AS cube;
+LINE 1: SELECT '1,2,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,,2'::cube AS cube;
+SELECT '1,,2'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,,2'::cube AS cube;
+LINE 1: SELECT '1,,2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,)'::cube AS cube;
+SELECT '(1,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,)'::cube AS cube;
+LINE 1: SELECT '(1,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,2,)'::cube AS cube;
+SELECT '(1,2,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,)'::cube AS cube;
+LINE 1: SELECT '(1,2,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,,2)'::cube AS cube;
+SELECT '(1,,2)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,,2)'::cube AS cube;
+LINE 1: SELECT '(1,,2)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2)],'::cube AS cube;
+LINE 1: SELECT '[(1),(2)],'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2,3),(2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2),(1,2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1),(2),'::cube AS cube; -- 2
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
 ERROR:  bad cube representation
-LINE 1: SELECT '(1),(2),'::cube AS cube;
+LINE 1: SELECT '(1),(2),'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3),(2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2,3),(2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2),(1,2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2),(1,2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)ab'::cube AS cube;
+LINE 1: SELECT '(1,2,3)ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2,3)a'::cube AS cube; -- 5
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)a'::cube AS cube;
+LINE 1: SELECT '(1,2,3)a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2)('::cube AS cube; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2)('::cube AS cube;
+LINE 1: SELECT '(1,2)('::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "("
-SELECT '1,2ab'::cube AS cube; -- 6
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2ab'::cube AS cube;
+LINE 1: SELECT '1,2ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1 e7'::cube AS cube; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1 e7'::cube AS cube;
+LINE 1: SELECT '1 e7'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "e"
-SELECT '1,2a'::cube AS cube; -- 7
+SELECT '1,2a'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2a'::cube AS cube;
+LINE 1: SELECT '1,2a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1..2'::cube AS cube;
+LINE 1: SELECT '1..2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ".2"
 --
 -- Testing building cubes from float8 values
 --
-SELECT cube(0::float8);
+SELECT "cube"(0::float8);
  cube 
 ------
  (0)
 (1 row)
 
-SELECT cube(1::float8);
+SELECT "cube"(1::float8);
  cube 
 ------
  (1)
 (1 row)
 
-SELECT cube(1,2);
+SELECT "cube"(1,2);
   cube   
 ---------
  (1),(2)
 (1 row)
 
-SELECT cube(cube(1,2),3);
+SELECT "cube"("cube"(1,2),3);
      cube      
 ---------------
  (1, 3),(2, 3)
 (1 row)
 
-SELECT cube(cube(1,2),3,4);
+SELECT "cube"("cube"(1,2),3,4);
      cube      
 ---------------
  (1, 3),(2, 4)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 5)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 6)
 (1 row)
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
  cube 
 ------
  (0)
 (1 row)
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
         cube         
 ---------------------
  (0, 1, 2),(3, 4, 5)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
 ERROR:  UR and LL arrays must be of same length
-SELECT cube(NULL::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
  cube 
 ------
  
 (1 row)
 
-SELECT cube('{0,1,2}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
         cube_subset        
 ---------------------------
  (5, 3, 1, 1),(8, 7, 6, 6)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
  cube_subset  
 --------------
  (5, 3, 1, 1)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
      cube     
 --------------
  (5, 6, 7, 8)
 (1 row)
 
-SELECT cube(1.37); -- cube_f8
+SELECT "cube"(1.37); -- cube_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(cube(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(1, 24)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 24)
@@ -555,12 +555,12 @@ SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
 DETAIL:  A cube cannot have more than 100 dimensions.
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
@@ -570,37 +570,37 @@ DETAIL:  A cube cannot have more than 100 dimensions.
 --
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -609,97 +609,97 @@ SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1'::cube   < '2'::cube AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1,1'::cube > '1,2'::cube AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,1'::cube < '1,2'::cube AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -707,235 +707,235 @@ SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contains" (the left operand is the cube that entirely encloses the
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
@@ -943,77 +943,77 @@ SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
  cube_distance 
 ---------------
              4
 (1 row)
 
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
  cube_distance 
 ---------------
            0.5
 (1 row)
 
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
  cube_distance 
 ---------------
              0
 (1 row)
 
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
  cube_distance 
 ---------------
            190
 (1 row)
 
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
   cube_distance   
 ------------------
  140.762210837994
 (1 row)
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
+SELECT "cube"('(1,1.2)'::text);
    cube   
 ----------
  (1, 1.2)
 (1 row)
 
-SELECT cube(NULL);
+SELECT "cube"(NULL);
  cube 
 ------
  
 (1 row)
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
+SELECT cube_dim('(0)'::"cube");
  cube_dim 
 ----------
         1
 (1 row)
 
-SELECT cube_dim('(0,0)'::cube);
+SELECT cube_dim('(0,0)'::"cube");
  cube_dim 
 ----------
         2
 (1 row)
 
-SELECT cube_dim('(0,0,0)'::cube);
+SELECT cube_dim('(0,0,0)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
  cube_dim 
 ----------
         5
@@ -1021,55 +1021,55 @@ SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
 
 -- Test of cube_ll_coord function (retrieves LL coodinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ll_coord 
 ---------------
             -1
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ll_coord 
 ---------------
             -2
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
  cube_ll_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
  cube_ll_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
  cube_ll_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
  cube_ll_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
@@ -1077,55 +1077,55 @@ SELECT cube_ll_coord('(42,137)'::cube, 3);
 
 -- Test of cube_ur_coord function (retrieves UR coodinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
  cube_ur_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
  cube_ur_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
@@ -1133,37 +1133,37 @@ SELECT cube_ur_coord('(42,137)'::cube, 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
+SELECT cube_is_point('(0)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
  cube_is_point 
 ---------------
  f
@@ -1171,121 +1171,121 @@ SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 2);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
  cube_enlarge 
 --------------
  (-2),(2)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 2);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-1, -1),(1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
          cube_enlarge          
 -------------------------------
  (-3, -1, -1, -1),(3, 1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 2);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-4, -3),(3, 8)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
    cube_enlarge   
 ------------------
  (-6, -5),(5, 10)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
   cube_enlarge   
 -----------------
  (-2, -1),(1, 6)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
     cube_enlarge     
 ---------------------
  (-0.5, 1),(-0.5, 4)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
@@ -1293,31 +1293,31 @@ SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
       cube_union      
 ----------------------
  (1, 2, 0),(8, 9, 10)
 (1 row)
 
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
         cube_union         
 ---------------------------
  (1, 2, 0, 0),(4, 2, 0, 0)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
   cube_union   
 ---------------
  (1, 2),(4, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
  cube_union 
 ------------
  (1, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
  cube_union 
 ------------
  (1, 2, 0)
@@ -1325,43 +1325,43 @@ SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
    cube_inter    
 -----------------
  (3, 4),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
   cube_inter   
 ---------------
  (3, 4),(6, 5)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
     cube_inter     
 -------------------
  (13, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
     cube_inter    
 ------------------
  (3, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
  cube_inter 
 ------------
  (10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
  cube_inter 
 ------------
  (1, 2, 3)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
      cube_inter      
 ---------------------
  (5, 6, 3),(1, 2, 3)
@@ -1369,13 +1369,13 @@ SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
  cube_size 
 -----------
         88
 (1 row)
 
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(42,137)'::"cube");
  cube_size 
 -----------
          0
@@ -1383,7 +1383,7 @@ SELECT cube_size('(42,137)'::cube);
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 \copy test_cube from 'data/test_cube.data'
 CREATE INDEX test_cube_ix ON test_cube USING gist (c);
 SELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' ORDER BY c;
diff --git a/contrib/cube/expected/cube_3.out b/contrib/cube/expected/cube_3.out
index 2aa42be..33baec1 100644
--- a/contrib/cube/expected/cube_3.out
+++ b/contrib/cube/expected/cube_3.out
@@ -1,552 +1,552 @@
 --
 --  Test cube datatype
 --
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 --
 -- testing the input and output functions
 --
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1'::cube AS cube;
+SELECT '-1'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1.'::cube AS cube;
+SELECT '1.'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.'::cube AS cube;
+SELECT '-1.'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '.1'::cube AS cube;
+SELECT '.1'::"cube" AS "cube";
  cube  
 -------
  (0.1)
 (1 row)
 
-SELECT '-.1'::cube AS cube;
+SELECT '-.1'::"cube" AS "cube";
   cube  
 --------
  (-0.1)
 (1 row)
 
-SELECT '1.0'::cube AS cube;
+SELECT '1.0'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.0'::cube AS cube;
+SELECT '-1.0'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1e27'::cube AS cube;
+SELECT '1e27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1e27'::cube AS cube;
+SELECT '-1e27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1.0e27'::cube AS cube;
+SELECT '1.0e27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1.0e27'::cube AS cube;
+SELECT '-1.0e27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1e+27'::cube AS cube;
+SELECT '1e+27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1e+27'::cube AS cube;
+SELECT '-1e+27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1.0e+27'::cube AS cube;
+SELECT '1.0e+27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1.0e+27'::cube AS cube;
+SELECT '-1.0e+27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1e-7'::cube AS cube;
+SELECT '1e-7'::"cube" AS "cube";
    cube   
 ----------
  (1e-007)
 (1 row)
 
-SELECT '-1e-7'::cube AS cube;
+SELECT '-1e-7'::"cube" AS "cube";
    cube    
 -----------
  (-1e-007)
 (1 row)
 
-SELECT '1.0e-7'::cube AS cube;
+SELECT '1.0e-7'::"cube" AS "cube";
    cube   
 ----------
  (1e-007)
 (1 row)
 
-SELECT '-1.0e-7'::cube AS cube;
+SELECT '-1.0e-7'::"cube" AS "cube";
    cube    
 -----------
  (-1e-007)
 (1 row)
 
-SELECT '1e-700'::cube AS cube;
+SELECT '1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '-1e-700'::cube AS cube;
+SELECT '-1e-700'::"cube" AS "cube";
  cube 
 ------
  (-0)
 (1 row)
 
-SELECT '1234567890123456'::cube AS cube;
+SELECT '1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (1.23456789012346e+015)
 (1 row)
 
-SELECT '+1234567890123456'::cube AS cube;
+SELECT '+1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (1.23456789012346e+015)
 (1 row)
 
-SELECT '-1234567890123456'::cube AS cube;
+SELECT '-1234567890123456'::"cube" AS "cube";
            cube           
 --------------------------
  (-1.23456789012346e+015)
 (1 row)
 
-SELECT '.1234567890123456'::cube AS cube;
+SELECT '.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '+.1234567890123456'::cube AS cube;
+SELECT '+.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '-.1234567890123456'::"cube" AS "cube";
          cube         
 ----------------------
  (-0.123456789012346)
 (1 row)
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '(1,2)'::cube AS cube;
+SELECT '(1,2)'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '1,2,3,4,5'::cube AS cube;
+SELECT '1,2,3,4,5'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '(0),(1)'::cube AS cube;
+SELECT '(0),(1)'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '[(0),(0)]'::cube AS cube;
+SELECT '[(0),(0)]'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '[(0),(1)]'::cube AS cube;
+SELECT '[(0),(1)]'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
+SELECT ''::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT ''::cube AS cube;
+LINE 1: SELECT ''::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT 'ABC'::cube AS cube;
+SELECT 'ABC'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT 'ABC'::cube AS cube;
+LINE 1: SELECT 'ABC'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "A"
-SELECT '()'::cube AS cube;
+SELECT '()'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '()'::cube AS cube;
+LINE 1: SELECT '()'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[]'::cube AS cube;
+SELECT '[]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[]'::cube AS cube;
+LINE 1: SELECT '[]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[()]'::cube AS cube;
+SELECT '[()]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[()]'::cube AS cube;
+LINE 1: SELECT '[()]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[(1)]'::cube AS cube;
+SELECT '[(1)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1)]'::cube AS cube;
+LINE 1: SELECT '[(1)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),]'::cube AS cube;
+SELECT '[(1),]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),]'::cube AS cube;
+LINE 1: SELECT '[(1),]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),2]'::cube AS cube;
+SELECT '[(1),2]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),2]'::cube AS cube;
+LINE 1: SELECT '[(1),2]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "2"
-SELECT '[(1),(2),(3)]'::cube AS cube;
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2),(3)]'::cube AS cube;
+LINE 1: SELECT '[(1),(2),(3)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '1,'::cube AS cube;
+SELECT '1,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,'::cube AS cube;
+LINE 1: SELECT '1,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,2,'::cube AS cube;
+SELECT '1,2,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2,'::cube AS cube;
+LINE 1: SELECT '1,2,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,,2'::cube AS cube;
+SELECT '1,,2'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,,2'::cube AS cube;
+LINE 1: SELECT '1,,2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,)'::cube AS cube;
+SELECT '(1,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,)'::cube AS cube;
+LINE 1: SELECT '(1,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,2,)'::cube AS cube;
+SELECT '(1,2,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,)'::cube AS cube;
+LINE 1: SELECT '(1,2,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,,2)'::cube AS cube;
+SELECT '(1,,2)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,,2)'::cube AS cube;
+LINE 1: SELECT '(1,,2)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2)],'::cube AS cube;
+LINE 1: SELECT '[(1),(2)],'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2,3),(2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2),(1,2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1),(2),'::cube AS cube; -- 2
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
 ERROR:  bad cube representation
-LINE 1: SELECT '(1),(2),'::cube AS cube;
+LINE 1: SELECT '(1),(2),'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3),(2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2,3),(2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2),(1,2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2),(1,2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)ab'::cube AS cube;
+LINE 1: SELECT '(1,2,3)ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2,3)a'::cube AS cube; -- 5
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)a'::cube AS cube;
+LINE 1: SELECT '(1,2,3)a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2)('::cube AS cube; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2)('::cube AS cube;
+LINE 1: SELECT '(1,2)('::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "("
-SELECT '1,2ab'::cube AS cube; -- 6
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2ab'::cube AS cube;
+LINE 1: SELECT '1,2ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1 e7'::cube AS cube; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1 e7'::cube AS cube;
+LINE 1: SELECT '1 e7'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "e"
-SELECT '1,2a'::cube AS cube; -- 7
+SELECT '1,2a'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2a'::cube AS cube;
+LINE 1: SELECT '1,2a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1..2'::cube AS cube;
+LINE 1: SELECT '1..2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ".2"
 --
 -- Testing building cubes from float8 values
 --
-SELECT cube(0::float8);
+SELECT "cube"(0::float8);
  cube 
 ------
  (0)
 (1 row)
 
-SELECT cube(1::float8);
+SELECT "cube"(1::float8);
  cube 
 ------
  (1)
 (1 row)
 
-SELECT cube(1,2);
+SELECT "cube"(1,2);
   cube   
 ---------
  (1),(2)
 (1 row)
 
-SELECT cube(cube(1,2),3);
+SELECT "cube"("cube"(1,2),3);
      cube      
 ---------------
  (1, 3),(2, 3)
 (1 row)
 
-SELECT cube(cube(1,2),3,4);
+SELECT "cube"("cube"(1,2),3,4);
      cube      
 ---------------
  (1, 3),(2, 4)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 5)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 6)
 (1 row)
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
  cube 
 ------
  (0)
 (1 row)
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
         cube         
 ---------------------
  (0, 1, 2),(3, 4, 5)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
 ERROR:  UR and LL arrays must be of same length
-SELECT cube(NULL::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
  cube 
 ------
  
 (1 row)
 
-SELECT cube('{0,1,2}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
         cube_subset        
 ---------------------------
  (5, 3, 1, 1),(8, 7, 6, 6)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
  cube_subset  
 --------------
  (5, 3, 1, 1)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
      cube     
 --------------
  (5, 6, 7, 8)
 (1 row)
 
-SELECT cube(1.37); -- cube_f8
+SELECT "cube"(1.37); -- cube_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(cube(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(1, 24)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 24)
@@ -555,12 +555,12 @@ SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
 DETAIL:  A cube cannot have more than 100 dimensions.
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
@@ -570,37 +570,37 @@ DETAIL:  A cube cannot have more than 100 dimensions.
 --
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -609,97 +609,97 @@ SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1'::cube   < '2'::cube AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1,1'::cube > '1,2'::cube AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,1'::cube < '1,2'::cube AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -707,235 +707,235 @@ SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contains" (the left operand is the cube that entirely encloses the
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
@@ -943,77 +943,77 @@ SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
  cube_distance 
 ---------------
              4
 (1 row)
 
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
  cube_distance 
 ---------------
            0.5
 (1 row)
 
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
  cube_distance 
 ---------------
              0
 (1 row)
 
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
  cube_distance 
 ---------------
            190
 (1 row)
 
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
   cube_distance   
 ------------------
  140.762210837994
 (1 row)
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
+SELECT "cube"('(1,1.2)'::text);
    cube   
 ----------
  (1, 1.2)
 (1 row)
 
-SELECT cube(NULL);
+SELECT "cube"(NULL);
  cube 
 ------
  
 (1 row)
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
+SELECT cube_dim('(0)'::"cube");
  cube_dim 
 ----------
         1
 (1 row)
 
-SELECT cube_dim('(0,0)'::cube);
+SELECT cube_dim('(0,0)'::"cube");
  cube_dim 
 ----------
         2
 (1 row)
 
-SELECT cube_dim('(0,0,0)'::cube);
+SELECT cube_dim('(0,0,0)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
  cube_dim 
 ----------
         5
@@ -1021,55 +1021,55 @@ SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
 
 -- Test of cube_ll_coord function (retrieves LL coodinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ll_coord 
 ---------------
             -1
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ll_coord 
 ---------------
             -2
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
  cube_ll_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
  cube_ll_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
  cube_ll_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
  cube_ll_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
@@ -1077,55 +1077,55 @@ SELECT cube_ll_coord('(42,137)'::cube, 3);
 
 -- Test of cube_ur_coord function (retrieves UR coodinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
  cube_ur_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
  cube_ur_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
@@ -1133,37 +1133,37 @@ SELECT cube_ur_coord('(42,137)'::cube, 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
+SELECT cube_is_point('(0)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
  cube_is_point 
 ---------------
  f
@@ -1171,121 +1171,121 @@ SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 2);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
  cube_enlarge 
 --------------
  (-2),(2)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 2);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-1, -1),(1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
          cube_enlarge          
 -------------------------------
  (-3, -1, -1, -1),(3, 1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 2);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-4, -3),(3, 8)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
    cube_enlarge   
 ------------------
  (-6, -5),(5, 10)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
   cube_enlarge   
 -----------------
  (-2, -1),(1, 6)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
     cube_enlarge     
 ---------------------
  (-0.5, 1),(-0.5, 4)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
@@ -1293,31 +1293,31 @@ SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
       cube_union      
 ----------------------
  (1, 2, 0),(8, 9, 10)
 (1 row)
 
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
         cube_union         
 ---------------------------
  (1, 2, 0, 0),(4, 2, 0, 0)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
   cube_union   
 ---------------
  (1, 2),(4, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
  cube_union 
 ------------
  (1, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
  cube_union 
 ------------
  (1, 2, 0)
@@ -1325,43 +1325,43 @@ SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
    cube_inter    
 -----------------
  (3, 4),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
   cube_inter   
 ---------------
  (3, 4),(6, 5)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
     cube_inter     
 -------------------
  (13, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
     cube_inter    
 ------------------
  (3, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
  cube_inter 
 ------------
  (10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
  cube_inter 
 ------------
  (1, 2, 3)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
      cube_inter      
 ---------------------
  (5, 6, 3),(1, 2, 3)
@@ -1369,13 +1369,13 @@ SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
  cube_size 
 -----------
         88
 (1 row)
 
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(42,137)'::"cube");
  cube_size 
 -----------
          0
@@ -1383,7 +1383,7 @@ SELECT cube_size('(42,137)'::cube);
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 \copy test_cube from 'data/test_cube.data'
 CREATE INDEX test_cube_ix ON test_cube USING gist (c);
 SELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' ORDER BY c;
diff --git a/contrib/cube/sql/cube.sql b/contrib/cube/sql/cube.sql
index d58974c..da80472 100644
--- a/contrib/cube/sql/cube.sql
+++ b/contrib/cube/sql/cube.sql
@@ -2,141 +2,141 @@
 --  Test cube datatype
 --
 
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 
 --
 -- testing the input and output functions
 --
 
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
-SELECT '-1'::cube AS cube;
-SELECT '1.'::cube AS cube;
-SELECT '-1.'::cube AS cube;
-SELECT '.1'::cube AS cube;
-SELECT '-.1'::cube AS cube;
-SELECT '1.0'::cube AS cube;
-SELECT '-1.0'::cube AS cube;
-SELECT '1e27'::cube AS cube;
-SELECT '-1e27'::cube AS cube;
-SELECT '1.0e27'::cube AS cube;
-SELECT '-1.0e27'::cube AS cube;
-SELECT '1e+27'::cube AS cube;
-SELECT '-1e+27'::cube AS cube;
-SELECT '1.0e+27'::cube AS cube;
-SELECT '-1.0e+27'::cube AS cube;
-SELECT '1e-7'::cube AS cube;
-SELECT '-1e-7'::cube AS cube;
-SELECT '1.0e-7'::cube AS cube;
-SELECT '-1.0e-7'::cube AS cube;
-SELECT '1e-700'::cube AS cube;
-SELECT '-1e-700'::cube AS cube;
-SELECT '1234567890123456'::cube AS cube;
-SELECT '+1234567890123456'::cube AS cube;
-SELECT '-1234567890123456'::cube AS cube;
-SELECT '.1234567890123456'::cube AS cube;
-SELECT '+.1234567890123456'::cube AS cube;
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
+SELECT '-1'::"cube" AS "cube";
+SELECT '1.'::"cube" AS "cube";
+SELECT '-1.'::"cube" AS "cube";
+SELECT '.1'::"cube" AS "cube";
+SELECT '-.1'::"cube" AS "cube";
+SELECT '1.0'::"cube" AS "cube";
+SELECT '-1.0'::"cube" AS "cube";
+SELECT '1e27'::"cube" AS "cube";
+SELECT '-1e27'::"cube" AS "cube";
+SELECT '1.0e27'::"cube" AS "cube";
+SELECT '-1.0e27'::"cube" AS "cube";
+SELECT '1e+27'::"cube" AS "cube";
+SELECT '-1e+27'::"cube" AS "cube";
+SELECT '1.0e+27'::"cube" AS "cube";
+SELECT '-1.0e+27'::"cube" AS "cube";
+SELECT '1e-7'::"cube" AS "cube";
+SELECT '-1e-7'::"cube" AS "cube";
+SELECT '1.0e-7'::"cube" AS "cube";
+SELECT '-1.0e-7'::"cube" AS "cube";
+SELECT '1e-700'::"cube" AS "cube";
+SELECT '-1e-700'::"cube" AS "cube";
+SELECT '1234567890123456'::"cube" AS "cube";
+SELECT '+1234567890123456'::"cube" AS "cube";
+SELECT '-1234567890123456'::"cube" AS "cube";
+SELECT '.1234567890123456'::"cube" AS "cube";
+SELECT '+.1234567890123456'::"cube" AS "cube";
+SELECT '-.1234567890123456'::"cube" AS "cube";
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
-SELECT '(1,2)'::cube AS cube;
-SELECT '1,2,3,4,5'::cube AS cube;
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
+SELECT '(1,2)'::"cube" AS "cube";
+SELECT '1,2,3,4,5'::"cube" AS "cube";
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
-SELECT '(0),(1)'::cube AS cube;
-SELECT '[(0),(0)]'::cube AS cube;
-SELECT '[(0),(1)]'::cube AS cube;
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
+SELECT '(0),(1)'::"cube" AS "cube";
+SELECT '[(0),(0)]'::"cube" AS "cube";
+SELECT '[(0),(1)]'::"cube" AS "cube";
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
-SELECT 'ABC'::cube AS cube;
-SELECT '()'::cube AS cube;
-SELECT '[]'::cube AS cube;
-SELECT '[()]'::cube AS cube;
-SELECT '[(1)]'::cube AS cube;
-SELECT '[(1),]'::cube AS cube;
-SELECT '[(1),2]'::cube AS cube;
-SELECT '[(1),(2),(3)]'::cube AS cube;
-SELECT '1,'::cube AS cube;
-SELECT '1,2,'::cube AS cube;
-SELECT '1,,2'::cube AS cube;
-SELECT '(1,)'::cube AS cube;
-SELECT '(1,2,)'::cube AS cube;
-SELECT '(1,,2)'::cube AS cube;
+SELECT ''::"cube" AS "cube";
+SELECT 'ABC'::"cube" AS "cube";
+SELECT '()'::"cube" AS "cube";
+SELECT '[]'::"cube" AS "cube";
+SELECT '[()]'::"cube" AS "cube";
+SELECT '[(1)]'::"cube" AS "cube";
+SELECT '[(1),]'::"cube" AS "cube";
+SELECT '[(1),2]'::"cube" AS "cube";
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
+SELECT '1,'::"cube" AS "cube";
+SELECT '1,2,'::"cube" AS "cube";
+SELECT '1,,2'::"cube" AS "cube";
+SELECT '(1,)'::"cube" AS "cube";
+SELECT '(1,2,)'::"cube" AS "cube";
+SELECT '(1,,2)'::"cube" AS "cube";
 
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
-SELECT '(1),(2),'::cube AS cube; -- 2
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
-SELECT '(1,2,3)a'::cube AS cube; -- 5
-SELECT '(1,2)('::cube AS cube; -- 5
-SELECT '1,2ab'::cube AS cube; -- 6
-SELECT '1 e7'::cube AS cube; -- 6
-SELECT '1,2a'::cube AS cube; -- 7
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
+SELECT '1,2a'::"cube" AS "cube"; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 
 --
 -- Testing building cubes from float8 values
 --
 
-SELECT cube(0::float8);
-SELECT cube(1::float8);
-SELECT cube(1,2);
-SELECT cube(cube(1,2),3);
-SELECT cube(cube(1,2),3,4);
-SELECT cube(cube(cube(1,2),3,4),5);
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"(0::float8);
+SELECT "cube"(1::float8);
+SELECT "cube"(1,2);
+SELECT "cube"("cube"(1,2),3);
+SELECT "cube"("cube"(1,2),3,4);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
 
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
-SELECT cube(NULL::float[], '{3}'::float[]);
-SELECT cube('{0,1,2}'::float[]);
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
-SELECT cube(1.37); -- cube_f8
-SELECT cube(1.37, 1.37); -- cube_f8_f8
-SELECT cube(cube(1,1), 42); -- cube_c_f8
-SELECT cube(cube(1,2), 42); -- cube_c_f8
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"(1.37); -- cube_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
 
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
 
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 
 --
 -- testing the  operators
@@ -144,190 +144,190 @@ select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
 
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
 
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
-SELECT '1'::cube   < '2'::cube AS bool;
-SELECT '1,1'::cube > '1,2'::cube AS bool;
-SELECT '1,1'::cube < '1,2'::cube AS bool;
-
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
+
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
 
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
 
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
-
-
--- "contains" (the left operand is the cube that entirely encloses the
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
+
+
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
-SELECT cube(NULL);
+SELECT "cube"('(1,1.2)'::text);
+SELECT "cube"(NULL);
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
-SELECT cube_dim('(0,0)'::cube);
-SELECT cube_dim('(0,0,0)'::cube);
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(0)'::"cube");
+SELECT cube_dim('(0,0)'::"cube");
+SELECT cube_dim('(0,0,0)'::"cube");
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
 
 -- Test of cube_ll_coord function (retrieves LL coodinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
-SELECT cube_ll_coord('(42,137)'::cube, 1);
-SELECT cube_ll_coord('(42,137)'::cube, 2);
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
 
 -- Test of cube_ur_coord function (retrieves UR coodinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
-SELECT cube_ur_coord('(42,137)'::cube, 1);
-SELECT cube_ur_coord('(42,137)'::cube, 2);
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
-SELECT cube_is_point('(0,1,2)'::cube);
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0)'::"cube");
+SELECT cube_is_point('(0,1,2)'::"cube");
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
-SELECT cube_enlarge('(0)'::cube, 0, 1);
-SELECT cube_enlarge('(0)'::cube, 0, 2);
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
-SELECT cube_enlarge('(0)'::cube, 1, 0);
-SELECT cube_enlarge('(0)'::cube, 1, 1);
-SELECT cube_enlarge('(0)'::cube, 1, 2);
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
-SELECT cube_enlarge('(0)'::cube, -1, 0);
-SELECT cube_enlarge('(0)'::cube, -1, 1);
-SELECT cube_enlarge('(0)'::cube, -1, 2);
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
+SELECT cube_size('(42,137)'::"cube");
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 
 \copy test_cube from 'data/test_cube.data'
 
diff --git a/contrib/earthdistance/earthdistance--1.0.sql b/contrib/earthdistance/earthdistance--1.0.sql
index 4af9062..ad22f65 100644
--- a/contrib/earthdistance/earthdistance--1.0.sql
+++ b/contrib/earthdistance/earthdistance--1.0.sql
@@ -27,10 +27,10 @@ AS 'SELECT ''6378168''::float8';
 -- and that the point must be very near the surface of the sphere
 -- centered about the origin with the radius of the earth.
 
-CREATE DOMAIN earth AS cube
+CREATE DOMAIN earth AS "cube"
   CONSTRAINT not_point check(cube_is_point(value))
   CONSTRAINT not_3d check(cube_dim(value) <= 3)
-  CONSTRAINT on_surface check(abs(cube_distance(value, '(0)'::cube) /
+  CONSTRAINT on_surface check(abs(cube_distance(value, '(0)'::"cube") /
   earth() - 1) < '10e-7'::float8);
 
 CREATE FUNCTION sec_to_gc(float8)
@@ -49,7 +49,7 @@ CREATE FUNCTION ll_to_earth(float8, float8)
 RETURNS earth
 LANGUAGE SQL
 IMMUTABLE STRICT
-AS 'SELECT cube(cube(cube(earth()*cos(radians($1))*cos(radians($2))),earth()*cos(radians($1))*sin(radians($2))),earth()*sin(radians($1)))::earth';
+AS 'SELECT "cube"("cube"("cube"(earth()*cos(radians($1))*cos(radians($2))),earth()*cos(radians($1))*sin(radians($2))),earth()*sin(radians($1)))::earth';
 
 CREATE FUNCTION latitude(earth)
 RETURNS float8
@@ -70,7 +70,7 @@ IMMUTABLE STRICT
 AS 'SELECT sec_to_gc(cube_distance($1, $2))';
 
 CREATE FUNCTION earth_box(earth, float8)
-RETURNS cube
+RETURNS "cube"
 LANGUAGE SQL
 IMMUTABLE STRICT
 AS 'SELECT cube_enlarge($1, gc_to_sec($2), 3)';
diff --git a/contrib/earthdistance/expected/earthdistance.out b/contrib/earthdistance/expected/earthdistance.out
index 9bd556f..f99276f 100644
--- a/contrib/earthdistance/expected/earthdistance.out
+++ b/contrib/earthdistance/expected/earthdistance.out
@@ -9,7 +9,7 @@
 --
 CREATE EXTENSION earthdistance;  -- fail, must install cube first
 ERROR:  required extension "cube" is not installed
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 CREATE EXTENSION earthdistance;
 --
 -- The radius of the Earth we are using.
@@ -892,7 +892,7 @@ SELECT cube_dim(ll_to_earth(0,0)) <= 3;
  t
 (1 row)
 
-SELECT abs(cube_distance(ll_to_earth(0,0), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(0,0), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
  ?column? 
 ----------
@@ -910,7 +910,7 @@ SELECT cube_dim(ll_to_earth(30,60)) <= 3;
  t
 (1 row)
 
-SELECT abs(cube_distance(ll_to_earth(30,60), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(30,60), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
  ?column? 
 ----------
@@ -928,7 +928,7 @@ SELECT cube_dim(ll_to_earth(60,90)) <= 3;
  t
 (1 row)
 
-SELECT abs(cube_distance(ll_to_earth(60,90), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(60,90), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
  ?column? 
 ----------
@@ -946,7 +946,7 @@ SELECT cube_dim(ll_to_earth(-30,-90)) <= 3;
  t
 (1 row)
 
-SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
  ?column? 
 ----------
@@ -959,35 +959,35 @@ SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::cube) / earth() - 1) <
 -- list what's installed
 \dT
                                               List of data types
- Schema | Name  |                                         Description                                         
---------+-------+---------------------------------------------------------------------------------------------
- public | cube  | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
- public | earth | 
+ Schema |  Name  |                                         Description                                         
+--------+--------+---------------------------------------------------------------------------------------------
+ public | "cube" | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
+ public | earth  | 
 (2 rows)
 
-drop extension cube;  -- fail, earthdistance requires it
+drop extension "cube";  -- fail, earthdistance requires it
 ERROR:  cannot drop extension cube because other objects depend on it
 DETAIL:  extension earthdistance depends on extension cube
 HINT:  Use DROP ... CASCADE to drop the dependent objects too.
 drop extension earthdistance;
-drop type cube;  -- fail, extension cube requires it
-ERROR:  cannot drop type cube because extension cube requires it
+drop type "cube";  -- fail, extension cube requires it
+ERROR:  cannot drop type "cube" because extension cube requires it
 HINT:  You can drop extension cube instead.
 -- list what's installed
 \dT
-                                             List of data types
- Schema | Name |                                         Description                                         
---------+------+---------------------------------------------------------------------------------------------
- public | cube | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
+                                              List of data types
+ Schema |  Name  |                                         Description                                         
+--------+--------+---------------------------------------------------------------------------------------------
+ public | "cube" | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
 (1 row)
 
-create table foo (f1 cube, f2 int);
-drop extension cube;  -- fail, foo.f1 requires it
+create table foo (f1 "cube", f2 int);
+drop extension "cube";  -- fail, foo.f1 requires it
 ERROR:  cannot drop extension cube because other objects depend on it
-DETAIL:  table foo column f1 depends on type cube
+DETAIL:  table foo column f1 depends on type "cube"
 HINT:  Use DROP ... CASCADE to drop the dependent objects too.
 drop table foo;
-drop extension cube;
+drop extension "cube";
 -- list what's installed
 \dT
      List of data types
@@ -1008,7 +1008,7 @@ drop extension cube;
 (0 rows)
 
 create schema c;
-create extension cube with schema c;
+create extension "cube" with schema c;
 -- list what's installed
 \dT public.*
      List of data types
@@ -1029,23 +1029,23 @@ create extension cube with schema c;
 (0 rows)
 
 \dT c.*
-                                              List of data types
- Schema |  Name  |                                         Description                                         
---------+--------+---------------------------------------------------------------------------------------------
- c      | c.cube | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
+                                               List of data types
+ Schema |   Name   |                                         Description                                         
+--------+----------+---------------------------------------------------------------------------------------------
+ c      | c."cube" | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
 (1 row)
 
-create table foo (f1 c.cube, f2 int);
-drop extension cube;  -- fail, foo.f1 requires it
+create table foo (f1 c."cube", f2 int);
+drop extension "cube";  -- fail, foo.f1 requires it
 ERROR:  cannot drop extension cube because other objects depend on it
-DETAIL:  table foo column f1 depends on type c.cube
+DETAIL:  table foo column f1 depends on type c."cube"
 HINT:  Use DROP ... CASCADE to drop the dependent objects too.
 drop schema c;  -- fail, cube requires it
 ERROR:  cannot drop schema c because other objects depend on it
 DETAIL:  extension cube depends on schema c
-table foo column f1 depends on type c.cube
+table foo column f1 depends on type c."cube"
 HINT:  Use DROP ... CASCADE to drop the dependent objects too.
-drop extension cube cascade;
+drop extension "cube" cascade;
 NOTICE:  drop cascades to table foo column f1
 \d foo
       Table "public.foo"
diff --git a/contrib/earthdistance/sql/earthdistance.sql b/contrib/earthdistance/sql/earthdistance.sql
index 8604502..35dd9b8 100644
--- a/contrib/earthdistance/sql/earthdistance.sql
+++ b/contrib/earthdistance/sql/earthdistance.sql
@@ -9,7 +9,7 @@
 --
 
 CREATE EXTENSION earthdistance;  -- fail, must install cube first
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 CREATE EXTENSION earthdistance;
 
 --
@@ -284,19 +284,19 @@ SELECT earth_box(ll_to_earth(90,180),
 
 SELECT is_point(ll_to_earth(0,0));
 SELECT cube_dim(ll_to_earth(0,0)) <= 3;
-SELECT abs(cube_distance(ll_to_earth(0,0), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(0,0), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
 SELECT is_point(ll_to_earth(30,60));
 SELECT cube_dim(ll_to_earth(30,60)) <= 3;
-SELECT abs(cube_distance(ll_to_earth(30,60), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(30,60), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
 SELECT is_point(ll_to_earth(60,90));
 SELECT cube_dim(ll_to_earth(60,90)) <= 3;
-SELECT abs(cube_distance(ll_to_earth(60,90), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(60,90), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
 SELECT is_point(ll_to_earth(-30,-90));
 SELECT cube_dim(ll_to_earth(-30,-90)) <= 3;
-SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
 
 --
@@ -306,22 +306,22 @@ SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::cube) / earth() - 1) <
 -- list what's installed
 \dT
 
-drop extension cube;  -- fail, earthdistance requires it
+drop extension "cube";  -- fail, earthdistance requires it
 
 drop extension earthdistance;
 
-drop type cube;  -- fail, extension cube requires it
+drop type "cube";  -- fail, extension cube requires it
 
 -- list what's installed
 \dT
 
-create table foo (f1 cube, f2 int);
+create table foo (f1 "cube", f2 int);
 
-drop extension cube;  -- fail, foo.f1 requires it
+drop extension "cube";  -- fail, foo.f1 requires it
 
 drop table foo;
 
-drop extension cube;
+drop extension "cube";
 
 -- list what's installed
 \dT
@@ -330,7 +330,7 @@ drop extension cube;
 
 create schema c;
 
-create extension cube with schema c;
+create extension "cube" with schema c;
 
 -- list what's installed
 \dT public.*
@@ -338,13 +338,13 @@ create extension cube with schema c;
 \do public.*
 \dT c.*
 
-create table foo (f1 c.cube, f2 int);
+create table foo (f1 c."cube", f2 int);
 
-drop extension cube;  -- fail, foo.f1 requires it
+drop extension "cube";  -- fail, foo.f1 requires it
 
 drop schema c;  -- fail, cube requires it
 
-drop extension cube cascade;
+drop extension "cube" cascade;
 
 \d foo
 
gsp-u.patch (text/x-patch)
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 8f133b0..adf789e 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -663,6 +663,11 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
  * and for NULL so that it can follow b_expr in ColQualList without creating
  * postfix-operator problems.
  *
+ * To support CUBE and ROLLUP in GROUP BY without reserving them, we give them
+ * an explicit priority lower than '(', so that a rule with CUBE '(' will shift
+ * rather than reducing a conflicting rule that takes CUBE as a function name.
+ * Using the same precedence as IDENT seems right for the reasons given above.
+ *
  * The frame_bound productions UNBOUNDED PRECEDING and UNBOUNDED FOLLOWING
  * are even messier: since UNBOUNDED is an unreserved keyword (per spec!),
  * there is no principled way to distinguish these from the productions
@@ -673,7 +678,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
  * blame any funny behavior of UNBOUNDED on the SQL standard, though.
  */
 %nonassoc	UNBOUNDED		/* ideally should have same precedence as IDENT */
-%nonassoc	IDENT NULL_P PARTITION RANGE ROWS PRECEDING FOLLOWING
+%nonassoc	IDENT NULL_P PARTITION RANGE ROWS PRECEDING FOLLOWING CUBE ROLLUP
 %left		Op OPERATOR		/* multi-character ops and user-defined operators */
 %nonassoc	NOTNULL
 %nonassoc	ISNULL
@@ -9891,6 +9896,12 @@ empty_grouping_set:
 				}
 		;
 
+/*
+ * These hacks rely on setting precedence of CUBE and ROLLUP below that of '(',
+ * so that they shift in these rules rather than reducing the conflicting
+ * unreserved_keyword rule.
+ */
+
 rollup_clause:
 			ROLLUP '(' expr_list ')'
 				{
@@ -13012,6 +13023,7 @@ unreserved_keyword:
 			| COPY
 			| COST
 			| CSV
+			| CUBE
 			| CURRENT_P
 			| CURSOR
 			| CYCLE
@@ -13158,6 +13170,7 @@ unreserved_keyword:
 			| REVOKE
 			| ROLE
 			| ROLLBACK
+			| ROLLUP
 			| ROWS
 			| RULE
 			| SAVEPOINT
@@ -13249,7 +13262,6 @@ col_name_keyword:
 			| CHAR_P
 			| CHARACTER
 			| COALESCE
-			| CUBE
 			| DEC
 			| DECIMAL_P
 			| EXISTS
@@ -13272,7 +13284,6 @@ col_name_keyword:
 			| POSITION
 			| PRECISION
 			| REAL
-			| ROLLUP
 			| ROW
 			| SETOF
 			| SMALLINT
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 5344736..e170964 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -4888,12 +4888,13 @@ get_rule_sortgroupclause(Index ref, List *tlist, bool force_colno,
 	expr = (Node *) tle->expr;
 
 	/*
-	 * Use column-number form if requested by caller.  Otherwise, if
-	 * expression is a constant, force it to be dumped with an explicit cast
-	 * as decoration --- this is because a simple integer constant is
-	 * ambiguous (and will be misinterpreted by findTargetlistEntry()) if we
-	 * dump it without any decoration.  Otherwise, just dump the expression
-	 * normally.
+	 * Use column-number form if requested by caller.  Otherwise, if expression
+	 * is a constant, force it to be dumped with an explicit cast as decoration
+	 * --- this is because a simple integer constant is ambiguous (and will be
+	 * misinterpreted by findTargetlistEntry()) if we dump it without any
+	 * decoration.  If it's anything more complex than a simple Var, then force
+	 * extra parens around it, to ensure it can't be misinterpreted as a cube()
+	 * or rollup() construct.
 	 */
 	if (force_colno)
 	{
@@ -4902,8 +4903,27 @@ get_rule_sortgroupclause(Index ref, List *tlist, bool force_colno,
 	}
 	else if (expr && IsA(expr, Const))
 		get_const_expr((Const *) expr, context, 1);
+	else if (!expr || IsA(expr, Var))
+		get_rule_expr(expr, context, true);
 	else
+	{
+		/*
+		 * We must force parens for function-like expressions even if
+		 * PRETTY_PAREN is off, since those are the ones in danger of
+		 * misparsing.  For other expressions we need to force them
+		 * only if PRETTY_PAREN is on, since otherwise get_rule_expr
+		 * will output the parens itself.  (Either way, the parens
+		 * cannot be omitted from the output.)
+		 */
+		bool	need_paren = (PRETTY_PAREN(context)
+							  || IsA(expr, FuncExpr)
+							  || IsA(expr, Aggref)
+							  || IsA(expr, WindowFunc));
+		if (need_paren)
+			appendStringInfoString(context->buf, "(");
 		get_rule_expr(expr, context, true);
+		if (need_paren)
+			appendStringInfoString(context->buf, ")");
+	}
 
 	return expr;
 }
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index e38b6bc..5ea1067 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -98,7 +98,7 @@ PG_KEYWORD("cost", COST, UNRESERVED_KEYWORD)
 PG_KEYWORD("create", CREATE, RESERVED_KEYWORD)
 PG_KEYWORD("cross", CROSS, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("csv", CSV, UNRESERVED_KEYWORD)
-PG_KEYWORD("cube", CUBE, COL_NAME_KEYWORD)
+PG_KEYWORD("cube", CUBE, UNRESERVED_KEYWORD)
 PG_KEYWORD("current", CURRENT_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("current_catalog", CURRENT_CATALOG, RESERVED_KEYWORD)
 PG_KEYWORD("current_date", CURRENT_DATE, RESERVED_KEYWORD)
@@ -324,7 +324,7 @@ PG_KEYWORD("revoke", REVOKE, UNRESERVED_KEYWORD)
 PG_KEYWORD("right", RIGHT, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("role", ROLE, UNRESERVED_KEYWORD)
 PG_KEYWORD("rollback", ROLLBACK, UNRESERVED_KEYWORD)
-PG_KEYWORD("rollup", ROLLUP, COL_NAME_KEYWORD)
+PG_KEYWORD("rollup", ROLLUP, UNRESERVED_KEYWORD)
 PG_KEYWORD("row", ROW, COL_NAME_KEYWORD)
 PG_KEYWORD("rows", ROWS, UNRESERVED_KEYWORD)
 PG_KEYWORD("rule", RULE, UNRESERVED_KEYWORD)
#67Marti Raudsepp
marti@juffo.org
In reply to: Andrew Gierth (#66)
Re: Final Patch for GROUPING SETS

On Fri, Sep 12, 2014 at 9:41 PM, Andrew Gierth
<andrew@tao11.riddles.org.uk> wrote:

gsp1.patch - phase 1 code patch (full syntax, limited functionality)
gsp2.patch - phase 2 code patch (adds full functionality using the
new chained aggregate mechanism)

I gave these a try by converting my current CTE-based queries into
CUBEs and it works as expected; query time is cut in half and lines of
code is 1/4 of original. Thanks!

I only have a few trivial observations; if I'm getting too nitpicky
let me know. :)

----
Since you were asking for feedback on the EXPLAIN output on IRC, I'd
weigh in and say that having the groups on separate lines would be
significantly more readable, if that can be made to work with the
EXPLAIN printer without too much trouble. It took me a while to
understand what was going on in my queries because of longer table and
column names and wrapping; the comma separators between groups are
hard to distinguish.

So instead of:
GroupAggregate
Output: four, ten, hundred, count(*)
Grouping Sets: (onek.four, onek.ten, onek.hundred), (onek.four,
onek.ten), (onek.four), ()

Perhaps print:
Grouping Sets: (onek.four, onek.ten, onek.hundred)
(onek.four, onek.ten)
(onek.four)
()

Or maybe:
Grouping Set: (onek.four, onek.ten, onek.hundred)
Grouping Set: (onek.four, onek.ten)
Grouping Set: (onek.four)
Grouping Set: ()

Both seem to work with the explain.depesz.com parser, although the 1st
won't be aligned as nicely.

----
Do you think it would be reasonable to normalize single-set grouping
sets into a normal GROUP BY? Such queries would be capable of using
HashAggregate, but the current code doesn't allow that. For example:

set enable_sort=off;
explain select two, count(*) from onek group by grouping sets (two);
Could be equivalent to:
explain select two, count(*) from onek group by two;
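
The rewrite being suggested here can be sketched as a trivial normalization rule. This is an illustrative Python sketch with a hypothetical representation of a parsed GROUP BY clause, not anything from the patch: a GROUPING SETS clause naming exactly one set is equivalent to a plain GROUP BY over that set's columns, which would keep HashAggregate available to the planner.

```python
def normalize_group_by(grouping_sets):
    """Collapse GROUP BY GROUPING SETS ((...)) with a single set into a
    plain GROUP BY; leave multi-set clauses alone.  The dict shape here
    is a made-up stand-in for a parse-tree node."""
    if len(grouping_sets) == 1:
        return {"kind": "plain", "columns": list(grouping_sets[0])}
    return {"kind": "grouping sets", "sets": grouping_sets}

# GROUP BY GROUPING SETS (two)  ==  GROUP BY two
single = normalize_group_by([["two"]])
# GROUP BY GROUPING SETS ((two), ())  stays a grouping-sets clause
multi = normalize_group_by([["two"], []])
```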

----
I'd expect GROUP BY () to be fully equivalent to having no GROUP BY
clause, but there's a difference in explain output. The former
displays "Grouping Sets: ()" which is odd, since none of the grouping
set keywords were used.

# explain select count(*) from onek group by ();
Aggregate (cost=77.78..77.79 rows=1 width=0)
Grouping Sets: ()
-> Index Only Scan using onek_stringu1 on onek (cost=0.28..75.28
rows=1000 width=0)

Regards,
Marti

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#68Josh Berkus
josh@agliodbs.com
In reply to: Atri Sharma (#1)
Re: Final Patch for GROUPING SETS

On 09/17/2014 03:02 PM, Marti Raudsepp wrote:

So instead of:
GroupAggregate
Output: four, ten, hundred, count(*)
Grouping Sets: (onek.four, onek.ten, onek.hundred), (onek.four,
onek.ten), (onek.four), ()

Perhaps print:
Grouping Sets: (onek.four, onek.ten, onek.hundred)
(onek.four, onek.ten)
(onek.four)
()

So:

Grouping Sets: [
[ onek.four, onek.ten, onek.hundred ],
[ onek.four, onek.ten ],
[ onek.four ],
[]
]

.. in JSON?

Seems to me that we need a better way to display the grand total
grouping set.

Or maybe:
Grouping Set: (onek.four, onek.ten, onek.hundred)
Grouping Set: (onek.four, onek.ten)
Grouping Set: (onek.four)
Grouping Set: ()

The latter won't work with JSON and YAML output.

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


#69Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Marti Raudsepp (#67)
Re: Final Patch for GROUPING SETS

"Marti" == Marti Raudsepp <marti@juffo.org> writes:

Marti> Since you were asking for feedback on the EXPLAIN output on
Marti> IRC, I'd weigh in and say that having the groups on separate
Marti> lines would be significantly more readable.

I revisited the explain output a bit and have come up with these
(surrounding material trimmed for clarity):

(text format)

GroupAggregate (cost=1122.39..1197.48 rows=9 width=8)
Group Key: two, four
Group Key: two
Group Key: ()
-> ...

(xml format)

<Plan>
<Node-Type>Aggregate</Node-Type>
<Strategy>Sorted</Strategy>
<Startup-Cost>1122.39</Startup-Cost>
<Total-Cost>1197.48</Total-Cost>
<Plan-Rows>9</Plan-Rows>
<Plan-Width>8</Plan-Width>
<Grouping-Sets>
<Group-Key>
<Item>two</Item>
<Item>four</Item>
</Group-Key>
<Group-Key>
<Item>two</Item>
</Group-Key>
<Group-Key>
</Group-Key>
</Grouping-Sets>
<Plans>...

(json format)

"Plan": {
"Node Type": "Aggregate",
"Strategy": "Sorted",
"Startup Cost": 1122.39,
"Total Cost": 1197.48,
"Plan Rows": 9,
"Plan Width": 8,
"Grouping Sets": [
["two", "four"],
["two"],
[]
],
"Plans": [...]

(yaml format)

- Plan:
Node Type: "Aggregate"
Strategy: "Sorted"
Startup Cost: 1122.39
Total Cost: 1197.48
Plan Rows: 9
Plan Width: 8
Grouping Sets:
- - "two"
- "four"
- - "two"
-
Plans: ...

Opinions? Any improvements?

I'm not entirely happy with what I had to do with the json and
(especially) the YAML output code in order to make this work. There
seemed no obvious way to generate nested unlabelled structures in
either using the existing Explain* functions, and for the YAML case
the best output structure to produce was entirely non-obvious (and
trying to read the YAML spec made my head explode).

Marti> Do you think it would be reasonable to normalize single-set
Marti> grouping sets into a normal GROUP BY?

It's certainly possible, though it would seem somewhat odd to write
queries that way. Either the parser or the planner could do that;
would you want the original syntax preserved in views, or wouldn't
that matter?

Marti> I'd expect GROUP BY () to be fully equivalent to having no
Marti> GROUP BY clause, but there's a difference in explain
Marti> output. The former displays "Grouping Sets: ()" which is odd,
Marti> since none of the grouping set keywords were used.

That's an implementation artifact, in the sense that we preserve the
fact that GROUP BY () was used by using an empty grouping set. Is it
a problem, really, that it shows up that way in explain?

--
Andrew (irc:RhodiumToad)


#70Marti Raudsepp
marti@juffo.org
In reply to: Andrew Gierth (#69)
Re: Final Patch for GROUPING SETS

On Fri, Sep 19, 2014 at 4:45 AM, Andrew Gierth
<andrew@tao11.riddles.org.uk> wrote:

GroupAggregate (cost=1122.39..1197.48 rows=9 width=8)
Group Key: two, four
Group Key: two
Group Key: ()

"Grouping Sets": [
["two", "four"],
["two"],
[]

+1 looks good to me.

(yaml format)
Grouping Sets:
- - "two"
- "four"
- - "two"
-

Now this is weird. But is anyone actually using YAML output format, or
was it implemented simply "because we can"?

Marti> Do you think it would be reasonable to normalize single-set
Marti> grouping sets into a normal GROUP BY?
It's certainly possible, though it would seem somewhat odd to write
queries that way.

The reason I bring this up is that queries are frequently dynamically
generated by programs. Coders are unlikely to special-case SQL
generation when there's just a single grouping set. And that's the
power of relational databases: the optimization work is done in the
database pretty much transparently to the coder (when it works, that
is).

would you want the original syntax preserved in views

Doesn't matter IMO.

Marti> I'd expect GROUP BY () to be fully equivalent to having no
Marti> GROUP BY clause, but there's a difference in explain
Marti> output. The former displays "Grouping Sets: ()" which is odd,
Marti> since none of the grouping set keywords were used.
That's an implementation artifact, in the sense that we preserve the
fact that GROUP BY () was used by using an empty grouping set. Is it
a problem, really, that it shows up that way in explain?

No, not really a problem. :)

Regards,
Marti


#71Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Marti Raudsepp (#70)
Re: Final Patch for GROUPING SETS

"Marti" == Marti Raudsepp <marti@juffo.org> writes:

(yaml format)
Grouping Sets:
- - "two"
- "four"
- - "two"
-

Marti> Now this is weird.

You're telling me. Also, feeding it to an online yaml-to-json
converter gives the result as [["two","four"],["two"],null] which is
not quite the same as the json version. An alternative would be:

Grouping Sets:
- - "two"
- "four"
- - "two"
- []

or

Grouping Sets:
-
- "two"
- "four"
-
- "two"
- []

though I haven't managed to get that second one to work yet.
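
The discrepancy above is concrete once the output is parsed: the bare `-` entry deserializes to null, while `- []` preserves the empty grouping set. A quick sketch (Python, standard `json` module only, comparing JSON equivalents of the two serializations):

```python
import json

# JSON equivalent of the YAML output that uses a bare "-" for the empty
# grouping set, versus the actual JSON-format output, which uses [].
yaml_bare_dash = json.loads('[["two", "four"], ["two"], null]')
json_output = json.loads('[["two", "four"], ["two"], []]')

# The structures differ: the empty grouping set degrades to null.
assert yaml_bare_dash != json_output
assert yaml_bare_dash[2] is None
assert json_output[2] == []
```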

Marti> But is anyone actually using YAML output format, or was it
Marti> implemented simply "because we can"?

Until someone decides to dike it out, I think we are obligated to make
it produce something resembling correct output.

Marti> The reason I bring this up is that queries are frequently
Marti> dynamically generated by programs.

Good point.

would you want the original syntax preserved in views

Marti> Doesn't matter IMO.

I think it's fairly consistent for the parser to do this, since we do
a number of other normalization steps there (removing excess nesting
and so on). This turns out to be quite trivial.

--
Andrew (irc:RhodiumToad)


#72Andres Freund
andres@2ndquadrant.com
In reply to: Andrew Gierth (#71)
Re: Final Patch for GROUPING SETS

On 2014-09-19 16:35:52 +0100, Andrew Gierth wrote:

Marti> But is anyone actually using YAML output format, or was it
Marti> implemented simply "because we can"?

Until someone decides to dike it out, I think we are obligated to make
it produce something resembling correct output.

I vote for ripping it out. There really isn't any justification for it
and it broke more than once.

Greg: Did you actually ever end up using the yaml output?

Greetings,

Andres Freund

--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


#73Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Andrew Gierth (#71)
Re: Final Patch for GROUPING SETS

"Andrew" == Andrew Gierth <andrew@tao11.riddles.org.uk> writes:

Andrew> You're telling me. Also, feeding it to an online yaml-to-json
Andrew> converter gives the result as [["two","four"],["two"],null]
Andrew> which is not quite the same as the json version. An
Andrew> alternative would be:

Oh, another YAML alternative would be:

Grouping Sets:
- ["two","four"]
- ["two"]
- []

Would that be better? (It's not consistent with other YAML outputs like
sort/group keys, but it's equally legal as far as I can tell and seems
more readable.)

--
Andrew (irc:RhodiumToad)


#74Petr Jelinek
petr@2ndquadrant.com
In reply to: Andres Freund (#72)
Re: Final Patch for GROUPING SETS

On 19/09/14 17:52, Andres Freund wrote:

On 2014-09-19 16:35:52 +0100, Andrew Gierth wrote:

Marti> But is anyone actually using YAML output format, or was it
Marti> implemented simply "because we can"?

Until someone decides to dike it out, I think we are obligated to make
it produce something resembling correct output.

I vote for ripping it out. There really isn't any justification for it
and it broke more than once.

Even though I really like YAML I say +1, mainly because any YAML 1.2
parser should be able to parse JSON output without problem...

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


#75Josh Berkus
josh@agliodbs.com
In reply to: Andrew Gierth (#54)
Re: Final Patch for GROUPING SETS

On 09/19/2014 08:52 AM, Andres Freund wrote:

Until someone decides to dike it out, I think we are obligated to make
it produce something resembling correct output.

I vote for ripping it out. There really isn't any justification for it
and it broke more than once.

(a) I personally use it all the time to produce human-readable output,
sometimes also working via markdown. It's easier to read than the
"standard format" or JSON, especially when combined with grep or other
selective filtering. Note that this use would not at all preclude
having the YAML output look "weird" as long as it was readable.

(b) If we're going to discuss ripping out YAML format, please let's do
that as a *separate* patch and discussion, and not as a side effect of
Grouping Sets. Otherwise this will be one of those things where people
pitch a fit during beta because the people who care about YAML aren't
necessarily reading this thread.

On 09/19/2014 08:52 AM, Andrew Gierth wrote:

Oh, another YAML alternative would be:

Grouping Sets:
- ["two","four"]
- ["two"]
- []

Would that be better? (It's not consistent with other YAML outputs like
sort/group keys, but it's equally legal as far as I can tell and seems
more readable.)

That works for me.

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


#76Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Josh Berkus (#75)
Re: Final Patch for GROUPING SETS

"Josh" == Josh Berkus <josh@agliodbs.com> writes:

Josh> (b) If we're going to discuss ripping out YAML format, please
Josh> let's do that as a *separate* patch and discussion,

+infinity

Grouping Sets:
- ["two","four"]
- ["two"]
- []

Would that be better? (It's not consistent with other YAML outputs
like sort/group keys, but it's equally legal as far as I can tell
and seems more readable.)

Josh> That works for me.

I prefer that one to any of the others I've come up with, so unless anyone
has a major objection, I'll go with it.

--
Andrew (irc:RhodiumToad)


#77Heikki Linnakangas
hlinnakangas@vmware.com
In reply to: Andrew Gierth (#76)
Re: Final Patch for GROUPING SETS

There's been a lot of discussion and I haven't followed it in detail.
Andrew, there were some open questions, but have you gotten enough
feedback so that you know what to do next? I'm trying to get this
commitfest to an end, and this is still in "Needs Review" state...

- Heikki


#78Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Heikki Linnakangas (#77)
Re: Final Patch for GROUPING SETS

"Heikki" == Heikki Linnakangas <hlinnakangas@vmware.com> writes:

Heikki> There's been a lot of discussion and I haven't followed it in
Heikki> detail. Andrew, there were some open questions, but have you
Heikki> gotten enough feedback so that you know what to do next?

I was holding off on posting a recut patch with the latest EXPLAIN
formatting changes (which are basically cosmetic) until it became
clear whether RLS was likely to be reverted or kept (we have a tiny
but irritating conflict with it, in the regression test schedule file
where we both add to the same list of tests).

Other than that there is nothing for Atri and me to do next but wait
on a proper review. The feedback and discussion has been almost all
about cosmetic details; the only actual issues found have been a
trivial omission from pg_stat_statements, and a slightly suboptimal
planning of sort steps, both long since fixed.

What we have not had:

- anything more than a superficial review

- any feedback over the acceptability of our chained-sorts approach
for doing aggregations with differing sort orders

- any decision about the question of reserved words and/or possibly
renaming contrib/cube (and what new name to use if so)

--
Andrew (irc:RhodiumToad)


#79Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Andrew Gierth (#78)
5 attachment(s)
Re: Final Patch for GROUPING SETS

"Andrew" == Andrew Gierth <andrew@tao11.riddles.org.uk> writes:

Andrew> I was holding off on posting a recut patch with the latest
Andrew> EXPLAIN formatting changes (which are basically cosmetic)
Andrew> until it became clear whether RLS was likely to be reverted
Andrew> or kept (we have a tiny but irritating conflict with it, in
Andrew> the regression test schedule file where we both add to the
Andrew> same list of tests).

And here is that recut patch set.

Changes since last posting (other than conflict removal):

- gsp1.patch: clearer EXPLAIN output as per discussion

Recut patches:

gsp1.patch - phase 1 code patch (full syntax, limited functionality)
gsp2.patch - phase 2 code patch (adds full functionality using the
new chained aggregate mechanism)
gsp-doc.patch - docs
gsp-contrib.patch - quote "cube" in contrib/cube and contrib/earthdistance,
intended primarily for testing pending a decision on
renaming contrib/cube or unreserving keywords
gsp-u.patch - proposed method to unreserve CUBE and ROLLUP

(the contrib patch is not necessary if the -u patch is used; the
contrib/pg_stat_statements fixes are in the phase1 patch)

--
Andrew (irc:RhodiumToad)

Attachments:

gsp1.patch (text/x-patch)
diff --git a/contrib/pg_stat_statements/pg_stat_statements.c b/contrib/pg_stat_statements/pg_stat_statements.c
index 799242b..9419656 100644
--- a/contrib/pg_stat_statements/pg_stat_statements.c
+++ b/contrib/pg_stat_statements/pg_stat_statements.c
@@ -2200,6 +2200,7 @@ JumbleQuery(pgssJumbleState *jstate, Query *query)
 	JumbleExpr(jstate, (Node *) query->targetList);
 	JumbleExpr(jstate, (Node *) query->returningList);
 	JumbleExpr(jstate, (Node *) query->groupClause);
+	JumbleExpr(jstate, (Node *) query->groupingSets);
 	JumbleExpr(jstate, query->havingQual);
 	JumbleExpr(jstate, (Node *) query->windowClause);
 	JumbleExpr(jstate, (Node *) query->distinctClause);
@@ -2655,6 +2656,28 @@ JumbleExpr(pgssJumbleState *jstate, Node *node)
 				JumbleExpr(jstate, rtfunc->funcexpr);
 			}
 			break;
+		case T_GroupingSet:
+			{
+				GroupingSet *gsnode = (GroupingSet *) node;
+
+				JumbleExpr(jstate, (Node *) gsnode->content);
+			}
+			break;
+		case T_Grouping:
+			{
+				Grouping *grpnode = (Grouping *) node;
+
+				JumbleExpr(jstate, (Node *) grpnode->refs);
+			}
+			break;
+		case T_IntList:
+			{
+				foreach(temp, (List *) node)
+				{
+					APP_JUMB(lfirst_int(temp));
+				}
+			}
+			break;
 		default:
 			/* Only a warning, since we can stumble along anyway */
 			elog(WARNING, "unrecognized node type: %d",
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 781a736..0276f45 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -78,6 +78,9 @@ static void show_merge_append_keys(MergeAppendState *mstate, List *ancestors,
 					   ExplainState *es);
 static void show_agg_keys(AggState *astate, List *ancestors,
 			  ExplainState *es);
+static void show_grouping_set_keys(PlanState *planstate, const char *qlabel,
+				int nkeys, AttrNumber *keycols, List *gsets,
+				List *ancestors, ExplainState *es);
 static void show_group_keys(GroupState *gstate, List *ancestors,
 				ExplainState *es);
 static void show_sort_group_keys(PlanState *planstate, const char *qlabel,
@@ -1778,17 +1781,76 @@ show_agg_keys(AggState *astate, List *ancestors,
 {
 	Agg		   *plan = (Agg *) astate->ss.ps.plan;
 
-	if (plan->numCols > 0)
+	if (plan->numCols > 0 || plan->groupingSets)
 	{
 		/* The key columns refer to the tlist of the child plan */
 		ancestors = lcons(astate, ancestors);
-		show_sort_group_keys(outerPlanState(astate), "Group Key",
-							 plan->numCols, plan->grpColIdx,
-							 ancestors, es);
+		if (plan->groupingSets)
+			show_grouping_set_keys(outerPlanState(astate), "Grouping Sets",
+								   plan->numCols, plan->grpColIdx,
+								   plan->groupingSets,
+								   ancestors, es);
+		else
+			show_sort_group_keys(outerPlanState(astate), "Group Key",
+								 plan->numCols, plan->grpColIdx,
+								 ancestors, es);
 		ancestors = list_delete_first(ancestors);
 	}
 }
 
+static void
+show_grouping_set_keys(PlanState *planstate, const char *qlabel,
+					   int nkeys, AttrNumber *keycols, List *gsets,
+					   List *ancestors, ExplainState *es)
+{
+	Plan	   *plan = planstate->plan;
+	List	   *context;
+	bool		useprefix;
+	char	   *exprstr;
+	ListCell   *lc;
+
+	if (gsets == NIL)
+		return;
+
+	/* Set up deparsing context */
+	context = deparse_context_for_planstate((Node *) planstate,
+											ancestors,
+											es->rtable,
+											es->rtable_names);
+	useprefix = (list_length(es->rtable) > 1 || es->verbose);
+
+	ExplainOpenGroup("Grouping Sets", "Grouping Sets", false, es);
+
+	foreach(lc, gsets)
+	{
+		List	   *result = NIL;
+		ListCell   *lc2;
+
+		foreach(lc2, (List *) lfirst(lc))
+		{
+			Index		i = lfirst_int(lc2);
+			AttrNumber	keyresno = keycols[i];
+			TargetEntry *target = get_tle_by_resno(plan->targetlist,
+												   keyresno);
+
+			if (!target)
+				elog(ERROR, "no tlist entry for key %d", keyresno);
+			/* Deparse the expression, showing any top-level cast */
+			exprstr = deparse_expression((Node *) target->expr, context,
+										 useprefix, true);
+
+			result = lappend(result, exprstr);
+		}
+
+		if (!result && es->format == EXPLAIN_FORMAT_TEXT)
+			ExplainPropertyText("Group Key", "()", es);
+		else
+			ExplainPropertyListNested("Group Key", result, es);
+	}
+
+	ExplainCloseGroup("Grouping Sets", "Grouping Sets", false, es);
+}
+
 /*
  * Show the grouping keys for a Group node.
  */
@@ -2335,6 +2397,52 @@ ExplainPropertyList(const char *qlabel, List *data, ExplainState *es)
 }
 
 /*
+ * Explain a property that takes the form of a list of unlabeled items within
+ * another list.  "data" is a list of C strings.
+ */
+void
+ExplainPropertyListNested(const char *qlabel, List *data, ExplainState *es)
+{
+	ListCell   *lc;
+	bool		first = true;
+
+	switch (es->format)
+	{
+		case EXPLAIN_FORMAT_TEXT:
+		case EXPLAIN_FORMAT_XML:
+			ExplainPropertyList(qlabel, data, es);
+			return;
+
+		case EXPLAIN_FORMAT_JSON:
+			ExplainJSONLineEnding(es);
+			appendStringInfoSpaces(es->str, es->indent * 2);
+			appendStringInfoChar(es->str, '[');
+			foreach(lc, data)
+			{
+				if (!first)
+					appendStringInfoString(es->str, ", ");
+				escape_json(es->str, (const char *) lfirst(lc));
+				first = false;
+			}
+			appendStringInfoChar(es->str, ']');
+			break;
+
+		case EXPLAIN_FORMAT_YAML:
+			ExplainYAMLLineStarting(es);
+			appendStringInfoString(es->str, "- [");
+			foreach(lc, data)
+			{
+				if (!first)
+					appendStringInfoString(es->str, ", ");
+				escape_yaml(es->str, (const char *) lfirst(lc));
+				first = false;
+			}
+			appendStringInfoChar(es->str, ']');
+			break;
+	}
+}
+
+/*
  * Explain a simple property.
  *
  * If "numeric" is true, the value is a number (or other value that
diff --git a/src/backend/executor/execQual.c b/src/backend/executor/execQual.c
index 7cfa63f..5fb61b0 100644
--- a/src/backend/executor/execQual.c
+++ b/src/backend/executor/execQual.c
@@ -74,6 +74,8 @@ static Datum ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
 				  bool *isNull, ExprDoneCond *isDone);
 static Datum ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
 					  bool *isNull, ExprDoneCond *isDone);
+static Datum ExecEvalScalarGroupedVarFast(ExprState *exprstate, ExprContext *econtext,
+					  bool *isNull, ExprDoneCond *isDone);
 static Datum ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate,
 					ExprContext *econtext,
 					bool *isNull, ExprDoneCond *isDone);
@@ -181,6 +183,8 @@ static Datum ExecEvalArrayCoerceExpr(ArrayCoerceExprState *astate,
 						bool *isNull, ExprDoneCond *isDone);
 static Datum ExecEvalCurrentOfExpr(ExprState *exprstate, ExprContext *econtext,
 					  bool *isNull, ExprDoneCond *isDone);
+static Datum ExecEvalGroupingExpr(GroupingState *gstate, ExprContext *econtext,
+								  bool *isNull, ExprDoneCond *isDone);
 
 
 /* ----------------------------------------------------------------
@@ -568,6 +572,8 @@ ExecEvalWindowFunc(WindowFuncExprState *wfunc, ExprContext *econtext,
  * Note: ExecEvalScalarVar is executed only the first time through in a given
  * plan; it changes the ExprState's function pointer to pass control directly
  * to ExecEvalScalarVarFast after making one-time checks.
+ *
+ * We share this code with GroupedVar for simplicity.
  * ----------------------------------------------------------------
  */
 static Datum
@@ -645,8 +651,24 @@ ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
 		}
 	}
 
-	/* Skip the checking on future executions of node */
-	exprstate->evalfunc = ExecEvalScalarVarFast;
+	if (IsA(variable, GroupedVar))
+	{
+		Assert(variable->varno == OUTER_VAR);
+
+		/* Skip the checking on future executions of node */
+		exprstate->evalfunc = ExecEvalScalarGroupedVarFast;
+
+		if (!bms_is_member(attnum, econtext->grouped_cols))
+		{
+			*isNull = true;
+			return (Datum) 0;
+		}
+	}
+	else
+	{
+		/* Skip the checking on future executions of node */
+		exprstate->evalfunc = ExecEvalScalarVarFast;
+	}
 
 	/* Fetch the value from the slot */
 	return slot_getattr(slot, attnum, isNull);
@@ -694,6 +716,31 @@ ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
 	return slot_getattr(slot, attnum, isNull);
 }
 
+static Datum
+ExecEvalScalarGroupedVarFast(ExprState *exprstate, ExprContext *econtext,
+							 bool *isNull, ExprDoneCond *isDone)
+{
+	GroupedVar *variable = (GroupedVar *) exprstate->expr;
+	TupleTableSlot *slot;
+	AttrNumber	attnum;
+
+	if (isDone)
+		*isDone = ExprSingleResult;
+
+	slot = econtext->ecxt_outertuple;
+
+	attnum = variable->varattno;
+
+	if (!bms_is_member(attnum, econtext->grouped_cols))
+	{
+		*isNull = true;
+		return (Datum) 0;
+	}
+
+	/* Fetch the value from the slot */
+	return slot_getattr(slot, attnum, isNull);
+}
+
 /* ----------------------------------------------------------------
  *		ExecEvalWholeRowVar
  *
@@ -2987,6 +3034,40 @@ ExecEvalCaseTestExpr(ExprState *exprstate,
 	return econtext->caseValue_datum;
 }
 
+/*
+ * ExecEvalGroupingExpr
+ * Return a bitmask with one bit per argument column; a bit is set
+ * if the corresponding column is not part of the current grouping set.
+ */
+
+static Datum
+ExecEvalGroupingExpr(GroupingState *gstate,
+					 ExprContext *econtext,
+					 bool *isNull,
+					 ExprDoneCond *isDone)
+{
+	int			result = 0;
+	int			current_val = 0;
+	ListCell *lc;
+
+	if (isDone)
+		*isDone = ExprSingleResult;
+
+	*isNull = false;
+
+	foreach(lc, gstate->clauses)
+	{
+		current_val = lfirst_int(lc);
+
+		result = result << 1;
+
+		if (!bms_is_member(current_val, econtext->grouped_cols))
+			result = result | 1;
+	}
+
+	return Int32GetDatum(result);
+}
+
 /* ----------------------------------------------------------------
  *		ExecEvalArray - ARRAY[] expressions
  * ----------------------------------------------------------------
@@ -4385,6 +4466,32 @@ ExecInitExpr(Expr *node, PlanState *parent)
 				state->evalfunc = ExecEvalScalarVar;
 			}
 			break;
+		case T_GroupedVar:
+			Assert(((Var *) node)->varattno != InvalidAttrNumber);
+			state = (ExprState *) makeNode(ExprState);
+			state->evalfunc = ExecEvalScalarVar;
+			break;
+		case T_Grouping:
+			{
+				Grouping	   *grp_node = (Grouping *) node;
+				GroupingState  *grp_state = makeNode(GroupingState);
+				Agg			   *agg = NULL;
+
+				if (!parent
+					|| !IsA(parent->plan, Agg))
+					elog(ERROR, "parent of GROUPING is not an Agg node");
+
+				agg = (Agg *) (parent->plan);
+
+				if (agg->groupingSets)
+					grp_state->clauses = grp_node->cols;
+				else
+					grp_state->clauses = NIL;
+
+				state = (ExprState *) grp_state;
+				state->evalfunc = (ExprStateEvalFunc) ExecEvalGroupingExpr;
+			}
+			break;
 		case T_Const:
 			state = (ExprState *) makeNode(ExprState);
 			state->evalfunc = ExecEvalConst;
diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c
index d5e1273..ad8a3d0 100644
--- a/src/backend/executor/execUtils.c
+++ b/src/backend/executor/execUtils.c
@@ -653,7 +653,7 @@ get_last_attnums(Node *node, ProjectionInfo *projInfo)
 	 * because those do not represent expressions to be evaluated within the
 	 * overall targetlist's econtext.
 	 */
-	if (IsA(node, Aggref))
+	if (IsA(node, Aggref) || IsA(node, Grouping))
 		return false;
 	if (IsA(node, WindowFunc))
 		return false;
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index 510d1c5..beecd36 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -243,7 +243,7 @@ typedef struct AggStatePerAggData
 	 * rest.
 	 */
 
-	Tuplesortstate *sortstate;	/* sort object, if DISTINCT or ORDER BY */
+	Tuplesortstate **sortstate;	/* sort objects, if DISTINCT or ORDER BY */
 
 	/*
 	 * This field is a pre-initialized FunctionCallInfo struct used for
@@ -304,7 +304,8 @@ typedef struct AggHashEntryData
 
 static void initialize_aggregates(AggState *aggstate,
 					  AggStatePerAgg peragg,
-					  AggStatePerGroup pergroup);
+					  AggStatePerGroup pergroup,
+					  int numReinitialize);
 static void advance_transition_function(AggState *aggstate,
 							AggStatePerAgg peraggstate,
 							AggStatePerGroup pergroupstate);
@@ -338,81 +339,101 @@ static Datum GetAggInitVal(Datum textInitVal, Oid transtype);
 static void
 initialize_aggregates(AggState *aggstate,
 					  AggStatePerAgg peragg,
-					  AggStatePerGroup pergroup)
+					  AggStatePerGroup pergroup,
+					  int numReinitialize)
 {
 	int			aggno;
+	int			numGroupingSets = Max(aggstate->numsets, 1);
+	int			i = 0;
+
+	if (numReinitialize < 1)
+		numReinitialize = numGroupingSets;
 
 	for (aggno = 0; aggno < aggstate->numaggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &peragg[aggno];
-		AggStatePerGroup pergroupstate = &pergroup[aggno];
 
 		/*
 		 * Start a fresh sort operation for each DISTINCT/ORDER BY aggregate.
 		 */
 		if (peraggstate->numSortCols > 0)
 		{
-			/*
-			 * In case of rescan, maybe there could be an uncompleted sort
-			 * operation?  Clean it up if so.
-			 */
-			if (peraggstate->sortstate)
-				tuplesort_end(peraggstate->sortstate);
+			for (i = 0; i < numReinitialize; i++)
+			{
+				/*
+				 * In case of rescan, maybe there could be an uncompleted sort
+				 * operation?  Clean it up if so.
+				 */
+				if (peraggstate->sortstate[i])
+					tuplesort_end(peraggstate->sortstate[i]);
 
-			/*
-			 * We use a plain Datum sorter when there's a single input column;
-			 * otherwise sort the full tuple.  (See comments for
-			 * process_ordered_aggregate_single.)
-			 */
-			peraggstate->sortstate =
-				(peraggstate->numInputs == 1) ?
-				tuplesort_begin_datum(peraggstate->evaldesc->attrs[0]->atttypid,
-									  peraggstate->sortOperators[0],
-									  peraggstate->sortCollations[0],
-									  peraggstate->sortNullsFirst[0],
-									  work_mem, false) :
-				tuplesort_begin_heap(peraggstate->evaldesc,
-									 peraggstate->numSortCols,
-									 peraggstate->sortColIdx,
-									 peraggstate->sortOperators,
-									 peraggstate->sortCollations,
-									 peraggstate->sortNullsFirst,
-									 work_mem, false);
+				/*
+				 * We use a plain Datum sorter when there's a single input column;
+				 * otherwise sort the full tuple.  (See comments for
+				 * process_ordered_aggregate_single.)
+				 */
+				peraggstate->sortstate[i] =
+					(peraggstate->numInputs == 1) ?
+					tuplesort_begin_datum(peraggstate->evaldesc->attrs[0]->atttypid,
+										  peraggstate->sortOperators[0],
+										  peraggstate->sortCollations[0],
+										  peraggstate->sortNullsFirst[0],
+										  work_mem, false) :
+					tuplesort_begin_heap(peraggstate->evaldesc,
+										 peraggstate->numSortCols,
+										 peraggstate->sortColIdx,
+										 peraggstate->sortOperators,
+										 peraggstate->sortCollations,
+										 peraggstate->sortNullsFirst,
+										 work_mem, false);
+			}
 		}
 
-		/*
-		 * (Re)set transValue to the initial value.
-		 *
-		 * Note that when the initial value is pass-by-ref, we must copy it
-		 * (into the aggcontext) since we will pfree the transValue later.
+		/*
+		 * If grouping sets are present, we must iterate over all the
+		 * per-group states for the current aggstate; otherwise there is
+		 * only one pergroup state per aggregate.
 		 */
-		if (peraggstate->initValueIsNull)
-			pergroupstate->transValue = peraggstate->initValue;
-		else
+
+		for (i = 0; i < numReinitialize; i++)
 		{
-			MemoryContext oldContext;
+			AggStatePerGroup pergroupstate = &pergroup[aggno + (i * (aggstate->numaggs))];
 
-			oldContext = MemoryContextSwitchTo(aggstate->aggcontext);
-			pergroupstate->transValue = datumCopy(peraggstate->initValue,
-												  peraggstate->transtypeByVal,
-												  peraggstate->transtypeLen);
-			MemoryContextSwitchTo(oldContext);
-		}
-		pergroupstate->transValueIsNull = peraggstate->initValueIsNull;
+			/*
+			 * (Re)set transValue to the initial value.
+			 *
+			 * Note that when the initial value is pass-by-ref, we must copy it
+			 * (into the aggcontext) since we will pfree the transValue later.
+			 */
+			if (peraggstate->initValueIsNull)
+				pergroupstate->transValue = peraggstate->initValue;
+			else
+			{
+				MemoryContext oldContext;
 
-		/*
-		 * If the initial value for the transition state doesn't exist in the
-		 * pg_aggregate table then we will let the first non-NULL value
-		 * returned from the outer procNode become the initial value. (This is
-		 * useful for aggregates like max() and min().) The noTransValue flag
-		 * signals that we still need to do this.
-		 */
-		pergroupstate->noTransValue = peraggstate->initValueIsNull;
+				oldContext = MemoryContextSwitchTo(aggstate->aggcontext[i]->ecxt_per_tuple_memory);
+				pergroupstate->transValue = datumCopy(peraggstate->initValue,
+													  peraggstate->transtypeByVal,
+													  peraggstate->transtypeLen);
+				MemoryContextSwitchTo(oldContext);
+			}
+			pergroupstate->transValueIsNull = peraggstate->initValueIsNull;
+
+			/*
+			 * If the initial value for the transition state doesn't exist in the
+			 * pg_aggregate table then we will let the first non-NULL value
+			 * returned from the outer procNode become the initial value. (This is
+			 * useful for aggregates like max() and min().) The noTransValue flag
+			 * signals that we still need to do this.
+			 */
+			pergroupstate->noTransValue = peraggstate->initValueIsNull;
+		}
 	}
 }
 
 /*
- * Given new input value(s), advance the transition function of an aggregate.
+ * Given new input value(s), advance the transition function of one aggregate
+ * within one grouping set only (already set in aggstate->current_set)
  *
  * The new values (and null flags) have been preloaded into argument positions
  * 1 and up in peraggstate->transfn_fcinfo, so that we needn't copy them again
@@ -455,7 +476,7 @@ advance_transition_function(AggState *aggstate,
 			 * We must copy the datum into aggcontext if it is pass-by-ref. We
 			 * do not need to pfree the old transValue, since it's NULL.
 			 */
-			oldContext = MemoryContextSwitchTo(aggstate->aggcontext);
+			oldContext = MemoryContextSwitchTo(aggstate->aggcontext[aggstate->current_set]->ecxt_per_tuple_memory);
 			pergroupstate->transValue = datumCopy(fcinfo->arg[1],
 												  peraggstate->transtypeByVal,
 												  peraggstate->transtypeLen);
@@ -503,7 +524,7 @@ advance_transition_function(AggState *aggstate,
 	{
 		if (!fcinfo->isnull)
 		{
-			MemoryContextSwitchTo(aggstate->aggcontext);
+			MemoryContextSwitchTo(aggstate->aggcontext[aggstate->current_set]->ecxt_per_tuple_memory);
 			newVal = datumCopy(newVal,
 							   peraggstate->transtypeByVal,
 							   peraggstate->transtypeLen);
@@ -530,11 +551,13 @@ static void
 advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 {
 	int			aggno;
+	int			groupno = 0;
+	int			numGroupingSets = Max(aggstate->numsets, 1);
+	int			numAggs = aggstate->numaggs;
 
-	for (aggno = 0; aggno < aggstate->numaggs; aggno++)
+	for (aggno = 0; aggno < numAggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &aggstate->peragg[aggno];
-		AggStatePerGroup pergroupstate = &pergroup[aggno];
 		ExprState  *filter = peraggstate->aggrefstate->aggfilter;
 		int			numTransInputs = peraggstate->numTransInputs;
 		int			i;
@@ -578,13 +601,16 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 					continue;
 			}
 
-			/* OK, put the tuple into the tuplesort object */
-			if (peraggstate->numInputs == 1)
-				tuplesort_putdatum(peraggstate->sortstate,
-								   slot->tts_values[0],
-								   slot->tts_isnull[0]);
-			else
-				tuplesort_puttupleslot(peraggstate->sortstate, slot);
+			for (groupno = 0; groupno < numGroupingSets; groupno++)
+			{
+				/* OK, put the tuple into the tuplesort object */
+				if (peraggstate->numInputs == 1)
+					tuplesort_putdatum(peraggstate->sortstate[groupno],
+									   slot->tts_values[0],
+									   slot->tts_isnull[0]);
+				else
+					tuplesort_puttupleslot(peraggstate->sortstate[groupno], slot);
+			}
 		}
 		else
 		{
@@ -600,7 +626,14 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 				fcinfo->argnull[i + 1] = slot->tts_isnull[i];
 			}
 
-			advance_transition_function(aggstate, peraggstate, pergroupstate);
+			for (groupno = 0; groupno < numGroupingSets; groupno++)
+			{
+				AggStatePerGroup pergroupstate = &pergroup[aggno + (groupno * numAggs)];
+
+				aggstate->current_set = groupno;
+
+				advance_transition_function(aggstate, peraggstate, pergroupstate);
+			}
 		}
 	}
 }
@@ -623,6 +656,9 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
  * is around 300% faster.  (The speedup for by-reference types is less
  * but still noticeable.)
  *
+ * This function handles only one grouping set (already set in
+ * aggstate->current_set).
+ *
  * When called, CurrentMemoryContext should be the per-query context.
  */
 static void
@@ -642,7 +678,7 @@ process_ordered_aggregate_single(AggState *aggstate,
 
 	Assert(peraggstate->numDistinctCols < 2);
 
-	tuplesort_performsort(peraggstate->sortstate);
+	tuplesort_performsort(peraggstate->sortstate[aggstate->current_set]);
 
 	/* Load the column into argument 1 (arg 0 will be transition value) */
 	newVal = fcinfo->arg + 1;
@@ -654,7 +690,7 @@ process_ordered_aggregate_single(AggState *aggstate,
 	 * pfree them when they are no longer needed.
 	 */
 
-	while (tuplesort_getdatum(peraggstate->sortstate, true,
+	while (tuplesort_getdatum(peraggstate->sortstate[aggstate->current_set], true,
 							  newVal, isNull))
 	{
 		/*
@@ -698,8 +734,8 @@ process_ordered_aggregate_single(AggState *aggstate,
 	if (!oldIsNull && !peraggstate->inputtypeByVal)
 		pfree(DatumGetPointer(oldVal));
 
-	tuplesort_end(peraggstate->sortstate);
-	peraggstate->sortstate = NULL;
+	tuplesort_end(peraggstate->sortstate[aggstate->current_set]);
+	peraggstate->sortstate[aggstate->current_set] = NULL;
 }
 
 /*
@@ -709,6 +745,9 @@ process_ordered_aggregate_single(AggState *aggstate,
  * sort, read out the values in sorted order, and run the transition
  * function on each value (applying DISTINCT if appropriate).
  *
+ * This function handles only one grouping set (already set in
+ * aggstate->current_set).
+ *
  * When called, CurrentMemoryContext should be the per-query context.
  */
 static void
@@ -725,13 +764,13 @@ process_ordered_aggregate_multi(AggState *aggstate,
 	bool		haveOldValue = false;
 	int			i;
 
-	tuplesort_performsort(peraggstate->sortstate);
+	tuplesort_performsort(peraggstate->sortstate[aggstate->current_set]);
 
 	ExecClearTuple(slot1);
 	if (slot2)
 		ExecClearTuple(slot2);
 
-	while (tuplesort_gettupleslot(peraggstate->sortstate, true, slot1))
+	while (tuplesort_gettupleslot(peraggstate->sortstate[aggstate->current_set], true, slot1))
 	{
 		/*
 		 * Extract the first numTransInputs columns as datums to pass to the
@@ -779,8 +818,8 @@ process_ordered_aggregate_multi(AggState *aggstate,
 	if (slot2)
 		ExecClearTuple(slot2);
 
-	tuplesort_end(peraggstate->sortstate);
-	peraggstate->sortstate = NULL;
+	tuplesort_end(peraggstate->sortstate[aggstate->current_set]);
+	peraggstate->sortstate[aggstate->current_set] = NULL;
 }
 
 /*
@@ -832,7 +871,7 @@ finalize_aggregate(AggState *aggstate,
 		/* set up aggstate->curperagg for AggGetAggref() */
 		aggstate->curperagg = peraggstate;
 
-		InitFunctionCallInfoData(fcinfo, &(peraggstate->finalfn),
+		InitFunctionCallInfoData(fcinfo, &peraggstate->finalfn,
 								 numFinalArgs,
 								 peraggstate->aggCollation,
 								 (void *) aggstate, NULL);
@@ -916,7 +955,8 @@ find_unaggregated_cols_walker(Node *node, Bitmapset **colnos)
 		*colnos = bms_add_member(*colnos, var->varattno);
 		return false;
 	}
-	if (IsA(node, Aggref))		/* do not descend into aggregate exprs */
+	/* do not descend into aggregate exprs or GROUPING */
+	if (IsA(node, Aggref) || IsA(node, Grouping))
 		return false;
 	return expression_tree_walker(node, find_unaggregated_cols_walker,
 								  (void *) colnos);
@@ -946,7 +986,7 @@ build_hash_table(AggState *aggstate)
 											  aggstate->hashfunctions,
 											  node->numGroups,
 											  entrysize,
-											  aggstate->aggcontext,
+											  aggstate->aggcontext[0]->ecxt_per_tuple_memory,
 											  tmpmem);
 }
 
@@ -1057,7 +1097,7 @@ lookup_hash_entry(AggState *aggstate, TupleTableSlot *inputslot)
 	if (isnew)
 	{
 		/* initialize aggregates for new tuple group */
-		initialize_aggregates(aggstate, aggstate->peragg, entry->pergroup);
+		initialize_aggregates(aggstate, aggstate->peragg, entry->pergroup, 0);
 	}
 
 	return entry;
@@ -1131,7 +1171,13 @@ agg_retrieve_direct(AggState *aggstate)
 	AggStatePerGroup pergroup;
 	TupleTableSlot *outerslot;
 	TupleTableSlot *firstSlot;
-	int			aggno;
+	int			aggno;
+	bool		hasRollup = aggstate->numsets > 0;
+	int			numGroupingSets = Max(aggstate->numsets, 1);
+	int			currentGroup = 0;
+	int			currentSize = 0;
+	int			numReset = 1;
+	int			i;
 
 	/*
 	 * get state info from node
@@ -1150,131 +1196,233 @@ agg_retrieve_direct(AggState *aggstate)
 	/*
 	 * We loop retrieving groups until we find one matching
 	 * aggstate->ss.ps.qual
+	 *
+	 * For grouping sets, we have the invariant that aggstate->projected_set is
+	 * either -1 (initial call) or the index (starting from 0) in gset_lengths
+	 * for the group we just completed (either by projecting a row or by
+	 * discarding it in the qual).
 	 */
 	while (!aggstate->agg_done)
 	{
 		/*
-		 * If we don't already have the first tuple of the new group, fetch it
-		 * from the outer plan.
-		 */
-		if (aggstate->grp_firstTuple == NULL)
-		{
-			outerslot = ExecProcNode(outerPlan);
-			if (!TupIsNull(outerslot))
-			{
-				/*
-				 * Make a copy of the first input tuple; we will use this for
-				 * comparisons (in group mode) and for projection.
-				 */
-				aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
-			}
-			else
-			{
-				/* outer plan produced no tuples at all */
-				aggstate->agg_done = true;
-				/* If we are grouping, we should produce no tuples too */
-				if (node->aggstrategy != AGG_PLAIN)
-					return NULL;
-			}
-		}
-
-		/*
 		 * Clear the per-output-tuple context for each group, as well as
 		 * aggcontext (which contains any pass-by-ref transvalues of the old
 		 * group).  We also clear any child contexts of the aggcontext; some
 		 * aggregate functions store working state in such contexts.
 		 *
 		 * We use ReScanExprContext not just ResetExprContext because we want
-		 * any registered shutdown callbacks to be called.  That allows
+		 * any registered shutdown callbacks to be called.	That allows
 		 * aggregate functions to ensure they've cleaned up any non-memory
 		 * resources.
 		 */
 		ReScanExprContext(econtext);
 
-		MemoryContextResetAndDeleteChildren(aggstate->aggcontext);
+		if (aggstate->projected_set >= 0 && aggstate->projected_set < numGroupingSets)
+			numReset = aggstate->projected_set + 1;
+		else
+			numReset = numGroupingSets;
+
+		for (i = 0; i < numReset; i++)
+		{
+			ReScanExprContext(aggstate->aggcontext[i]);
+			MemoryContextDeleteChildren(aggstate->aggcontext[i]->ecxt_per_tuple_memory);
+		}
 
-		/*
-		 * Initialize working state for a new input tuple group
+		/* Check if input is complete and there are no more groups to project. */
+		if (aggstate->input_done == true
+			&& aggstate->projected_set >= (numGroupingSets - 1))
+		{
+			aggstate->agg_done = true;
+			break;
+		}
+
+		if (aggstate->projected_set >= 0 && aggstate->projected_set < (numGroupingSets - 1))
+			currentSize = aggstate->gset_lengths[aggstate->projected_set + 1];
+		else
+			currentSize = 0;
+
+		/*-
+		 * If a subgroup for the current grouping set is present, project it.
+		 *
+		 * We have a new group if:
+		 *  - we're out of input but haven't projected all grouping sets
+		 *    (checked above)
+		 * OR
+		 *    - we already projected a row that wasn't from the last grouping
+		 *      set
+		 *    AND
+		 *    - the next grouping set has at least one grouping column (since
+		 *      empty grouping sets project only once input is exhausted)
+		 *    AND
+		 *    - the previous and pending rows differ on the grouping columns
+		 *      of the next grouping set
 		 */
-		initialize_aggregates(aggstate, peragg, pergroup);
+		if (aggstate->input_done
+			|| (node->aggstrategy == AGG_SORTED
+				&& aggstate->projected_set != -1
+				&& aggstate->projected_set < (numGroupingSets - 1)
+				&& currentSize > 0
+				&& !execTuplesMatch(econtext->ecxt_outertuple,
+									tmpcontext->ecxt_outertuple,
+									currentSize,
+									node->grpColIdx,
+									aggstate->eqfunctions,
+									tmpcontext->ecxt_per_tuple_memory)))
+		{
+			++aggstate->projected_set;
 
-		if (aggstate->grp_firstTuple != NULL)
+			Assert(aggstate->projected_set < numGroupingSets);
+			Assert(currentSize > 0 || aggstate->input_done);
+		}
+		else
 		{
 			/*
-			 * Store the copied first input tuple in the tuple table slot
-			 * reserved for it.  The tuple will be deleted when it is cleared
-			 * from the slot.
+			 * We no longer care which group we just projected; the next
+			 * projection will always be the first (or only) grouping set
+			 * (unless the input proves to be empty).
 			 */
-			ExecStoreTuple(aggstate->grp_firstTuple,
-						   firstSlot,
-						   InvalidBuffer,
-						   true);
-			aggstate->grp_firstTuple = NULL;	/* don't keep two pointers */
-
-			/* set up for first advance_aggregates call */
-			tmpcontext->ecxt_outertuple = firstSlot;
+			aggstate->projected_set = 0;
 
 			/*
-			 * Process each outer-plan tuple, and then fetch the next one,
-			 * until we exhaust the outer plan or cross a group boundary.
+			 * If we don't already have the first tuple of the new group, fetch it
+			 * from the outer plan.
 			 */
-			for (;;)
+			if (aggstate->grp_firstTuple == NULL)
 			{
-				advance_aggregates(aggstate, pergroup);
-
-				/* Reset per-input-tuple context after each tuple */
-				ResetExprContext(tmpcontext);
-
 				outerslot = ExecProcNode(outerPlan);
-				if (TupIsNull(outerslot))
+				if (!TupIsNull(outerslot))
 				{
-					/* no more outer-plan tuples available */
-					aggstate->agg_done = true;
-					break;
+					/*
+					 * Make a copy of the first input tuple; we will use this for
+					 * comparisons (in group mode) and for projection.
+					 */
+					aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
 				}
-				/* set up for next advance_aggregates call */
-				tmpcontext->ecxt_outertuple = outerslot;
+				else
+				{
+					/* outer plan produced no tuples at all */
+					if (hasRollup)
+					{
+						/*
+						 * If there was no input at all, we need to project
+						 * rows only if there are grouping sets of size 0.
+						 * Note that this implies that there can't be any
+						 * references to ungrouped Vars, which would otherwise
+						 * cause issues with the empty output slot.
+						 */
+						aggstate->input_done = true;
+
+						while (aggstate->gset_lengths[aggstate->projected_set] > 0)
+						{
+							aggstate->projected_set += 1;
+							if (aggstate->projected_set >= numGroupingSets)
+							{
+								aggstate->agg_done = true;
+								return NULL;
+							}
+						}
+					}
+					else
+					{
+						aggstate->agg_done = true;
+						/* If we are grouping, we should produce no tuples too */
+						if (node->aggstrategy != AGG_PLAIN)
+							return NULL;
+					}
+				}
+			}
+
+			/*
+			 * Initialize working state for a new input tuple group
+			 */
+			initialize_aggregates(aggstate, peragg, pergroup, numReset);
+
+			if (aggstate->grp_firstTuple != NULL)
+			{
+				/*
+				 * Store the copied first input tuple in the tuple table slot
+				 * reserved for it.  The tuple will be deleted when it is cleared
+				 * from the slot.
+				 */
+				ExecStoreTuple(aggstate->grp_firstTuple,
+							   firstSlot,
+							   InvalidBuffer,
+							   true);
+				aggstate->grp_firstTuple = NULL;	/* don't keep two pointers */
+
+				/* set up for first advance_aggregates call */
+				tmpcontext->ecxt_outertuple = firstSlot;
 
 				/*
-				 * If we are grouping, check whether we've crossed a group
-				 * boundary.
+				 * Process each outer-plan tuple, and then fetch the next one,
+				 * until we exhaust the outer plan or cross a group boundary.
 				 */
-				if (node->aggstrategy == AGG_SORTED)
+				for (;;)
 				{
-					if (!execTuplesMatch(firstSlot,
-										 outerslot,
-										 node->numCols, node->grpColIdx,
-										 aggstate->eqfunctions,
-										 tmpcontext->ecxt_per_tuple_memory))
+					advance_aggregates(aggstate, pergroup);
+
+					/* Reset per-input-tuple context after each tuple */
+					ResetExprContext(tmpcontext);
+
+					outerslot = ExecProcNode(outerPlan);
+					if (TupIsNull(outerslot))
 					{
-						/*
-						 * Save the first input tuple of the next group.
-						 */
-						aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
-						break;
+						/* no more outer-plan tuples available */
+						if (hasRollup)
+						{
+							aggstate->input_done = true;
+							break;
+						}
+						else
+						{
+							aggstate->agg_done = true;
+							break;
+						}
+					}
+					/* set up for next advance_aggregates call */
+					tmpcontext->ecxt_outertuple = outerslot;
+
+					/*
+					 * If we are grouping, check whether we've crossed a group
+					 * boundary.
+					 */
+					if (node->aggstrategy == AGG_SORTED)
+					{
+						if (!execTuplesMatch(firstSlot,
+											 outerslot,
+											 node->numCols,
+											 node->grpColIdx,
+											 aggstate->eqfunctions,
+											 tmpcontext->ecxt_per_tuple_memory))
+						{
+							aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
+							break;
+						}
 					}
 				}
 			}
+
+			/*
+			 * Use the representative input tuple for any references to
+			 * non-aggregated input columns in aggregate direct args, the node
+			 * qual, and the tlist.  (If we are not grouping, and there are no
+			 * input rows at all, we will come here with an empty firstSlot ...
+			 * but if not grouping, there can't be any references to
+			 * non-aggregated input columns, so no problem.)
+			 */
+			econtext->ecxt_outertuple = firstSlot;
 		}
 
-		/*
-		 * Use the representative input tuple for any references to
-		 * non-aggregated input columns in aggregate direct args, the node
-		 * qual, and the tlist.  (If we are not grouping, and there are no
-		 * input rows at all, we will come here with an empty firstSlot ...
-		 * but if not grouping, there can't be any references to
-		 * non-aggregated input columns, so no problem.)
-		 */
-		econtext->ecxt_outertuple = firstSlot;
+		Assert(aggstate->projected_set >= 0);
+
+		aggstate->current_set = currentGroup = aggstate->projected_set;
 
-		/*
-		 * Done scanning input tuple group. Finalize each aggregate
-		 * calculation, and stash results in the per-output-tuple context.
-		 */
 		for (aggno = 0; aggno < aggstate->numaggs; aggno++)
 		{
 			AggStatePerAgg peraggstate = &peragg[aggno];
-			AggStatePerGroup pergroupstate = &pergroup[aggno];
+			AggStatePerGroup pergroupstate;
+
+			pergroupstate = &pergroup[aggno + (currentGroup * (aggstate->numaggs))];
 
 			if (peraggstate->numSortCols > 0)
 			{
@@ -1292,6 +1440,9 @@ agg_retrieve_direct(AggState *aggstate)
 							   &aggvalues[aggno], &aggnulls[aggno]);
 		}
 
+		if (hasRollup)
+			econtext->grouped_cols = aggstate->grouped_cols[currentGroup];
+
 		/*
 		 * Check the qual (HAVING clause); if the group does not match, ignore
 		 * it and loop back to try to process another group.
@@ -1495,6 +1646,10 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	int			numaggs,
 				aggno;
 	ListCell   *l;
+	int			numGroupingSets = 1;
+	int			currentsortno = 0;
+	int			i = 0;
+	int			j = 0;
 
 	/* check for unsupported flags */
 	Assert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK)));
@@ -1508,38 +1663,69 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 
 	aggstate->aggs = NIL;
 	aggstate->numaggs = 0;
+	aggstate->numsets = 0;
 	aggstate->eqfunctions = NULL;
 	aggstate->hashfunctions = NULL;
+	aggstate->projected_set = -1;
+	aggstate->current_set = 0;
 	aggstate->peragg = NULL;
 	aggstate->curperagg = NULL;
 	aggstate->agg_done = false;
+	aggstate->input_done = false;
 	aggstate->pergroup = NULL;
 	aggstate->grp_firstTuple = NULL;
 	aggstate->hashtable = NULL;
 
+	if (node->groupingSets)
+	{
+		Assert(node->aggstrategy != AGG_HASHED);
+
+		numGroupingSets = list_length(node->groupingSets);
+		aggstate->numsets = numGroupingSets;
+		aggstate->gset_lengths = palloc(numGroupingSets * sizeof(int));
+		aggstate->grouped_cols = palloc(numGroupingSets * sizeof(Bitmapset *));
+
+		i = 0;
+		foreach(l, node->groupingSets)
+		{
+			int current_length = list_length(lfirst(l));
+			Bitmapset *cols = NULL;
+
+			/* planner forces this to be correct */
+			for (j = 0; j < current_length; ++j)
+				cols = bms_add_member(cols, node->grpColIdx[j]);
+
+			aggstate->grouped_cols[i] = cols;
+			aggstate->gset_lengths[i] = current_length;
+			++i;
+		}
+	}
+
+	aggstate->aggcontext = (ExprContext **) palloc0(sizeof(ExprContext *) * numGroupingSets);
+
 	/*
-	 * Create expression contexts.  We need two, one for per-input-tuple
-	 * processing and one for per-output-tuple processing.  We cheat a little
-	 * by using ExecAssignExprContext() to build both.
+	 * Create expression contexts.  We need three or more, one for
+	 * per-input-tuple processing, one for per-output-tuple processing, and one
+	 * for each grouping set.  The per-tuple memory context of the
+	 * per-grouping-set ExprContexts replaces the standalone memory context
+	 * formerly used to hold transition values.  We cheat a little by using
+	 * ExecAssignExprContext() to build all of them.
+	 *
+	 * NOTE: the details of what is stored in aggcontext and what is stored in
+	 * the regular per-query memory context are driven by a simple decision: we
+	 * want to reset the aggcontext at group boundaries (if not hashing) and in
+	 * ExecReScanAgg to recover no-longer-wanted space.
 	 */
 	ExecAssignExprContext(estate, &aggstate->ss.ps);
 	aggstate->tmpcontext = aggstate->ss.ps.ps_ExprContext;
-	ExecAssignExprContext(estate, &aggstate->ss.ps);
 
-	/*
-	 * We also need a long-lived memory context for holding hashtable data
-	 * structures and transition values.  NOTE: the details of what is stored
-	 * in aggcontext and what is stored in the regular per-query memory
-	 * context are driven by a simple decision: we want to reset the
-	 * aggcontext at group boundaries (if not hashing) and in ExecReScanAgg to
-	 * recover no-longer-wanted space.
-	 */
-	aggstate->aggcontext =
-		AllocSetContextCreate(CurrentMemoryContext,
-							  "AggContext",
-							  ALLOCSET_DEFAULT_MINSIZE,
-							  ALLOCSET_DEFAULT_INITSIZE,
-							  ALLOCSET_DEFAULT_MAXSIZE);
+	for (i = 0; i < numGroupingSets; ++i)
+	{
+		ExecAssignExprContext(estate, &aggstate->ss.ps);
+		aggstate->aggcontext[i] = aggstate->ss.ps.ps_ExprContext;
+	}
+
+	ExecAssignExprContext(estate, &aggstate->ss.ps);
 
 	/*
 	 * tuple table initialization
@@ -1645,7 +1831,8 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	{
 		AggStatePerGroup pergroup;
 
-		pergroup = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData) * numaggs);
+		pergroup = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData) * numaggs * numGroupingSets);
+
 		aggstate->pergroup = pergroup;
 	}
 
@@ -1708,7 +1895,10 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 		/* Begin filling in the peraggstate data */
 		peraggstate->aggrefstate = aggrefstate;
 		peraggstate->aggref = aggref;
-		peraggstate->sortstate = NULL;
+		peraggstate->sortstate = (Tuplesortstate **) palloc0(sizeof(Tuplesortstate *) * numGroupingSets);
+
+		for (currentsortno = 0; currentsortno < numGroupingSets; currentsortno++)
+			peraggstate->sortstate[currentsortno] = NULL;
 
 		/* Fetch the pg_aggregate row */
 		aggTuple = SearchSysCache1(AGGFNOID,
@@ -2016,31 +2206,35 @@ ExecEndAgg(AggState *node)
 {
 	PlanState  *outerPlan;
 	int			aggno;
+	int			numGroupingSets = Max(node->numsets, 1);
+	int			i = 0;
 
 	/* Make sure we have closed any open tuplesorts */
 	for (aggno = 0; aggno < node->numaggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &node->peragg[aggno];
 
-		if (peraggstate->sortstate)
-			tuplesort_end(peraggstate->sortstate);
+		for (i = 0; i < numGroupingSets; i++)
+		{
+			if (peraggstate->sortstate[i])
+				tuplesort_end(peraggstate->sortstate[i]);
+		}
 	}
 
 	/* And ensure any agg shutdown callbacks have been called */
-	ReScanExprContext(node->ss.ps.ps_ExprContext);
+	for (i = 0; i < numGroupingSets; ++i)
+		ReScanExprContext(node->aggcontext[i]);
 
 	/*
-	 * Free both the expr contexts.
+	 * We don't actually free any ExprContexts here (see comment in
+	 * ExecFreeExprContext); just unlinking the output one from the plan node
+	 * suffices.
 	 */
 	ExecFreeExprContext(&node->ss.ps);
-	node->ss.ps.ps_ExprContext = node->tmpcontext;
-	ExecFreeExprContext(&node->ss.ps);
 
 	/* clean up tuple table */
 	ExecClearTuple(node->ss.ss_ScanTupleSlot);
 
-	MemoryContextDelete(node->aggcontext);
-
 	outerPlan = outerPlanState(node);
 	ExecEndNode(outerPlan);
 }
@@ -2049,13 +2243,17 @@ void
 ExecReScanAgg(AggState *node)
 {
 	ExprContext *econtext = node->ss.ps.ps_ExprContext;
+	Agg		   *aggnode = (Agg *) node->ss.ps.plan;
 	int			aggno;
+	int         numGroupingSets = Max(node->numsets, 1);
+	int         groupno;
+	int         i;
 
 	node->agg_done = false;
 
 	node->ss.ps.ps_TupFromTlist = false;
 
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+	if (aggnode->aggstrategy == AGG_HASHED)
 	{
 		/*
 		 * In the hashed case, if we haven't yet built the hash table then we
@@ -2081,14 +2279,35 @@ ExecReScanAgg(AggState *node)
 	/* Make sure we have closed any open tuplesorts */
 	for (aggno = 0; aggno < node->numaggs; aggno++)
 	{
-		AggStatePerAgg peraggstate = &node->peragg[aggno];
+		for (groupno = 0; groupno < numGroupingSets; groupno++)
+		{
+			AggStatePerAgg peraggstate = &node->peragg[aggno];
 
-		if (peraggstate->sortstate)
-			tuplesort_end(peraggstate->sortstate);
-		peraggstate->sortstate = NULL;
+			if (peraggstate->sortstate[groupno])
+			{
+				tuplesort_end(peraggstate->sortstate[groupno]);
+				peraggstate->sortstate[groupno] = NULL;
+			}
+		}
 	}
 
-	/* We don't need to ReScanExprContext here; ExecReScan already did it */
+	/*
+	 * We don't need to ReScanExprContext the output tuple context here;
+	 * ExecReScan already did it. But we do need to reset our per-grouping-set
+	 * contexts, which may have transvalues stored in them.
+	 *
+	 * Note that with AGG_HASHED, the hash table is allocated in a sub-context
+	 * of the aggcontext. We're going to rebuild the hash table from scratch,
+	 * so we need to use MemoryContextDeleteChildren() to avoid leaking the old
+	 * hash table's memory context header. (ReScanExprContext does the actual
+	 * reset, but it doesn't delete child contexts.)
+	 */
+
+	for (i = 0; i < numGroupingSets; ++i)
+	{
+		ReScanExprContext(node->aggcontext[i]);
+		MemoryContextDeleteChildren(node->aggcontext[i]->ecxt_per_tuple_memory);
+	}
 
 	/* Release first tuple of group, if we have made a copy */
 	if (node->grp_firstTuple != NULL)
@@ -2096,21 +2315,13 @@ ExecReScanAgg(AggState *node)
 		heap_freetuple(node->grp_firstTuple);
 		node->grp_firstTuple = NULL;
 	}
+	ExecClearTuple(node->ss.ss_ScanTupleSlot);
 
 	/* Forget current agg values */
 	MemSet(econtext->ecxt_aggvalues, 0, sizeof(Datum) * node->numaggs);
 	MemSet(econtext->ecxt_aggnulls, 0, sizeof(bool) * node->numaggs);
 
-	/*
-	 * Release all temp storage. Note that with AGG_HASHED, the hash table is
-	 * allocated in a sub-context of the aggcontext. We're going to rebuild
-	 * the hash table from scratch, so we need to use
-	 * MemoryContextResetAndDeleteChildren() to avoid leaking the old hash
-	 * table's memory context header.
-	 */
-	MemoryContextResetAndDeleteChildren(node->aggcontext);
-
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+	if (aggnode->aggstrategy == AGG_HASHED)
 	{
 		/* Rebuild an empty hash table */
 		build_hash_table(node);
@@ -2122,7 +2333,9 @@ ExecReScanAgg(AggState *node)
 		 * Reset the per-group state (in particular, mark transvalues null)
 		 */
 		MemSet(node->pergroup, 0,
-			   sizeof(AggStatePerGroupData) * node->numaggs);
+			   sizeof(AggStatePerGroupData) * node->numaggs * numGroupingSets);
+
+		node->input_done = false;
 	}
 
 	/*
@@ -2150,8 +2363,11 @@ ExecReScanAgg(AggState *node)
  * values could conceivably appear in future.)
  *
  * If aggcontext isn't NULL, the function also stores at *aggcontext the
- * identity of the memory context that aggregate transition values are
- * being stored in.
+ * identity of the memory context that aggregate transition values are being
+ * stored in.  Note that the same aggregate call site (flinfo) may be called
+ * interleaved on different transition values in different contexts, so it's
+ * not kosher to cache aggcontext under fn_extra.  It is, however, kosher to
+ * cache it in the transvalue itself (for internal-type transvalues).
  */
 int
 AggCheckCallContext(FunctionCallInfo fcinfo, MemoryContext *aggcontext)
@@ -2159,7 +2375,11 @@ AggCheckCallContext(FunctionCallInfo fcinfo, MemoryContext *aggcontext)
 	if (fcinfo->context && IsA(fcinfo->context, AggState))
 	{
 		if (aggcontext)
-			*aggcontext = ((AggState *) fcinfo->context)->aggcontext;
+		{
+			AggState    *aggstate = ((AggState *) fcinfo->context);
+			ExprContext *cxt  = aggstate->aggcontext[aggstate->current_set];
+			*aggcontext = cxt->ecxt_per_tuple_memory;
+		}
 		return AGG_CONTEXT_AGGREGATE;
 	}
 	if (fcinfo->context && IsA(fcinfo->context, WindowAggState))
@@ -2243,8 +2463,9 @@ AggRegisterCallback(FunctionCallInfo fcinfo,
 	if (fcinfo->context && IsA(fcinfo->context, AggState))
 	{
 		AggState   *aggstate = (AggState *) fcinfo->context;
+		ExprContext *cxt  = aggstate->aggcontext[aggstate->current_set];
 
-		RegisterExprContextCallback(aggstate->ss.ps.ps_ExprContext, func, arg);
+		RegisterExprContextCallback(cxt, func, arg);
 
 		return;
 	}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 225756c..cb648f8 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -779,6 +779,7 @@ _copyAgg(const Agg *from)
 		COPY_POINTER_FIELD(grpOperators, from->numCols * sizeof(Oid));
 	}
 	COPY_SCALAR_FIELD(numGroups);
+	COPY_NODE_FIELD(groupingSets);
 
 	return newnode;
 }
@@ -1065,6 +1066,59 @@ _copyVar(const Var *from)
 }
 
 /*
+ * _copyGrouping
+ */
+static Grouping *
+_copyGrouping(const Grouping *from)
+{
+	Grouping		   *newnode = makeNode(Grouping);
+
+	COPY_NODE_FIELD(args);
+	COPY_NODE_FIELD(refs);
+	COPY_NODE_FIELD(cols);
+	COPY_LOCATION_FIELD(location);
+	COPY_SCALAR_FIELD(agglevelsup);
+
+	return newnode;
+}
+
+/*
+ * _copyGroupedVar
+ */
+static GroupedVar *
+_copyGroupedVar(const GroupedVar *from)
+{
+	GroupedVar		   *newnode = makeNode(GroupedVar);
+
+	COPY_SCALAR_FIELD(varno);
+	COPY_SCALAR_FIELD(varattno);
+	COPY_SCALAR_FIELD(vartype);
+	COPY_SCALAR_FIELD(vartypmod);
+	COPY_SCALAR_FIELD(varcollid);
+	COPY_SCALAR_FIELD(varlevelsup);
+	COPY_SCALAR_FIELD(varnoold);
+	COPY_SCALAR_FIELD(varoattno);
+	COPY_LOCATION_FIELD(location);
+
+	return newnode;
+}
+
+/*
+ * _copyGroupingSet
+ */
+static GroupingSet *
+_copyGroupingSet(const GroupingSet *from)
+{
+	GroupingSet		   *newnode = makeNode(GroupingSet);
+
+	COPY_SCALAR_FIELD(kind);
+	COPY_NODE_FIELD(content);
+	COPY_LOCATION_FIELD(location);
+
+	return newnode;
+}
+
+/*
  * _copyConst
  */
 static Const *
@@ -2496,6 +2550,7 @@ _copyQuery(const Query *from)
 	COPY_NODE_FIELD(withCheckOptions);
 	COPY_NODE_FIELD(returningList);
 	COPY_NODE_FIELD(groupClause);
+	COPY_NODE_FIELD(groupingSets);
 	COPY_NODE_FIELD(havingQual);
 	COPY_NODE_FIELD(windowClause);
 	COPY_NODE_FIELD(distinctClause);
@@ -4109,6 +4164,15 @@ copyObject(const void *from)
 		case T_Var:
 			retval = _copyVar(from);
 			break;
+		case T_GroupedVar:
+			retval = _copyGroupedVar(from);
+			break;
+		case T_Grouping:
+			retval = _copyGrouping(from);
+			break;
+		case T_GroupingSet:
+			retval = _copyGroupingSet(from);
+			break;
 		case T_Const:
 			retval = _copyConst(from);
 			break;
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 905468e..d2cb13b 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -153,6 +153,47 @@ _equalVar(const Var *a, const Var *b)
 }
 
 static bool
+_equalGrouping(const Grouping *a, const Grouping *b)
+{
+	COMPARE_NODE_FIELD(args);
+
+	/*
+	 * We must not compare the refs or cols fields.
+	 */
+
+	COMPARE_LOCATION_FIELD(location);
+	COMPARE_SCALAR_FIELD(agglevelsup);
+
+	return true;
+}
+
+static bool
+_equalGroupedVar(const GroupedVar *a, const GroupedVar *b)
+{
+	COMPARE_SCALAR_FIELD(varno);
+	COMPARE_SCALAR_FIELD(varattno);
+	COMPARE_SCALAR_FIELD(vartype);
+	COMPARE_SCALAR_FIELD(vartypmod);
+	COMPARE_SCALAR_FIELD(varcollid);
+	COMPARE_SCALAR_FIELD(varlevelsup);
+	COMPARE_SCALAR_FIELD(varnoold);
+	COMPARE_SCALAR_FIELD(varoattno);
+	COMPARE_LOCATION_FIELD(location);
+
+	return true;
+}
+
+static bool
+_equalGroupingSet(const GroupingSet *a, const GroupingSet *b)
+{
+	COMPARE_SCALAR_FIELD(kind);
+	COMPARE_NODE_FIELD(content);
+	COMPARE_LOCATION_FIELD(location);
+
+	return true;
+}
+
+static bool
 _equalConst(const Const *a, const Const *b)
 {
 	COMPARE_SCALAR_FIELD(consttype);
@@ -865,6 +906,7 @@ _equalQuery(const Query *a, const Query *b)
 	COMPARE_NODE_FIELD(withCheckOptions);
 	COMPARE_NODE_FIELD(returningList);
 	COMPARE_NODE_FIELD(groupClause);
+	COMPARE_NODE_FIELD(groupingSets);
 	COMPARE_NODE_FIELD(havingQual);
 	COMPARE_NODE_FIELD(windowClause);
 	COMPARE_NODE_FIELD(distinctClause);
@@ -2582,6 +2624,15 @@ equal(const void *a, const void *b)
 		case T_Var:
 			retval = _equalVar(a, b);
 			break;
+		case T_GroupedVar:
+			retval = _equalGroupedVar(a, b);
+			break;
+		case T_Grouping:
+			retval = _equalGrouping(a, b);
+			break;
+		case T_GroupingSet:
+			retval = _equalGroupingSet(a, b);
+			break;
 		case T_Const:
 			retval = _equalConst(a, b);
 			break;
diff --git a/src/backend/nodes/list.c b/src/backend/nodes/list.c
index 5c09d2f..f878d1f 100644
--- a/src/backend/nodes/list.c
+++ b/src/backend/nodes/list.c
@@ -823,6 +823,32 @@ list_intersection(const List *list1, const List *list2)
 }
 
 /*
+ * As list_intersection but operates on lists of integers.
+ */
+List *
+list_intersection_int(const List *list1, const List *list2)
+{
+	List	   *result;
+	const ListCell *cell;
+
+	if (list1 == NIL || list2 == NIL)
+		return NIL;
+
+	Assert(IsIntegerList(list1));
+	Assert(IsIntegerList(list2));
+
+	result = NIL;
+	foreach(cell, list1)
+	{
+		if (list_member_int(list2, lfirst_int(cell)))
+			result = lappend_int(result, lfirst_int(cell));
+	}
+
+	check_list_invariants(result);
+	return result;
+}
+
+/*
  * Return a list that contains all the cells in list1 that are not in
  * list2. The returned list is freshly allocated via palloc(), but the
  * cells themselves point to the same objects as the cells of the
diff --git a/src/backend/nodes/makefuncs.c b/src/backend/nodes/makefuncs.c
index da59c58..e930cef 100644
--- a/src/backend/nodes/makefuncs.c
+++ b/src/backend/nodes/makefuncs.c
@@ -554,3 +554,18 @@ makeFuncCall(List *name, List *args, int location)
 	n->location = location;
 	return n;
 }
+
+/*
+ * makeGroupingSet
+ *	  create a GroupingSet node with the given kind, content and location
+ */
+GroupingSet *
+makeGroupingSet(GroupingSetKind kind, List *content, int location)
+{
+	GroupingSet	   *n = makeNode(GroupingSet);
+
+	n->kind = kind;
+	n->content = content;
+	n->location = location;
+	return n;
+}
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index 41e973b..6a63d1b 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -45,6 +45,12 @@ exprType(const Node *expr)
 		case T_Var:
 			type = ((const Var *) expr)->vartype;
 			break;
+		case T_Grouping:
+			type = INT4OID;
+			break;
+		case T_GroupedVar:
+			type = ((const GroupedVar *) expr)->vartype;
+			break;
 		case T_Const:
 			type = ((const Const *) expr)->consttype;
 			break;
@@ -261,6 +267,10 @@ exprTypmod(const Node *expr)
 	{
 		case T_Var:
 			return ((const Var *) expr)->vartypmod;
+		case T_Grouping:
+			return -1;
+		case T_GroupedVar:
+			return ((const GroupedVar *) expr)->vartypmod;
 		case T_Const:
 			return ((const Const *) expr)->consttypmod;
 		case T_Param:
@@ -734,6 +744,12 @@ exprCollation(const Node *expr)
 		case T_Var:
 			coll = ((const Var *) expr)->varcollid;
 			break;
+		case T_Grouping:
+			coll = InvalidOid;
+			break;
+		case T_GroupedVar:
+			coll = ((const GroupedVar *) expr)->varcollid;
+			break;
 		case T_Const:
 			coll = ((const Const *) expr)->constcollid;
 			break;
@@ -967,6 +983,9 @@ exprSetCollation(Node *expr, Oid collation)
 		case T_Var:
 			((Var *) expr)->varcollid = collation;
 			break;
+		case T_GroupedVar:
+			((GroupedVar *) expr)->varcollid = collation;
+			break;
 		case T_Const:
 			((Const *) expr)->constcollid = collation;
 			break;
@@ -1003,6 +1022,9 @@ exprSetCollation(Node *expr, Oid collation)
 		case T_BoolExpr:
 			Assert(!OidIsValid(collation));		/* result is always boolean */
 			break;
+		case T_Grouping:
+			Assert(!OidIsValid(collation));
+			break;
 		case T_SubLink:
 #ifdef USE_ASSERT_CHECKING
 			{
@@ -1182,6 +1204,15 @@ exprLocation(const Node *expr)
 		case T_Var:
 			loc = ((const Var *) expr)->location;
 			break;
+		case T_Grouping:
+			loc = ((const Grouping *) expr)->location;
+			break;
+		case T_GroupedVar:
+			loc = ((const GroupedVar *) expr)->location;
+			break;
+		case T_GroupingSet:
+			loc = ((const GroupingSet *) expr)->location;
+			break;
 		case T_Const:
 			loc = ((const Const *) expr)->location;
 			break;
@@ -1622,6 +1653,7 @@ expression_tree_walker(Node *node,
 	switch (nodeTag(node))
 	{
 		case T_Var:
+		case T_GroupedVar:
 		case T_Const:
 		case T_Param:
 		case T_CoerceToDomainValue:
@@ -1655,6 +1687,15 @@ expression_tree_walker(Node *node,
 					return true;
 			}
 			break;
+		case T_Grouping:
+			{
+				Grouping   *grouping = (Grouping *) node;
+
+				if (expression_tree_walker((Node *) grouping->args,
+										   walker, context))
+					return true;
+			}
+			break;
 		case T_WindowFunc:
 			{
 				WindowFunc *expr = (WindowFunc *) node;
@@ -2144,6 +2185,15 @@ expression_tree_mutator(Node *node,
 				return (Node *) newnode;
 			}
 			break;
+		case T_GroupedVar:
+			{
+				GroupedVar         *groupedvar = (GroupedVar *) node;
+				GroupedVar		   *newnode;
+
+				FLATCOPY(newnode, groupedvar, GroupedVar);
+				return (Node *) newnode;
+			}
+			break;
 		case T_Const:
 			{
 				Const	   *oldnode = (Const *) node;
@@ -2162,6 +2212,17 @@ expression_tree_mutator(Node *node,
 		case T_RangeTblRef:
 		case T_SortGroupClause:
 			return (Node *) copyObject(node);
+		case T_Grouping:
+			{
+				Grouping	   *grouping = (Grouping *) node;
+				Grouping	   *newnode;
+
+				FLATCOPY(newnode, grouping, Grouping);
+				MUTATE(newnode->args, grouping->args, List *);
+				/* assume no need to copy or mutate the refs list */
+				return (Node *) newnode;
+			}
+			break;
 		case T_WithCheckOption:
 			{
 				WithCheckOption *wco = (WithCheckOption *) node;
@@ -3209,6 +3270,8 @@ raw_expression_tree_walker(Node *node,
 			return walker(((WithClause *) node)->ctes, context);
 		case T_CommonTableExpr:
 			return walker(((CommonTableExpr *) node)->ctequery, context);
+		case T_GroupingSet:
+			return walker(((GroupingSet *) node)->content, context);
 		default:
 			elog(ERROR, "unrecognized node type: %d",
 				 (int) nodeTag(node));
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 1ff78eb..a9cdb95 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -643,6 +643,8 @@ _outAgg(StringInfo str, const Agg *node)
 		appendStringInfo(str, " %u", node->grpOperators[i]);
 
 	WRITE_LONG_FIELD(numGroups);
+
+	WRITE_NODE_FIELD(groupingSets);
 }
 
 static void
@@ -912,6 +914,44 @@ _outVar(StringInfo str, const Var *node)
 }
 
 static void
+_outGrouping(StringInfo str, const Grouping *node)
+{
+	WRITE_NODE_TYPE("GROUPING");
+
+	WRITE_NODE_FIELD(args);
+	WRITE_NODE_FIELD(refs);
+	WRITE_NODE_FIELD(cols);
+	WRITE_LOCATION_FIELD(location);
+	WRITE_INT_FIELD(agglevelsup);
+}
+
+static void
+_outGroupedVar(StringInfo str, const GroupedVar *node)
+{
+	WRITE_NODE_TYPE("GROUPEDVAR");
+
+	WRITE_UINT_FIELD(varno);
+	WRITE_INT_FIELD(varattno);
+	WRITE_OID_FIELD(vartype);
+	WRITE_INT_FIELD(vartypmod);
+	WRITE_OID_FIELD(varcollid);
+	WRITE_UINT_FIELD(varlevelsup);
+	WRITE_UINT_FIELD(varnoold);
+	WRITE_INT_FIELD(varoattno);
+	WRITE_LOCATION_FIELD(location);
+}
+
+static void
+_outGroupingSet(StringInfo str, const GroupingSet *node)
+{
+	WRITE_NODE_TYPE("GROUPINGSET");
+
+	WRITE_ENUM_FIELD(kind, GroupingSetKind);
+	WRITE_NODE_FIELD(content);
+	WRITE_LOCATION_FIELD(location);
+}
+
+static void
 _outConst(StringInfo str, const Const *node)
 {
 	WRITE_NODE_TYPE("CONST");
@@ -2271,6 +2311,7 @@ _outQuery(StringInfo str, const Query *node)
 	WRITE_NODE_FIELD(withCheckOptions);
 	WRITE_NODE_FIELD(returningList);
 	WRITE_NODE_FIELD(groupClause);
+	WRITE_NODE_FIELD(groupingSets);
 	WRITE_NODE_FIELD(havingQual);
 	WRITE_NODE_FIELD(windowClause);
 	WRITE_NODE_FIELD(distinctClause);
@@ -2915,6 +2956,15 @@ _outNode(StringInfo str, const void *obj)
 			case T_Var:
 				_outVar(str, obj);
 				break;
+			case T_GroupedVar:
+				_outGroupedVar(str, obj);
+				break;
+			case T_Grouping:
+				_outGrouping(str, obj);
+				break;
+			case T_GroupingSet:
+				_outGroupingSet(str, obj);
+				break;
 			case T_Const:
 				_outConst(str, obj);
 				break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index a324100..f8ed6ba 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -216,6 +216,7 @@ _readQuery(void)
 	READ_NODE_FIELD(withCheckOptions);
 	READ_NODE_FIELD(returningList);
 	READ_NODE_FIELD(groupClause);
+	READ_NODE_FIELD(groupingSets);
 	READ_NODE_FIELD(havingQual);
 	READ_NODE_FIELD(windowClause);
 	READ_NODE_FIELD(distinctClause);
@@ -440,6 +441,53 @@ _readVar(void)
 	READ_DONE();
 }
 
+static Grouping *
+_readGrouping(void)
+{
+	READ_LOCALS(Grouping);
+
+	READ_NODE_FIELD(args);
+	READ_NODE_FIELD(refs);
+	READ_NODE_FIELD(cols);
+	READ_LOCATION_FIELD(location);
+	READ_INT_FIELD(agglevelsup);
+
+	READ_DONE();
+}
+
+/*
+ * _readGroupedVar
+ */
+static GroupedVar *
+_readGroupedVar(void)
+{
+	READ_LOCALS(GroupedVar);
+
+	READ_UINT_FIELD(varno);
+	READ_INT_FIELD(varattno);
+	READ_OID_FIELD(vartype);
+	READ_INT_FIELD(vartypmod);
+	READ_OID_FIELD(varcollid);
+	READ_UINT_FIELD(varlevelsup);
+	READ_UINT_FIELD(varnoold);
+	READ_INT_FIELD(varoattno);
+	READ_LOCATION_FIELD(location);
+
+	READ_DONE();
+}
+
+static GroupingSet *
+_readGroupingSet(void)
+{
+	READ_LOCALS(GroupingSet);
+
+	READ_ENUM_FIELD(kind, GroupingSetKind);
+	READ_NODE_FIELD(content);
+	READ_LOCATION_FIELD(location);
+
+	READ_DONE();
+}
+
 /*
  * _readConst
  */
@@ -1321,6 +1369,12 @@ parseNodeString(void)
 		return_value = _readIntoClause();
 	else if (MATCH("VAR", 3))
 		return_value = _readVar();
+	else if (MATCH("GROUPEDVAR", 10))
+		return_value = _readGroupedVar();
+	else if (MATCH("GROUPING", 8))
+		return_value = _readGrouping();
+	else if (MATCH("GROUPINGSET", 11))
+		return_value = _readGroupingSet();
 	else if (MATCH("CONST", 5))
 		return_value = _readConst();
 	else if (MATCH("PARAM", 5))
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index c81efe9..a16df6f 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -1231,6 +1231,7 @@ set_subquery_pathlist(PlannerInfo *root, RelOptInfo *rel,
 	 */
 	if (parse->hasAggs ||
 		parse->groupClause ||
+		parse->groupingSets ||
 		parse->havingQual ||
 		parse->distinctClause ||
 		parse->sortClause ||
@@ -2104,7 +2105,7 @@ subquery_push_qual(Query *subquery, RangeTblEntry *rte, Index rti, Node *qual)
 		 * subquery uses grouping or aggregation, put it in HAVING (since the
 		 * qual really refers to the group-result rows).
 		 */
-		if (subquery->hasAggs || subquery->groupClause || subquery->havingQual)
+		if (subquery->hasAggs || subquery->groupClause || subquery->groupingSets || subquery->havingQual)
 			subquery->havingQual = make_and_qual(subquery->havingQual, qual);
 		else
 			subquery->jointree->quals =
diff --git a/src/backend/optimizer/plan/analyzejoins.c b/src/backend/optimizer/plan/analyzejoins.c
index 773f8a4..e8b6671 100644
--- a/src/backend/optimizer/plan/analyzejoins.c
+++ b/src/backend/optimizer/plan/analyzejoins.c
@@ -580,6 +580,7 @@ query_supports_distinctness(Query *query)
 {
 	if (query->distinctClause != NIL ||
 		query->groupClause != NIL ||
+		query->groupingSets != NIL ||
 		query->hasAggs ||
 		query->havingQual ||
 		query->setOperations)
@@ -648,10 +649,10 @@ query_is_distinct_for(Query *query, List *colnos, List *opids)
 	}
 
 	/*
-	 * Similarly, GROUP BY guarantees uniqueness if all the grouped columns
-	 * appear in colnos and operator semantics match.
+	 * Similarly, GROUP BY without GROUPING SETS guarantees uniqueness if all
+	 * the grouped columns appear in colnos and operator semantics match.
 	 */
-	if (query->groupClause)
+	if (query->groupClause && !query->groupingSets)
 	{
 		foreach(l, query->groupClause)
 		{
@@ -667,6 +668,27 @@ query_is_distinct_for(Query *query, List *colnos, List *opids)
 		if (l == NULL)			/* had matches for all? */
 			return true;
 	}
+	else if (query->groupingSets)
+	{
+		/*
+		 * If we have grouping sets with expressions, we probably
+		 * don't have uniqueness and analysis would be hard. Punt.
+		 */
+		if (query->groupClause)
+			return false;
+
+		/*
+		 * If we have no groupClause (therefore no grouping expressions),
+		 * we might have one or many empty grouping sets. If there's just
+		 * one, then we're returning only one row and are certainly unique.
+		 * But otherwise, we know we're certainly not unique.
+		 */
+		if (list_length(query->groupingSets) == 1 &&
+			((GroupingSet *) linitial(query->groupingSets))->kind == GROUPING_SET_EMPTY)
+			return true;
+		else
+			return false;
+	}
 	else
 	{
 		/*
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index 4b641a2..1a47f0f 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -1015,6 +1015,7 @@ create_unique_plan(PlannerInfo *root, UniquePath *best_path)
 								 numGroupCols,
 								 groupColIdx,
 								 groupOperators,
+								 NIL,
 								 numGroups,
 								 subplan);
 	}
@@ -4265,6 +4266,7 @@ Agg *
 make_agg(PlannerInfo *root, List *tlist, List *qual,
 		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
 		 int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators,
+		 List *groupingSets,
 		 long numGroups,
 		 Plan *lefttree)
 {
@@ -4294,10 +4296,12 @@ make_agg(PlannerInfo *root, List *tlist, List *qual,
 	 * group otherwise.
 	 */
 	if (aggstrategy == AGG_PLAIN)
-		plan->plan_rows = 1;
+		plan->plan_rows = groupingSets ? list_length(groupingSets) : 1;
 	else
 		plan->plan_rows = numGroups;
 
+	node->groupingSets = groupingSets;
+
 	/*
 	 * We also need to account for the cost of evaluation of the qual (ie, the
 	 * HAVING clause) and the tlist.  Note that cost_qual_eval doesn't charge
diff --git a/src/backend/optimizer/plan/planagg.c b/src/backend/optimizer/plan/planagg.c
index 94ca92d..296b789 100644
--- a/src/backend/optimizer/plan/planagg.c
+++ b/src/backend/optimizer/plan/planagg.c
@@ -96,7 +96,7 @@ preprocess_minmax_aggregates(PlannerInfo *root, List *tlist)
 	 * performs assorted processing related to these features between calling
 	 * preprocess_minmax_aggregates and optimize_minmax_aggregates.)
 	 */
-	if (parse->groupClause || parse->hasWindowFuncs)
+	if (parse->groupClause || list_length(parse->groupingSets) > 1 || parse->hasWindowFuncs)
 		return;
 
 	/*
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index a509edd..2889a35 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -22,6 +22,7 @@
 #include "executor/nodeAgg.h"
 #include "miscadmin.h"
 #include "nodes/makefuncs.h"
+#include "nodes/nodeFuncs.h"
 #ifdef OPTIMIZER_DEBUG
 #include "nodes/print.h"
 #endif
@@ -37,6 +38,7 @@
 #include "optimizer/tlist.h"
 #include "parser/analyze.h"
 #include "parser/parsetree.h"
+#include "parser/parse_agg.h"
 #include "rewrite/rewriteManip.h"
 #include "utils/rel.h"
 #include "utils/selfuncs.h"
@@ -77,7 +79,8 @@ static double preprocess_limit(PlannerInfo *root,
 				 double tuple_fraction,
 				 int64 *offset_est, int64 *count_est);
 static bool limit_needed(Query *parse);
-static void preprocess_groupclause(PlannerInfo *root);
+static List *preprocess_groupclause(PlannerInfo *root, List *force);
+static List *extract_rollup_sets(List *groupingSets, List *sortclause, List **remainder);
 static void standard_qp_callback(PlannerInfo *root, void *extra);
 static bool choose_hashed_grouping(PlannerInfo *root,
 					   double tuple_fraction, double limit_tuples,
@@ -317,6 +320,8 @@ subquery_planner(PlannerGlobal *glob, Query *parse,
 	root->append_rel_list = NIL;
 	root->rowMarks = NIL;
 	root->hasInheritedTarget = false;
+	root->groupColIdx = NULL;
+	root->grouping_map = NULL;
 
 	root->hasRecursion = hasRecursion;
 	if (hasRecursion)
@@ -533,7 +538,8 @@ subquery_planner(PlannerGlobal *glob, Query *parse,
 
 		if (contain_agg_clause(havingclause) ||
 			contain_volatile_functions(havingclause) ||
-			contain_subplans(havingclause))
+			contain_subplans(havingclause) ||
+			parse->groupingSets)
 		{
 			/* keep it in HAVING */
 			newHaving = lappend(newHaving, havingclause);
@@ -1189,15 +1195,77 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		bool		use_hashed_grouping = false;
 		WindowFuncLists *wflists = NULL;
 		List	   *activeWindows = NIL;
+		int			maxref = 0;
+		int		   *refmap = NULL;
 
 		MemSet(&agg_costs, 0, sizeof(AggClauseCosts));
 
 		/* A recursive query should always have setOperations */
 		Assert(!root->hasRecursion);
 
-		/* Preprocess GROUP BY clause, if any */
-		if (parse->groupClause)
-			preprocess_groupclause(root);
+		/* Preprocess grouping sets, if any */
+		if (parse->groupingSets)
+			parse->groupingSets = expand_grouping_sets(parse->groupingSets, -1);
+
+		if (parse->groupingSets)
+		{
+			ListCell   *lc;
+			ListCell   *lc2;
+			int			ref = 0;
+			List	   *remaining_sets = NIL;
+			List	   *usable_sets = extract_rollup_sets(parse->groupingSets,
+														  parse->sortClause,
+														  &remaining_sets);
+
+			/*
+			 * TODO - if the grouping set list can't be handled as one rollup...
+			 */
+
+			if (remaining_sets != NIL)
+				elog(ERROR, "not implemented yet");
+
+			parse->groupingSets = usable_sets;
+
+			if (parse->groupClause)
+				preprocess_groupclause(root, linitial(parse->groupingSets));
+
+			/*
+			 * Now that we've pinned down an order for the groupClause for this
+			 * list of grouping sets, remap the entries in the grouping sets
+			 * from sortgrouprefs to plain indices into the groupClause.
+			 */
+
+			foreach(lc, parse->groupClause)
+			{
+				SortGroupClause *gc = lfirst(lc);
+				if (gc->tleSortGroupRef > maxref)
+					maxref = gc->tleSortGroupRef;
+			}
+
+			refmap = palloc0(sizeof(int) * (maxref + 1));
+
+			foreach(lc, parse->groupClause)
+			{
+				SortGroupClause *gc = lfirst(lc);
+				refmap[gc->tleSortGroupRef] = ++ref;
+			}
+
+			foreach(lc, usable_sets)
+			{
+				foreach(lc2, (List *) lfirst(lc))
+				{
+					Assert(refmap[lfirst_int(lc2)] > 0);
+					lfirst_int(lc2) = refmap[lfirst_int(lc2)] - 1;
+				}
+			}
+		}
+		else
+		{
+			/* Preprocess GROUP BY clause, if any */
+			if (parse->groupClause)
+				preprocess_groupclause(root, NIL);
+		}
+
 		numGroupCols = list_length(parse->groupClause);
 
 		/* Preprocess targetlist */
@@ -1260,6 +1328,9 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 			preprocess_minmax_aggregates(root, tlist);
 		}
 
+		if (refmap)
+			pfree(refmap);
+
 		/* Make tuple_fraction accessible to lower-level routines */
 		root->tuple_fraction = tuple_fraction;
 
@@ -1270,6 +1341,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		 * grouping/aggregation operations.
 		 */
 		if (parse->groupClause ||
+			parse->groupingSets ||
 			parse->distinctClause ||
 			parse->hasAggs ||
 			parse->hasWindowFuncs ||
@@ -1315,7 +1387,23 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 
 			groupExprs = get_sortgrouplist_exprs(parse->groupClause,
 												 parse->targetList);
-			dNumGroups = estimate_num_groups(root, groupExprs, path_rows);
+			if (parse->groupingSets)
+			{
+				ListCell   *lc;
+
+				dNumGroups = 0;
+
+				foreach(lc, parse->groupingSets)
+				{
+					dNumGroups += estimate_num_groups(root,
+													  groupExprs,
+													  path_rows,
+													  (List **) &(lfirst(lc)));
+				}
+			}
+			else
+				dNumGroups = estimate_num_groups(root, groupExprs, path_rows,
+												 NULL);
 
 			/*
 			 * In GROUP BY mode, an absolute LIMIT is relative to the number
@@ -1341,7 +1429,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 									   root->group_pathkeys))
 				tuple_fraction = 0.0;
 		}
-		else if (parse->hasAggs || root->hasHavingQual)
+		else if (parse->hasAggs || root->hasHavingQual || parse->groupingSets)
 		{
 			/*
 			 * Ungrouped aggregate will certainly want to read all the tuples,
@@ -1363,7 +1451,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 
 			distinctExprs = get_sortgrouplist_exprs(parse->distinctClause,
 													parse->targetList);
-			dNumGroups = estimate_num_groups(root, distinctExprs, path_rows);
+			dNumGroups = estimate_num_groups(root, distinctExprs, path_rows, NULL);
 
 			/*
 			 * Adjust tuple_fraction the same way as for GROUP BY, too.
@@ -1446,13 +1534,24 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		{
 			/*
 			 * If grouping, decide whether to use sorted or hashed grouping.
+			 * If grouping sets are present, we can currently do only sorted
+			 * grouping.
 			 */
-			use_hashed_grouping =
-				choose_hashed_grouping(root,
-									   tuple_fraction, limit_tuples,
-									   path_rows, path_width,
-									   cheapest_path, sorted_path,
-									   dNumGroups, &agg_costs);
+
+			if (parse->groupingSets)
+			{
+				use_hashed_grouping = false;
+			}
+			else
+			{
+				use_hashed_grouping =
+					choose_hashed_grouping(root,
+										   tuple_fraction, limit_tuples,
+										   path_rows, path_width,
+										   cheapest_path, sorted_path,
+										   dNumGroups, &agg_costs);
+			}
+
 			/* Also convert # groups to long int --- but 'ware overflow! */
 			numGroups = (long) Min(dNumGroups, (double) LONG_MAX);
 		}
@@ -1594,12 +1693,13 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 												numGroupCols,
 												groupColIdx,
 									extract_grouping_ops(parse->groupClause),
+												NIL,
 												numGroups,
 												result_plan);
 				/* Hashed aggregation produces randomly-ordered results */
 				current_pathkeys = NIL;
 			}
-			else if (parse->hasAggs)
+			else if (parse->hasAggs || (parse->groupingSets && parse->groupClause))
 			{
 				/* Plain aggregate plan --- sort if needed */
 				AggStrategy aggstrategy;
@@ -1625,7 +1725,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 				else
 				{
 					aggstrategy = AGG_PLAIN;
-					/* Result will be only one row anyway; no sort order */
+					/* Result will have no sort order */
 					current_pathkeys = NIL;
 				}
 
@@ -1637,6 +1737,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 												numGroupCols,
 												groupColIdx,
 									extract_grouping_ops(parse->groupClause),
+												parse->groupingSets,
 												numGroups,
 												result_plan);
 			}
@@ -1669,27 +1770,66 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 												  result_plan);
 				/* The Group node won't change sort ordering */
 			}
-			else if (root->hasHavingQual)
+			else if (root->hasHavingQual || parse->groupingSets)
 			{
+				int		nrows = list_length(parse->groupingSets);
+
 				/*
-				 * No aggregates, and no GROUP BY, but we have a HAVING qual.
+				 * No aggregates, and no GROUP BY, but we have a HAVING qual or
+				 * grouping sets (which by elimination of cases above must
+				 * consist solely of empty grouping sets, since otherwise
+				 * groupClause will be non-empty).
+				 *
 				 * This is a degenerate case in which we are supposed to emit
-				 * either 0 or 1 row depending on whether HAVING succeeds.
-				 * Furthermore, there cannot be any variables in either HAVING
-				 * or the targetlist, so we actually do not need the FROM
-				 * table at all!  We can just throw away the plan-so-far and
-				 * generate a Result node.  This is a sufficiently unusual
-				 * corner case that it's not worth contorting the structure of
-				 * this routine to avoid having to generate the plan in the
-				 * first place.
+				 * either 0 or 1 row for each grouping set depending on whether
+				 * HAVING succeeds.  Furthermore, there cannot be any variables
+				 * in either HAVING or the targetlist, so we actually do not
+				 * need the FROM table at all!  We can just throw away the
+				 * plan-so-far and generate a Result node.  This is a
+				 * sufficiently unusual corner case that it's not worth
+				 * contorting the structure of this routine to avoid having to
+				 * generate the plan in the first place.
 				 */
 				result_plan = (Plan *) make_result(root,
 												   tlist,
 												   parse->havingQual,
 												   NULL);
+
+				/*
+				 * Doesn't seem worthwhile writing code to cons up a
+				 * generate_series or a values scan to emit multiple rows.
+				 * Instead just clone the result in an Append.
+				 */
+				if (nrows > 1)
+				{
+					List   *plans = list_make1(result_plan);
+
+					while (--nrows > 0)
+						plans = lappend(plans, copyObject(result_plan));
+
+					result_plan = (Plan *) make_append(plans, tlist);
+				}
 			}
 		}						/* end of non-minmax-aggregate case */
 
+		/* Record grouping_map based on final groupColIdx, for setrefs */
+
+		if (parse->groupingSets)
+		{
+			AttrNumber *grouping_map = palloc0(sizeof(AttrNumber) * (maxref + 1));
+			ListCell   *lc;
+			int			i = 0;
+
+			foreach(lc, parse->groupClause)
+			{
+				SortGroupClause *gc = lfirst(lc);
+				grouping_map[gc->tleSortGroupRef] = groupColIdx[i++];
+			}
+
+			root->groupColIdx = groupColIdx;
+			root->grouping_map = grouping_map;
+		}
+
 		/*
 		 * Since each window function could require a different sort order, we
 		 * stack up a WindowAgg node for each window, with sort steps between
@@ -1852,7 +1992,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		 * result was already mostly unique).  If not, use the number of
 		 * distinct-groups calculated previously.
 		 */
-		if (parse->groupClause || root->hasHavingQual || parse->hasAggs)
+		if (parse->groupClause || parse->groupingSets || root->hasHavingQual || parse->hasAggs)
 			dNumDistinctRows = result_plan->plan_rows;
 		else
 			dNumDistinctRows = dNumGroups;
@@ -1893,6 +2033,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 								 extract_grouping_cols(parse->distinctClause,
 													result_plan->targetlist),
 								 extract_grouping_ops(parse->distinctClause),
+											NIL,
 											numDistinctRows,
 											result_plan);
 			/* Hashed aggregation produces randomly-ordered results */
@@ -2511,6 +2652,7 @@ limit_needed(Query *parse)
 }
 
 
+
 /*
  * preprocess_groupclause - do preparatory work on GROUP BY clause
  *
@@ -2527,18 +2669,32 @@ limit_needed(Query *parse)
  * Note: we need no comparable processing of the distinctClause because
  * the parser already enforced that that matches ORDER BY.
  */
-static void
-preprocess_groupclause(PlannerInfo *root)
+static List *
+preprocess_groupclause(PlannerInfo *root, List *force)
 {
 	Query	   *parse = root->parse;
-	List	   *new_groupclause;
+	List	   *new_groupclause = NIL;
 	bool		partial_match;
 	ListCell   *sl;
 	ListCell   *gl;
 
+	/* For grouping sets, we may need to force the ordering */
+	if (force)
+	{
+		foreach(sl, force)
+		{
+			Index ref = lfirst_int(sl);
+			SortGroupClause *cl = get_sortgroupref_clause(ref, parse->groupClause);
+
+			new_groupclause = lappend(new_groupclause, cl);
+		}
+
+		return new_groupclause;
+	}
+
 	/* If no ORDER BY, nothing useful to do here */
 	if (parse->sortClause == NIL)
-		return;
+		return parse->groupClause;
 
 	/*
 	 * Scan the ORDER BY clause and construct a list of matching GROUP BY
@@ -2546,7 +2702,6 @@ preprocess_groupclause(PlannerInfo *root)
 	 *
 	 * This code assumes that the sortClause contains no duplicate items.
 	 */
-	new_groupclause = NIL;
 	foreach(sl, parse->sortClause)
 	{
 		SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
@@ -2570,7 +2725,7 @@ preprocess_groupclause(PlannerInfo *root)
 
 	/* If no match at all, no point in reordering GROUP BY */
 	if (new_groupclause == NIL)
-		return;
+		return parse->groupClause;
 
 	/*
 	 * Add any remaining GROUP BY items to the new list, but only if we were
@@ -2587,15 +2742,113 @@ preprocess_groupclause(PlannerInfo *root)
 		if (list_member_ptr(new_groupclause, gc))
 			continue;			/* it matched an ORDER BY item */
 		if (partial_match)
-			return;				/* give up, no common sort possible */
+			return parse->groupClause;	/* give up, no common sort possible */
 		if (!OidIsValid(gc->sortop))
-			return;				/* give up, GROUP BY can't be sorted */
+			return parse->groupClause;	/* give up, GROUP BY can't be sorted */
 		new_groupclause = lappend(new_groupclause, gc);
 	}
 
 	/* Success --- install the rearranged GROUP BY list */
 	Assert(list_length(parse->groupClause) == list_length(new_groupclause));
-	parse->groupClause = new_groupclause;
+	return new_groupclause;
+}
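To summarize the control flow above: preprocess_groupclause now either emits the GROUP BY items in an explicitly forced order (the `force` list of sortgroupref indexes, used by the grouping-sets code) or, as before, tries to reorder GROUP BY to match ORDER BY. A simplified Python model of that logic, operating on plain sortgroupref integers rather than SortGroupClause nodes and ignoring sort-operator validity:

```python
def preprocess_groupclause_model(group_refs, sort_refs, force=None):
    """Sketch of preprocess_groupclause: return group_refs reordered to
    share a prefix with sort_refs when possible, or exactly in `force`
    order when the grouping-sets code demands a specific ordering."""
    if force is not None:
        return list(force)
    if not sort_refs:
        return group_refs           # no ORDER BY, nothing useful to do
    new = []
    partial = False
    for ref in sort_refs:           # collect the matching prefix of ORDER BY
        if ref in group_refs:
            new.append(ref)
        else:
            partial = True          # ORDER BY has an item GROUP BY lacks
            break
    if not new:
        return group_refs           # no match at all; don't reorder
    for ref in group_refs:          # append the remaining GROUP BY items
        if ref in new:
            continue
        if partial:
            return group_refs       # give up: no common sort order possible
        new.append(ref)
    return new
```

In this model, `preprocess_groupclause_model([3, 1, 2], [1, 2])` reorders to `[1, 2, 3]`, while a partial ORDER BY match such as `[1, 5]` leaves the clause untouched.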
+
+
+/*
+ * Extract a list of grouping sets that can be implemented using a single
+ * rollup-type aggregate pass. The order of elements in each returned set is
+ * modified to ensure proper prefix relationships; the sets are returned in
+ * decreasing order of size. (The input must also be in descending order of
+ * size.)
+ *
+ * If we're passed in a sortclause, we follow its order of columns to the
+ * extent possible, to minimize the chance that we add unnecessary sorts.
+ *
+ * Sets that can't be accommodated within a rollup that includes the first
+ * (and therefore largest) grouping set in the input are added to the
+ * remainder list.
+ */
+
+static List *
+extract_rollup_sets(List *groupingSets, List *sortclause, List **remainder)
+{
+	ListCell   *lc;
+	ListCell   *lc2;
+	List	   *previous = linitial(groupingSets);
+	List	   *tmp_result = list_make1(previous);
+	List	   *result = NIL;
+
+	for_each_cell(lc, lnext(list_head(groupingSets)))
+	{
+		List   *candidate = lfirst(lc);
+		bool	ok = true;
+
+		foreach(lc2, candidate)
+		{
+			int ref = lfirst_int(lc2);
+			if (!list_member_int(previous, ref))
+			{
+				ok = false;
+				break;
+			}
+		}
+
+		if (ok)
+		{
+			tmp_result = lcons(candidate, tmp_result);
+			previous = candidate;
+		}
+		else
+			*remainder = lappend(*remainder, candidate);
+	}
+
+	/*
+	 * Reorder the list elements so that shorter sets are strict
+	 * prefixes of longer ones, and if we ever have a choice, try
+	 * to follow the sortclause if there is one.  (We're trying
+	 * here to ensure that GROUPING SETS ((a,b),(b)) ORDER BY b,a
+	 * gets implemented in one pass.)
+	 */
+
+	previous = NIL;
+
+	foreach(lc, tmp_result)
+	{
+		List   *candidate = lfirst(lc);
+		List   *new_elems = list_difference_int(candidate, previous);
+
+		if (list_length(new_elems) > 0)
+		{
+			while (list_length(sortclause) > list_length(previous))
+			{
+				SortGroupClause *sc = list_nth(sortclause, list_length(previous));
+				int ref = sc->tleSortGroupRef;
+				if (list_member_int(new_elems, ref))
+				{
+					previous = lappend_int(previous, ref);
+					new_elems = list_delete_int(new_elems, ref);
+				}
+				else
+				{
+					sortclause = NIL;
+					break;
+				}
+			}
+
+			foreach(lc2, new_elems)
+			{
+				previous = lappend_int(previous, lfirst_int(lc2));
+			}
+		}
+
+		result = lcons(list_copy(previous), result);
+		list_free(new_elems);
+	}
+
+	list_free(previous);
+	list_free(tmp_result);
+
+	return result;
 }
 
 /*
@@ -3043,7 +3296,7 @@ make_subplanTargetList(PlannerInfo *root,
 	 * If we're not grouping or aggregating, there's nothing to do here;
 	 * query_planner should receive the unmodified target list.
 	 */
-	if (!parse->hasAggs && !parse->groupClause && !root->hasHavingQual &&
+	if (!parse->hasAggs && !parse->groupClause && !parse->groupingSets && !root->hasHavingQual &&
 		!parse->hasWindowFuncs)
 	{
 		*need_tlist_eval = true;
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index 9ddc8ad..b1016c6 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -68,6 +68,12 @@ typedef struct
 	int			rtoffset;
 } fix_upper_expr_context;
 
+typedef struct
+{
+	PlannerInfo *root;
+	Bitmapset   *groupedcols;
+} set_group_vars_context;
+
 /*
  * Check if a Const node is a regclass value.  We accept plain OID too,
  * since a regclass Const will get folded to that type if it's an argument
@@ -134,6 +140,8 @@ static List *set_returning_clause_references(PlannerInfo *root,
 static bool fix_opfuncids_walker(Node *node, void *context);
 static bool extract_query_dependencies_walker(Node *node,
 								  PlannerInfo *context);
+static void set_group_vars(PlannerInfo *root, Agg *agg);
+static Node *set_group_vars_mutator(Node *node, set_group_vars_context *context);
 
 
 /*****************************************************************************
@@ -647,6 +655,9 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
 			}
 			break;
 		case T_Agg:
+			set_upper_references(root, plan, rtoffset);
+			set_group_vars(root, (Agg *) plan);
+			break;
 		case T_Group:
 			set_upper_references(root, plan, rtoffset);
 			break;
@@ -1122,6 +1133,31 @@ fix_expr_common(PlannerInfo *root, Node *node)
 				lappend_oid(root->glob->relationOids,
 							DatumGetObjectId(con->constvalue));
 	}
+	else if (IsA(node, Grouping))
+	{
+		Grouping   *g = (Grouping *) node;
+		AttrNumber *refmap = root->grouping_map;
+
+		/* If there are no grouping sets, we don't need this. */
+
+		Assert(refmap || g->cols == NIL);
+
+		if (refmap)
+		{
+			ListCell   *lc;
+			List	   *cols = NIL;
+
+			foreach(lc, g->refs)
+			{
+				cols = lappend_int(cols, refmap[lfirst_int(lc)]);
+			}
+
+			Assert(!g->cols || equal(cols, g->cols));
+
+			if (!g->cols)
+				g->cols = cols;
+		}
+	}
 }
 
 /*
@@ -1249,6 +1285,67 @@ fix_scan_expr_walker(Node *node, fix_scan_expr_context *context)
 								  (void *) context);
 }
 
+
+/*
+ * set_group_vars
+ *    Modify any Var references in the target list of a non-trivial Agg
+ *    node (i.e. one that contains grouping sets) to use GroupedVar instead,
+ *    which will conditionally replace them with nulls at runtime.
+ */
+static void
+set_group_vars(PlannerInfo *root, Agg *agg)
+{
+	set_group_vars_context context;
+	int i;
+	Bitmapset *cols = NULL;
+
+	if (!agg->groupingSets)
+		return;
+
+	context.root = root;
+
+	for (i = 0; i < agg->numCols; ++i)
+		cols = bms_add_member(cols, agg->grpColIdx[i]);
+
+	context.groupedcols = cols;
+
+	agg->plan.targetlist = (List *) set_group_vars_mutator((Node *) agg->plan.targetlist,
+														   &context);
+	agg->plan.qual = (List *) set_group_vars_mutator((Node *) agg->plan.qual,
+													 &context);
+}
+
+static Node *
+set_group_vars_mutator(Node *node, set_group_vars_context *context)
+{
+	if (node == NULL)
+		return NULL;
+	if (IsA(node, Var))
+	{
+		Var *var = (Var *) node;
+
+		if (var->varno == OUTER_VAR
+			&& bms_is_member(var->varattno, context->groupedcols))
+		{
+			var = copyVar(var);
+			var->xpr.type = T_GroupedVar;
+		}
+
+		return (Node *) var;
+	}
+	else if (IsA(node, Aggref) || IsA(node, Grouping))
+	{
+		/*
+		 * don't recurse into Aggrefs, since they see the values prior
+		 * to grouping.
+		 */
+		return node;
+	}
+	return expression_tree_mutator(node, set_group_vars_mutator,
+								   (void *) context);
+}
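The effect being set up here (GroupedVar evaluation itself happens at runtime in the executor, not in setrefs.c) is that a grouped column read through a GroupedVar comes back as NULL whenever the grouping set currently being emitted does not include that column. A toy model of that behavior:

```python
def project_row(tlist_cols, grouped_cols, current_set, row):
    """Toy model of GroupedVar: a grouped column that is absent from the
    grouping set currently being emitted reads as NULL (None here);
    everything else passes through unchanged."""
    return {col: (None if col in grouped_cols and col not in current_set
                  else row[col])
            for col in tlist_cols}
```

So when the Agg node is emitting the grouping set (a) out of GROUPING SETS ((a,b),(a)), column b in the output row reads as NULL even though the input row had a value for it.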
+
+
 /*
  * set_join_references
  *	  Modify the target list and quals of a join node to reference its
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index 3e7dc85..e0a2ca7 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -336,6 +336,48 @@ replace_outer_agg(PlannerInfo *root, Aggref *agg)
 }
 
 /*
+ * Generate a Param node to replace the given Grouping expression
+ * which is expected to have agglevelsup > 0 (ie, it is not local).
+ */
+static Param *
+replace_outer_grouping(PlannerInfo *root, Grouping *grp)
+{
+	Param	   *retval;
+	PlannerParamItem *pitem;
+	Index		levelsup;
+
+	Assert(grp->agglevelsup > 0 && grp->agglevelsup < root->query_level);
+
+	/* Find the query level the Grouping belongs to */
+	for (levelsup = grp->agglevelsup; levelsup > 0; levelsup--)
+		root = root->parent_root;
+
+	/*
+	 * It does not seem worthwhile to try to match duplicate outer GROUPING
+	 * expressions.  Just make a new slot every time.
+	 */
+	grp = (Grouping *) copyObject(grp);
+	IncrementVarSublevelsUp((Node *) grp, -((int) grp->agglevelsup), 0);
+	Assert(grp->agglevelsup == 0);
+
+	pitem = makeNode(PlannerParamItem);
+	pitem->item = (Node *) grp;
+	pitem->paramId = root->glob->nParamExec++;
+
+	root->plan_params = lappend(root->plan_params, pitem);
+
+	retval = makeNode(Param);
+	retval->paramkind = PARAM_EXEC;
+	retval->paramid = pitem->paramId;
+	retval->paramtype = exprType((Node *) grp);
+	retval->paramtypmod = -1;
+	retval->paramcollid = InvalidOid;
+	retval->location = grp->location;
+
+	return retval;
+}
+
+/*
  * Generate a new Param node that will not conflict with any other.
  *
  * This is used to create Params representing subplan outputs.
@@ -1490,13 +1532,14 @@ simplify_EXISTS_query(Query *query)
 {
 	/*
 	 * We don't try to simplify at all if the query uses set operations,
-	 * aggregates, modifying CTEs, HAVING, LIMIT/OFFSET, or FOR UPDATE/SHARE;
-	 * none of these seem likely in normal usage and their possible effects
-	 * are complex.
+	 * aggregates, grouping sets, modifying CTEs, HAVING, LIMIT/OFFSET, or FOR
+	 * UPDATE/SHARE; none of these seem likely in normal usage and their
+	 * possible effects are complex.
 	 */
 	if (query->commandType != CMD_SELECT ||
 		query->setOperations ||
 		query->hasAggs ||
+		query->groupingSets ||
 		query->hasWindowFuncs ||
 		query->hasModifyingCTE ||
 		query->havingQual ||
@@ -1813,6 +1856,11 @@ replace_correlation_vars_mutator(Node *node, PlannerInfo *root)
 		if (((Aggref *) node)->agglevelsup > 0)
 			return (Node *) replace_outer_agg(root, (Aggref *) node);
 	}
+	if (IsA(node, Grouping))
+	{
+		if (((Grouping *) node)->agglevelsup > 0)
+			return (Node *) replace_outer_grouping(root, (Grouping *) node);
+	}
 	return expression_tree_mutator(node,
 								   replace_correlation_vars_mutator,
 								   (void *) root);
diff --git a/src/backend/optimizer/prep/prepjointree.c b/src/backend/optimizer/prep/prepjointree.c
index 9cb1378..cb8aeb6 100644
--- a/src/backend/optimizer/prep/prepjointree.c
+++ b/src/backend/optimizer/prep/prepjointree.c
@@ -1297,6 +1297,7 @@ is_simple_subquery(Query *subquery, RangeTblEntry *rte,
 	if (subquery->hasAggs ||
 		subquery->hasWindowFuncs ||
 		subquery->groupClause ||
+		subquery->groupingSets ||
 		subquery->havingQual ||
 		subquery->sortClause ||
 		subquery->distinctClause ||
diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c
index 0410fdd..3c71d7f 100644
--- a/src/backend/optimizer/prep/prepunion.c
+++ b/src/backend/optimizer/prep/prepunion.c
@@ -268,13 +268,15 @@ recurse_set_operations(Node *setOp, PlannerInfo *root,
 		 */
 		if (pNumGroups)
 		{
-			if (subquery->groupClause || subquery->distinctClause ||
+			if (subquery->groupClause || subquery->groupingSets ||
+				subquery->distinctClause ||
 				subroot->hasHavingQual || subquery->hasAggs)
 				*pNumGroups = subplan->plan_rows;
 			else
 				*pNumGroups = estimate_num_groups(subroot,
 								get_tlist_exprs(subquery->targetList, false),
-												  subplan->plan_rows);
+												  subplan->plan_rows,
+												  NULL);
 		}
 
 		/*
@@ -771,6 +773,7 @@ make_union_unique(SetOperationStmt *op, Plan *plan,
 								 extract_grouping_cols(groupList,
 													   plan->targetlist),
 								 extract_grouping_ops(groupList),
+								 NIL,
 								 numGroups,
 								 plan);
 		/* Hashed aggregation produces randomly-ordered results */
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 19b5cf7..1152195 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -4294,6 +4294,7 @@ inline_function(Oid funcid, Oid result_type, Oid result_collid,
 		querytree->jointree->fromlist ||
 		querytree->jointree->quals ||
 		querytree->groupClause ||
+		querytree->groupingSets ||
 		querytree->havingQual ||
 		querytree->windowClause ||
 		querytree->distinctClause ||
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index 319e8b2..a7bbacf 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1338,7 +1338,7 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
 	}
 
 	/* Estimate number of output rows */
-	pathnode->path.rows = estimate_num_groups(root, uniq_exprs, rel->rows);
+	pathnode->path.rows = estimate_num_groups(root, uniq_exprs, rel->rows, NULL);
 	numCols = list_length(uniq_exprs);
 
 	if (all_btree)
diff --git a/src/backend/optimizer/util/tlist.c b/src/backend/optimizer/util/tlist.c
index b5c6a44..efed20a 100644
--- a/src/backend/optimizer/util/tlist.c
+++ b/src/backend/optimizer/util/tlist.c
@@ -395,6 +395,28 @@ get_sortgrouplist_exprs(List *sgClauses, List *targetList)
  *****************************************************************************/
 
 /*
+ * get_sortgroupref_clause
+ *		Find the SortGroupClause matching the given SortGroupRef index,
+ *		and return it.
+ */
+SortGroupClause *
+get_sortgroupref_clause(Index sortref, List *clauses)
+{
+	ListCell   *l;
+
+	foreach(l, clauses)
+	{
+		SortGroupClause *cl = (SortGroupClause *) lfirst(l);
+
+		if (cl->tleSortGroupRef == sortref)
+			return cl;
+	}
+
+	elog(ERROR, "ORDER/GROUP BY expression not found in list");
+	return NULL;				/* keep compiler quiet */
+}
+
+/*
  * extract_grouping_ops - make an array of the equality operator OIDs
  *		for a SortGroupClause list
  */
diff --git a/src/backend/optimizer/util/var.c b/src/backend/optimizer/util/var.c
index d4f46b8..c6faf51 100644
--- a/src/backend/optimizer/util/var.c
+++ b/src/backend/optimizer/util/var.c
@@ -564,6 +564,30 @@ pull_var_clause_walker(Node *node, pull_var_clause_context *context)
 				break;
 		}
 	}
+	else if (IsA(node, Grouping))
+	{
+		if (((Grouping *) node)->agglevelsup != 0)
+			elog(ERROR, "Upper-level GROUPING found where not expected");
+		switch (context->aggbehavior)
+		{
+			case PVC_REJECT_AGGREGATES:
+				elog(ERROR, "GROUPING found where not expected");
+				break;
+			case PVC_INCLUDE_AGGREGATES:
+				context->varlist = lappend(context->varlist, node);
+				/* we do NOT descend into the contained expression */
+				return false;
+			case PVC_RECURSE_AGGREGATES:
+				/*
+				 * we do NOT descend into the contained expression,
+				 * even if the caller asked for it, because we never
+				 * actually evaluate it - the result is driven entirely
+				 * off the associated GROUP BY clause, so we never need
+				 * to extract the actual Vars here.
+				 */
+				return false;
+		}
+	}
 	else if (IsA(node, PlaceHolderVar))
 	{
 		if (((PlaceHolderVar *) node)->phlevelsup != 0)
diff --git a/src/backend/parser/analyze.c b/src/backend/parser/analyze.c
index fb6c44c..96ef36c 100644
--- a/src/backend/parser/analyze.c
+++ b/src/backend/parser/analyze.c
@@ -968,6 +968,7 @@ transformSelectStmt(ParseState *pstate, SelectStmt *stmt)
 
 	qry->groupClause = transformGroupClause(pstate,
 											stmt->groupClause,
+											&qry->groupingSets,
 											&qry->targetList,
 											qry->sortClause,
 											EXPR_KIND_GROUP_BY,
@@ -1014,7 +1015,7 @@ transformSelectStmt(ParseState *pstate, SelectStmt *stmt)
 	qry->hasSubLinks = pstate->p_hasSubLinks;
 	qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
 	qry->hasAggs = pstate->p_hasAggs;
-	if (pstate->p_hasAggs || qry->groupClause || qry->havingQual)
+	if (pstate->p_hasAggs || qry->groupClause || qry->groupingSets || qry->havingQual)
 		parseCheckAggregates(pstate, qry);
 
 	foreach(l, stmt->lockingClause)
@@ -1474,7 +1475,7 @@ transformSetOperationStmt(ParseState *pstate, SelectStmt *stmt)
 	qry->hasSubLinks = pstate->p_hasSubLinks;
 	qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
 	qry->hasAggs = pstate->p_hasAggs;
-	if (pstate->p_hasAggs || qry->groupClause || qry->havingQual)
+	if (pstate->p_hasAggs || qry->groupClause || qry->groupingSets || qry->havingQual)
 		parseCheckAggregates(pstate, qry);
 
 	foreach(l, lockingClause)
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 77d2f29..2aafa16 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -365,6 +365,10 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				create_generic_options alter_generic_options
 				relation_expr_list dostmt_opt_list
 
+%type <list>	group_by_list
+%type <node>	group_by_item empty_grouping_set rollup_clause cube_clause
+%type <node>	grouping_sets_clause
+
 %type <list>	opt_fdw_options fdw_options
 %type <defelt>	fdw_option
 
@@ -430,7 +434,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <list>	ExclusionConstraintList ExclusionConstraintElem
 %type <list>	func_arg_list
 %type <node>	func_arg_expr
-%type <list>	row type_list array_expr_list
+%type <list>	row explicit_row implicit_row type_list array_expr_list
 %type <node>	case_expr case_arg when_clause case_default
 %type <list>	when_clause_list
 %type <ival>	sub_type
@@ -552,7 +556,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	CLUSTER COALESCE COLLATE COLLATION COLUMN COMMENT COMMENTS COMMIT
 	COMMITTED CONCURRENTLY CONFIGURATION CONNECTION CONSTRAINT CONSTRAINTS
 	CONTENT_P CONTINUE_P CONVERSION_P COPY COST CREATE
-	CROSS CSV CURRENT_P
+	CROSS CSV CUBE CURRENT_P
 	CURRENT_CATALOG CURRENT_DATE CURRENT_ROLE CURRENT_SCHEMA
 	CURRENT_TIME CURRENT_TIMESTAMP CURRENT_USER CURSOR CYCLE
 
@@ -567,7 +571,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	FALSE_P FAMILY FETCH FILTER FIRST_P FLOAT_P FOLLOWING FOR
 	FORCE FOREIGN FORWARD FREEZE FROM FULL FUNCTION FUNCTIONS
 
-	GLOBAL GRANT GRANTED GREATEST GROUP_P
+	GLOBAL GRANT GRANTED GREATEST GROUP_P GROUPING
 
 	HANDLER HAVING HEADER_P HOLD HOUR_P
 
@@ -601,11 +605,11 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 	RANGE READ REAL REASSIGN RECHECK RECURSIVE REF REFERENCES REFRESH REINDEX
 	RELATIVE_P RELEASE RENAME REPEATABLE REPLACE REPLICA
-	RESET RESTART RESTRICT RETURNING RETURNS REVOKE RIGHT ROLE ROLLBACK
+	RESET RESTART RESTRICT RETURNING RETURNS REVOKE RIGHT ROLE ROLLBACK ROLLUP
 	ROW ROWS RULE
 
 	SAVEPOINT SCHEMA SCROLL SEARCH SECOND_P SECURITY SELECT SEQUENCE SEQUENCES
-	SERIALIZABLE SERVER SESSION SESSION_USER SET SETOF SHARE
+	SERIALIZABLE SERVER SESSION SESSION_USER SET SETS SETOF SHARE
 	SHOW SIMILAR SIMPLE SMALLINT SNAPSHOT SOME STABLE STANDALONE_P START
 	STATEMENT STATISTICS STDIN STDOUT STORAGE STRICT_P STRIP_P SUBSTRING
 	SYMMETRIC SYSID SYSTEM_P
@@ -9985,11 +9989,73 @@ first_or_next: FIRST_P								{ $$ = 0; }
 		;
 
 
+/*
+ * This syntax for group_clause tries to follow the spec quite closely.
+ * However, the spec allows only column references, not expressions,
+ * which introduces an ambiguity between implicit row constructors
+ * (a,b) and lists of column references.
+ *
+ * We handle this by using the a_expr production for what the spec calls
+ * <ordinary grouping set>, which in the spec represents either one column
+ * reference or a parenthesized list of column references. Then, we check the
+ * top node of the a_expr to see if it's an implicit RowExpr, and if so, just
+ * grab and use the list, discarding the node.  (This is done in parse
+ * analysis, not here.)
+ *
+ * (We abuse the row_format field of RowExpr to distinguish implicit and
+ * explicit row constructors; it's debatable whether anyone sanely wants to use
+ * them in a group clause, but if they have a reason to, we make it possible.)
+ *
+ * Each item in the group_clause list is either an expression tree or a
+ * GroupingSet node of some type.
+ */
+
 group_clause:
-			GROUP_P BY expr_list					{ $$ = $3; }
+			GROUP_P BY group_by_list				{ $$ = $3; }
 			| /*EMPTY*/								{ $$ = NIL; }
 		;
 
+group_by_list:
+			group_by_item							{ $$ = list_make1($1); }
+			| group_by_list ',' group_by_item		{ $$ = lappend($1, $3); }
+		;
+
+group_by_item:
+			a_expr									{ $$ = $1; }
+			| empty_grouping_set					{ $$ = $1; }
+			| cube_clause							{ $$ = $1; }
+			| rollup_clause							{ $$ = $1; }
+			| grouping_sets_clause					{ $$ = $1; }
+		;
+
+empty_grouping_set:
+			'(' ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_EMPTY, NIL, @1);
+				}
+		;
+
+rollup_clause:
+			ROLLUP '(' expr_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_ROLLUP, $3, @1);
+				}
+		;
+
+cube_clause:
+			CUBE '(' expr_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_CUBE, $3, @1);
+				}
+		;
+
+grouping_sets_clause:
+			GROUPING SETS '(' group_by_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_SETS, $4, @1);
+				}
+		;
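The grammar above only records ROLLUP, CUBE and GROUPING SETS as GroupingSet nodes; flattening them into individual grouping sets happens later in parse analysis (see the expand_groupingset_node declaration in parse_agg.c). The standard expansions can be sketched as follows — note the ordering of CUBE's subsets shown here is one plausible largest-first choice, not necessarily the one the patch produces:

```python
from itertools import combinations

def expand_rollup(cols):
    # ROLLUP(a, b, c) -> (a, b, c), (a, b), (a), ()
    return [list(cols[:n]) for n in range(len(cols), -1, -1)]

def expand_cube(cols):
    # CUBE(a, b) -> every subset of the columns: (a, b), (a), (b), ()
    return [list(c)
            for n in range(len(cols), -1, -1)
            for c in combinations(cols, n)]
```

So ROLLUP over n columns yields n+1 grouping sets, while CUBE yields 2^n, which is why only rollup-shaped combinations can be handled in a single sorted pass in this phase of the patch.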
+
 having_clause:
 			HAVING a_expr							{ $$ = $2; }
 			| /*EMPTY*/								{ $$ = NULL; }
@@ -11568,15 +11634,33 @@ c_expr:		columnref								{ $$ = $1; }
 					n->location = @1;
 					$$ = (Node *)n;
 				}
-			| row
+			| explicit_row
 				{
 					RowExpr *r = makeNode(RowExpr);
 					r->args = $1;
 					r->row_typeid = InvalidOid;	/* not analyzed yet */
 					r->colnames = NIL;	/* to be filled in during analysis */
+					r->row_format = COERCE_EXPLICIT_CALL; /* abuse */
 					r->location = @1;
 					$$ = (Node *)r;
 				}
+			| implicit_row
+				{
+					RowExpr *r = makeNode(RowExpr);
+					r->args = $1;
+					r->row_typeid = InvalidOid;	/* not analyzed yet */
+					r->colnames = NIL;	/* to be filled in during analysis */
+					r->row_format = COERCE_IMPLICIT_CAST; /* abuse */
+					r->location = @1;
+					$$ = (Node *)r;
+				}
+			| GROUPING '(' expr_list ')'
+				{
+					Grouping *g = makeNode(Grouping);
+					g->args = $3;
+					g->location = @1;
+					$$ = (Node *)g;
+				}
 		;
 
 func_application: func_name '(' ')'
@@ -12326,6 +12410,13 @@ row:		ROW '(' expr_list ')'					{ $$ = $3; }
 			| '(' expr_list ',' a_expr ')'			{ $$ = lappend($2, $4); }
 		;
 
+explicit_row:	ROW '(' expr_list ')'				{ $$ = $3; }
+			| ROW '(' ')'							{ $$ = NIL; }
+		;
+
+implicit_row:	'(' expr_list ',' a_expr ')'		{ $$ = lappend($2, $4); }
+		;
+
 sub_type:	ANY										{ $$ = ANY_SUBLINK; }
 			| SOME									{ $$ = ANY_SUBLINK; }
 			| ALL									{ $$ = ALL_SUBLINK; }
@@ -13226,6 +13317,7 @@ unreserved_keyword:
 			| SERVER
 			| SESSION
 			| SET
+			| SETS
 			| SHARE
 			| SHOW
 			| SIMPLE
@@ -13302,12 +13394,14 @@ col_name_keyword:
 			| CHAR_P
 			| CHARACTER
 			| COALESCE
+			| CUBE
 			| DEC
 			| DECIMAL_P
 			| EXISTS
 			| EXTRACT
 			| FLOAT_P
 			| GREATEST
+			| GROUPING
 			| INOUT
 			| INT_P
 			| INTEGER
@@ -13323,6 +13417,7 @@ col_name_keyword:
 			| POSITION
 			| PRECISION
 			| REAL
+			| ROLLUP
 			| ROW
 			| SETOF
 			| SMALLINT
diff --git a/src/backend/parser/parse_agg.c b/src/backend/parser/parse_agg.c
index c984b7d..02f849b 100644
--- a/src/backend/parser/parse_agg.c
+++ b/src/backend/parser/parse_agg.c
@@ -42,7 +42,9 @@ typedef struct
 {
 	ParseState *pstate;
 	Query	   *qry;
+	PlannerInfo *root;
 	List	   *groupClauses;
+	List	   *groupClauseCommonVars;
 	bool		have_non_var_grouping;
 	List	  **func_grouped_rels;
 	int			sublevels_up;
@@ -56,11 +58,18 @@ static int check_agg_arguments(ParseState *pstate,
 static bool check_agg_arguments_walker(Node *node,
 						   check_agg_arguments_context *context);
 static void check_ungrouped_columns(Node *node, ParseState *pstate, Query *qry,
-						List *groupClauses, bool have_non_var_grouping,
+						List *groupClauses, List *groupClauseVars,
+						bool have_non_var_grouping,
 						List **func_grouped_rels);
 static bool check_ungrouped_columns_walker(Node *node,
 							   check_ungrouped_columns_context *context);
-
+static void finalize_grouping_exprs(Node *node, ParseState *pstate, Query *qry,
+									List *groupClauses, PlannerInfo *root,
+									bool have_non_var_grouping);
+static bool finalize_grouping_exprs_walker(Node *node,
+							   check_ungrouped_columns_context *context);
+static void check_agglevels_and_constraints(ParseState *pstate, Node *expr);
+static List *expand_groupingset_node(GroupingSet *gs);
 
 /*
  * transformAggregateCall -
@@ -96,10 +105,7 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 	List	   *tdistinct = NIL;
 	AttrNumber	attno = 1;
 	int			save_next_resno;
-	int			min_varlevel;
 	ListCell   *lc;
-	const char *err;
-	bool		errkind;
 
 	if (AGGKIND_IS_ORDERED_SET(agg->aggkind))
 	{
@@ -214,15 +220,96 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 	agg->aggorder = torder;
 	agg->aggdistinct = tdistinct;
 
+	check_agglevels_and_constraints(pstate, (Node *) agg);
+}
+
+/*
+ * transformGroupingExpr - transform a GROUPING expression
+ *
+ * GROUPING() behaves very much like an aggregate.  Processing of levels and
+ * nesting is done as for aggregates; we set p_hasAggs for these expressions too.
+ */
+Node *
+transformGroupingExpr(ParseState *pstate, Grouping *p)
+{
+	ListCell   *lc;
+	List	   *args = p->args;
+	List	   *result_list = NIL;
+	Grouping   *result = makeNode(Grouping);
+
+	if (list_length(args) > 31)
+		ereport(ERROR,
+				(errcode(ERRCODE_TOO_MANY_ARGUMENTS),
+				 errmsg("GROUPING must have fewer than 32 arguments"),
+				 parser_errposition(pstate, p->location)));
+
+	foreach(lc, args)
+	{
+		Node *current_result;
+
+		current_result = transformExpr(pstate, (Node*) lfirst(lc), pstate->p_expr_kind);
+
+		/* acceptability of expressions is checked later */
+
+		result_list = lappend(result_list, current_result);
+	}
+
+	result->args = result_list;
+	result->location = p->location;
+
+	check_agglevels_and_constraints(pstate, (Node *) result);
+
+	return (Node *) result;
+}
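For reviewers unfamiliar with the feature: the value GROUPING() ultimately produces (in later phases, at execution time) is a bitmask over its arguments, with the leftmost argument in the most significant bit and a 1 bit meaning "not grouped in the current grouping set". A sketch of those semantics in Python (illustrative only; names are invented, not from the patch):

```python
def grouping_value(args, grouped):
    """Bitmask for GROUPING(*args): the leftmost argument maps to the
    most significant bit; a bit is 1 when that expression is NOT part
    of the grouping set the current result row was aggregated under."""
    value = 0
    for arg in args:
        value = (value << 1) | (0 if arg in grouped else 1)
    return value

# In ROLLUP(a, b), the (a) grouping set leaves b ungrouped:
assert grouping_value(["a", "b"], grouped={"a"}) == 0b01
# The grand-total set groups nothing:
assert grouping_value(["a", "b"], grouped=set()) == 0b11
```

This is also why the patch caps GROUPING at 31 arguments: the result must fit in a 32-bit integer.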
+
+/*
+ * Aggregate functions and grouping operations (which are combined in the spec
+ * as <set function specification>) are very similar with regard to level and
+ * nesting restrictions (though we allow a lot more things than the spec does).
+ * Centralise those restrictions here.
+ */
+static void
+check_agglevels_and_constraints(ParseState *pstate, Node *expr)
+{
+	List	   *directargs = NIL;
+	List	   *args = NIL;
+	Expr	   *filter = NULL;
+	int			min_varlevel;
+	int			location = -1;
+	Index	   *p_levelsup;
+	const char *err;
+	bool		errkind;
+	bool		isAgg = IsA(expr, Aggref);
+
+	if (isAgg)
+	{
+		Aggref *agg = (Aggref *) expr;
+
+		directargs = agg->aggdirectargs;
+		args = agg->args;
+		filter = agg->aggfilter;
+		location = agg->location;
+		p_levelsup = &agg->agglevelsup;
+	}
+	else
+	{
+		Grouping *grp = (Grouping *) expr;
+
+		args = grp->args;
+		location = grp->location;
+		p_levelsup = &grp->agglevelsup;
+	}
+
 	/*
 	 * Check the arguments to compute the aggregate's level and detect
 	 * improper nesting.
 	 */
 	min_varlevel = check_agg_arguments(pstate,
-									   agg->aggdirectargs,
-									   agg->args,
-									   agg->aggfilter);
-	agg->agglevelsup = min_varlevel;
+									   directargs,
+									   args,
+									   filter);
+
+	*p_levelsup = min_varlevel;
 
 	/* Mark the correct pstate level as having aggregates */
 	while (min_varlevel-- > 0)
@@ -247,20 +334,32 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			Assert(false);		/* can't happen */
 			break;
 		case EXPR_KIND_OTHER:
-			/* Accept aggregate here; caller must throw error if wanted */
+			/* Accept aggregate/grouping here; caller must throw error if wanted */
 			break;
 		case EXPR_KIND_JOIN_ON:
 		case EXPR_KIND_JOIN_USING:
-			err = _("aggregate functions are not allowed in JOIN conditions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in JOIN conditions");
+			else
+				err = _("grouping operations are not allowed in JOIN conditions");
+
 			break;
 		case EXPR_KIND_FROM_SUBSELECT:
 			/* Should only be possible in a LATERAL subquery */
 			Assert(pstate->p_lateral_active);
-			/* Aggregate scope rules make it worth being explicit here */
-			err = _("aggregate functions are not allowed in FROM clause of their own query level");
+			/* Aggregate/grouping scope rules make it worth being explicit here */
+			if (isAgg)
+				err = _("aggregate functions are not allowed in FROM clause of their own query level");
+			else
+				err = _("grouping operations are not allowed in FROM clause of their own query level");
+
 			break;
 		case EXPR_KIND_FROM_FUNCTION:
-			err = _("aggregate functions are not allowed in functions in FROM");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in functions in FROM");
+			else
+				err = _("grouping operations are not allowed in functions in FROM");
+
 			break;
 		case EXPR_KIND_WHERE:
 			errkind = true;
@@ -278,10 +377,18 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			/* okay */
 			break;
 		case EXPR_KIND_WINDOW_FRAME_RANGE:
-			err = _("aggregate functions are not allowed in window RANGE");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in window RANGE");
+			else
+				err = _("grouping operations are not allowed in window RANGE");
+
 			break;
 		case EXPR_KIND_WINDOW_FRAME_ROWS:
-			err = _("aggregate functions are not allowed in window ROWS");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in window ROWS");
+			else
+				err = _("grouping operations are not allowed in window ROWS");
+
 			break;
 		case EXPR_KIND_SELECT_TARGET:
 			/* okay */
@@ -312,26 +419,55 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			break;
 		case EXPR_KIND_CHECK_CONSTRAINT:
 		case EXPR_KIND_DOMAIN_CHECK:
-			err = _("aggregate functions are not allowed in check constraints");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in check constraints");
+			else
+				err = _("grouping operations are not allowed in check constraints");
+
 			break;
 		case EXPR_KIND_COLUMN_DEFAULT:
 		case EXPR_KIND_FUNCTION_DEFAULT:
-			err = _("aggregate functions are not allowed in DEFAULT expressions");
+
+			if (isAgg)
+				err = _("aggregate functions are not allowed in DEFAULT expressions");
+			else
+				err = _("grouping operations are not allowed in DEFAULT expressions");
+
 			break;
 		case EXPR_KIND_INDEX_EXPRESSION:
-			err = _("aggregate functions are not allowed in index expressions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in index expressions");
+			else
+				err = _("grouping operations are not allowed in index expressions");
+
 			break;
 		case EXPR_KIND_INDEX_PREDICATE:
-			err = _("aggregate functions are not allowed in index predicates");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in index predicates");
+			else
+				err = _("grouping operations are not allowed in index predicates");
+
 			break;
 		case EXPR_KIND_ALTER_COL_TRANSFORM:
-			err = _("aggregate functions are not allowed in transform expressions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in transform expressions");
+			else
+				err = _("grouping operations are not allowed in transform expressions");
+
 			break;
 		case EXPR_KIND_EXECUTE_PARAMETER:
-			err = _("aggregate functions are not allowed in EXECUTE parameters");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in EXECUTE parameters");
+			else
+				err = _("grouping operations are not allowed in EXECUTE parameters");
+
 			break;
 		case EXPR_KIND_TRIGGER_WHEN:
-			err = _("aggregate functions are not allowed in trigger WHEN conditions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in trigger WHEN conditions");
+			else
+				err = _("grouping operations are not allowed in trigger WHEN conditions");
+
 			break;
 
 			/*
@@ -342,18 +478,22 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			 * which is sane anyway.
 			 */
 	}
+
 	if (err)
 		ereport(ERROR,
 				(errcode(ERRCODE_GROUPING_ERROR),
 				 errmsg_internal("%s", err),
-				 parser_errposition(pstate, agg->location)));
+				 parser_errposition(pstate, location)));
+
 	if (errkind)
 		ereport(ERROR,
 				(errcode(ERRCODE_GROUPING_ERROR),
-		/* translator: %s is name of a SQL construct, eg GROUP BY */
-				 errmsg("aggregate functions are not allowed in %s",
+				 /* translator: %s is name of a SQL construct, eg GROUP BY */
+				 errmsg(isAgg
+						? "aggregate functions are not allowed in %s"
+						: "grouping operations are not allowed in %s",
 						ParseExprKindName(pstate->p_expr_kind)),
-				 parser_errposition(pstate, agg->location)));
+				 parser_errposition(pstate, location)));
 }
 
 /*
@@ -507,6 +647,21 @@ check_agg_arguments_walker(Node *node,
 		/* no need to examine args of the inner aggregate */
 		return false;
 	}
+	if (IsA(node, Grouping))
+	{
+		int			agglevelsup = ((Grouping *) node)->agglevelsup;
+
+		/* convert levelsup to frame of reference of original query */
+		agglevelsup -= context->sublevels_up;
+		/* ignore grouping operations local to subqueries */
+		if (agglevelsup >= 0)
+		{
+			if (context->min_agglevel < 0 ||
+				context->min_agglevel > agglevelsup)
+				context->min_agglevel = agglevelsup;
+		}
+		/* Continue and descend into subtree */
+	}
 	/* We can throw error on sight for a window function */
 	if (IsA(node, WindowFunc))
 		ereport(ERROR,
@@ -527,6 +682,7 @@ check_agg_arguments_walker(Node *node,
 		context->sublevels_up--;
 		return result;
 	}
+
 	return expression_tree_walker(node,
 								  check_agg_arguments_walker,
 								  (void *) context);
@@ -770,17 +926,67 @@ transformWindowFuncCall(ParseState *pstate, WindowFunc *wfunc,
 void
 parseCheckAggregates(ParseState *pstate, Query *qry)
 {
+	List       *gset_common = NIL;
 	List	   *groupClauses = NIL;
+	List	   *groupClauseCommonVars = NIL;
 	bool		have_non_var_grouping;
 	List	   *func_grouped_rels = NIL;
 	ListCell   *l;
 	bool		hasJoinRTEs;
 	bool		hasSelfRefRTEs;
-	PlannerInfo *root;
+	PlannerInfo *root = NULL;
 	Node	   *clause;
 
 	/* This should only be called if we found aggregates or grouping */
-	Assert(pstate->p_hasAggs || qry->groupClause || qry->havingQual);
+	Assert(pstate->p_hasAggs || qry->groupClause || qry->havingQual || qry->groupingSets);
+
+	/*
+	 * If we have grouping sets, expand them and find the intersection of all
+	 * sets.
+	 */
+	if (qry->groupingSets)
+	{
+		/*
+		 * The limit of 4096 is arbitrary and exists simply to avoid resource
+		 * issues from pathological constructs.
+		 */
+		List *gsets = expand_grouping_sets(qry->groupingSets, 4096);
+
+		if (!gsets)
+			ereport(ERROR,
+					(errcode(ERRCODE_STATEMENT_TOO_COMPLEX),
+					 errmsg("too many grouping sets present (maximum 4096)"),
+					 parser_errposition(pstate,
+										qry->groupClause
+										? exprLocation((Node *) qry->groupClause)
+										: exprLocation((Node *) qry->groupingSets))));
+
+		/*
+		 * The intersection will often be empty, so help things along by
+		 * seeding the intersect with the smallest set.
+		 */
+		gset_common = llast(gsets);
+
+		if (gset_common)
+		{
+			foreach(l, gsets)
+			{
+				gset_common = list_intersection_int(gset_common, lfirst(l));
+				if (!gset_common)
+					break;
+			}
+		}
+
+		/*
+		 * If there was only one grouping set in the expansion, AND if the
+		 * groupClause is non-empty (meaning that the grouping set is not empty
+		 * either), then we can ditch the grouping set and pretend we just had
+		 * a normal GROUP BY.
+		 */
+
+		if (list_length(gsets) == 1 && qry->groupClause)
+			qry->groupingSets = NIL;
+	}
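The intersection logic above is easiest to see in miniature. A Python model of the "seed with the smallest set, then intersect, bailing out once empty" approach (a sketch, assuming sets are lists of sortgroupref integers sorted longest-first as expand_grouping_sets returns them):

```python
def common_refs(gsets):
    """gsets: expanded grouping sets, longest first.  Seed with the
    last (smallest) set, then intersect against every set, stopping
    early once the running intersection is empty."""
    common = list(gsets[-1])
    for g in gsets:
        common = [r for r in common if r in g]
        if not common:
            break
    return common

# ROLLUP(a, b) expands to {a,b}, {a}, {}: nothing is common
assert common_refs([[1, 2], [1], []]) == []
# Without the empty set, column 1 is grouped in every set
assert common_refs([[1, 2], [1]]) == [1]
```

Only these common columns can later justify functional-dependency reasoning, since a column absent from some grouping set will be NULL-extended in that set's output rows.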
 
 	/*
 	 * Scan the range table to see if there are JOIN or self-reference CTE
@@ -800,15 +1006,19 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 	/*
 	 * Build a list of the acceptable GROUP BY expressions for use by
 	 * check_ungrouped_columns().
+	 *
+	 * We get the TLE, not just the expr, because GROUPING wants to know
+	 * the sortgroupref.
 	 */
 	foreach(l, qry->groupClause)
 	{
 		SortGroupClause *grpcl = (SortGroupClause *) lfirst(l);
-		Node	   *expr;
+		TargetEntry	   *expr;
 
-		expr = get_sortgroupclause_expr(grpcl, qry->targetList);
+		expr = get_sortgroupclause_tle(grpcl, qry->targetList);
 		if (expr == NULL)
 			continue;			/* probably cannot happen */
+
 		groupClauses = lcons(expr, groupClauses);
 	}
 
@@ -830,21 +1040,28 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 		groupClauses = (List *) flatten_join_alias_vars(root,
 													  (Node *) groupClauses);
 	}
-	else
-		root = NULL;			/* keep compiler quiet */
 
 	/*
 	 * Detect whether any of the grouping expressions aren't simple Vars; if
 	 * they're all Vars then we don't have to work so hard in the recursive
 	 * scans.  (Note we have to flatten aliases before this.)
+	 *
+	 * Track Vars that are included in all grouping sets separately in
+	 * groupClauseCommonVars, since these are the only ones we can use to check
+	 * for functional dependencies.
 	 */
 	have_non_var_grouping = false;
 	foreach(l, groupClauses)
 	{
-		if (!IsA((Node *) lfirst(l), Var))
+		TargetEntry *tle = lfirst(l);
+		if (!IsA(tle->expr, Var))
 		{
 			have_non_var_grouping = true;
-			break;
+		}
+		else if (!qry->groupingSets
+				 || list_member_int(gset_common, tle->ressortgroupref))
+		{
+			groupClauseCommonVars = lappend(groupClauseCommonVars, tle->expr);
 		}
 	}
 
@@ -855,19 +1072,30 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 	 * this will also find ungrouped variables that came from ORDER BY and
 	 * WINDOW clauses.  For that matter, it's also going to examine the
 	 * grouping expressions themselves --- but they'll all pass the test ...
+	 *
+	 * We also finalize GROUPING expressions, but for that we need to traverse
+	 * the original (unflattened) clause in order to modify nodes.
 	 */
 	clause = (Node *) qry->targetList;
+	finalize_grouping_exprs(clause, pstate, qry,
+							groupClauses, root,
+							have_non_var_grouping);
 	if (hasJoinRTEs)
 		clause = flatten_join_alias_vars(root, clause);
 	check_ungrouped_columns(clause, pstate, qry,
-							groupClauses, have_non_var_grouping,
+							groupClauses, groupClauseCommonVars,
+							have_non_var_grouping,
 							&func_grouped_rels);
 
 	clause = (Node *) qry->havingQual;
+	finalize_grouping_exprs(clause, pstate, qry,
+							groupClauses, root,
+							have_non_var_grouping);
 	if (hasJoinRTEs)
 		clause = flatten_join_alias_vars(root, clause);
 	check_ungrouped_columns(clause, pstate, qry,
-							groupClauses, have_non_var_grouping,
+							groupClauses, groupClauseCommonVars,
+							have_non_var_grouping,
 							&func_grouped_rels);
 
 	/*
@@ -904,14 +1132,17 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
  */
 static void
 check_ungrouped_columns(Node *node, ParseState *pstate, Query *qry,
-						List *groupClauses, bool have_non_var_grouping,
+						List *groupClauses, List *groupClauseCommonVars,
+						bool have_non_var_grouping,
 						List **func_grouped_rels)
 {
 	check_ungrouped_columns_context context;
 
 	context.pstate = pstate;
 	context.qry = qry;
+	context.root = NULL;
 	context.groupClauses = groupClauses;
+	context.groupClauseCommonVars = groupClauseCommonVars;
 	context.have_non_var_grouping = have_non_var_grouping;
 	context.func_grouped_rels = func_grouped_rels;
 	context.sublevels_up = 0;
@@ -965,6 +1196,16 @@ check_ungrouped_columns_walker(Node *node,
 			return false;
 	}
 
+	if (IsA(node, Grouping))
+	{
+		Grouping *grp = (Grouping *) node;
+
+		/* Grouping nodes are handled by finalize_grouping_exprs; no need to recheck here */
+
+		if ((int) grp->agglevelsup >= context->sublevels_up)
+			return false;
+	}
+
 	/*
 	 * If we have any GROUP BY items that are not simple Vars, check to see if
 	 * subexpression as a whole matches any GROUP BY item. We need to do this
@@ -976,7 +1217,9 @@ check_ungrouped_columns_walker(Node *node,
 	{
 		foreach(gl, context->groupClauses)
 		{
-			if (equal(node, lfirst(gl)))
+			TargetEntry *tle = lfirst(gl);
+
+			if (equal(node, tle->expr))
 				return false;	/* acceptable, do not descend more */
 		}
 	}
@@ -1003,13 +1246,15 @@ check_ungrouped_columns_walker(Node *node,
 		{
 			foreach(gl, context->groupClauses)
 			{
-				Var		   *gvar = (Var *) lfirst(gl);
+				Var		   *gvar = (Var *) ((TargetEntry *)lfirst(gl))->expr;
 
 				if (IsA(gvar, Var) &&
 					gvar->varno == var->varno &&
 					gvar->varattno == var->varattno &&
 					gvar->varlevelsup == 0)
 					return false;		/* acceptable, we're okay */
 			}
 		}
 
@@ -1040,7 +1285,7 @@ check_ungrouped_columns_walker(Node *node,
 			if (check_functional_grouping(rte->relid,
 										  var->varno,
 										  0,
-										  context->groupClauses,
+										  context->groupClauseCommonVars,
 										  &context->qry->constraintDeps))
 			{
 				*context->func_grouped_rels =
@@ -1085,6 +1330,396 @@ check_ungrouped_columns_walker(Node *node,
 }
 
 /*
+ * finalize_grouping_exprs -
+ *	  Scan the given expression tree for GROUPING() and related calls,
+ *	  validating and processing their arguments.
+ *
+ * This is split out from check_ungrouped_columns above because it needs
+ * to modify the nodes (which it does in-place, not via a mutator) while
+ * check_ungrouped_columns may see only a copy of the original thanks to
+ * flattening of join alias vars. So here, we flatten each individual
+ * GROUPING argument as we see it before comparing it.
+ */
+static void
+finalize_grouping_exprs(Node *node, ParseState *pstate, Query *qry,
+						List *groupClauses, PlannerInfo *root,
+						bool have_non_var_grouping)
+{
+	check_ungrouped_columns_context context;
+
+	context.pstate = pstate;
+	context.qry = qry;
+	context.root = root;
+	context.groupClauses = groupClauses;
+	context.groupClauseCommonVars = NIL;
+	context.have_non_var_grouping = have_non_var_grouping;
+	context.func_grouped_rels = NULL;
+	context.sublevels_up = 0;
+	context.in_agg_direct_args = false;
+	finalize_grouping_exprs_walker(node, &context);
+}
+
+static bool
+finalize_grouping_exprs_walker(Node *node,
+							   check_ungrouped_columns_context *context)
+{
+	ListCell   *gl;
+
+	if (node == NULL)
+		return false;
+	if (IsA(node, Const) ||
+		IsA(node, Param))
+		return false;			/* constants are always acceptable */
+
+	if (IsA(node, Aggref))
+	{
+		Aggref	   *agg = (Aggref *) node;
+
+		if ((int) agg->agglevelsup == context->sublevels_up)
+		{
+			/*
+			 * If we find an aggregate call of the original level, do not
+			 * recurse into its normal arguments, ORDER BY arguments, or
+			 * filter; GROUPING exprs of this level are not allowed there. But
+			 * check direct arguments as though they weren't in an aggregate.
+			 */
+			bool		result;
+
+			Assert(!context->in_agg_direct_args);
+			context->in_agg_direct_args = true;
+			result = finalize_grouping_exprs_walker((Node *) agg->aggdirectargs,
+													context);
+			context->in_agg_direct_args = false;
+			return result;
+		}
+
+		/*
+		 * We can skip recursing into aggregates of higher levels altogether,
+		 * since they could not possibly contain exprs of concern to us (see
+		 * transformAggregateCall).  We do need to look at aggregates of lower
+		 * levels, however.
+		 */
+		if ((int) agg->agglevelsup > context->sublevels_up)
+			return false;
+	}
+
+	if (IsA(node, Grouping))
+	{
+		Grouping *grp = (Grouping *) node;
+
+		/*
+		 * We only need to check Grouping nodes at the exact level to which
+		 * they belong, since they cannot mix levels in arguments.
+		 */
+
+		if ((int) grp->agglevelsup == context->sublevels_up)
+		{
+			ListCell  *lc;
+			List 	  *ref_list = NIL;
+
+			foreach(lc, grp->args)
+			{
+				Node   *expr = lfirst(lc);
+				Index	ref = 0;
+
+				if (context->root)
+					expr = flatten_join_alias_vars(context->root, expr);
+
+				/*
+				 * Each expression must match a grouping entry at the current
+				 * query level. Unlike the general expression case, we don't
+				 * allow functional dependencies or outer references.
+				 */
+
+				if (IsA(expr, Var))
+				{
+					Var *var = (Var *) expr;
+
+					if (var->varlevelsup == context->sublevels_up)
+					{
+						foreach(gl, context->groupClauses)
+						{
+							TargetEntry *tle = lfirst(gl);
+							Var	  		*gvar = (Var *) tle->expr;
+
+							if (IsA(gvar, Var) &&
+								gvar->varno == var->varno &&
+								gvar->varattno == var->varattno &&
+								gvar->varlevelsup == 0)
+							{
+								ref = tle->ressortgroupref;
+								break;
+							}
+						}
+					}
+				}
+				else if (context->have_non_var_grouping
+						 && context->sublevels_up == 0)
+				{
+					foreach(gl, context->groupClauses)
+					{
+						TargetEntry *tle = lfirst(gl);
+
+						if (equal(expr, tle->expr))
+						{
+							ref = tle->ressortgroupref;
+							break;
+						}
+					}
+				}
+
+				if (ref == 0)
+					ereport(ERROR,
+							(errcode(ERRCODE_GROUPING_ERROR),
+							 errmsg("arguments to GROUPING must be grouping expressions of the associated query level"),
+							 parser_errposition(context->pstate,
+												exprLocation(expr))));
+
+				ref_list = lappend_int(ref_list, ref);
+			}
+
+			grp->refs = ref_list;
+		}
+
+		if ((int) grp->agglevelsup > context->sublevels_up)
+			return false;
+	}
+
+	if (IsA(node, Query))
+	{
+		/* Recurse into subselects */
+		bool		result;
+
+		context->sublevels_up++;
+		result = query_tree_walker((Query *) node,
+								   finalize_grouping_exprs_walker,
+								   (void *) context,
+								   0);
+		context->sublevels_up--;
+		return result;
+	}
+	return expression_tree_walker(node, finalize_grouping_exprs_walker,
+								  (void *) context);
+}
+
+
+/*
+ * Given a GroupingSet node, expand it and return a list of lists.
+ *
+ * For EMPTY nodes, return a list of one empty list.
+ *
+ * For SIMPLE nodes, return a list of one list, which is the node content.
+ *
+ * For CUBE and ROLLUP nodes, return a list of the expansions.
+ *
+ * For SET nodes, recursively expand contained CUBE and ROLLUP.
+ */
+static List *
+expand_groupingset_node(GroupingSet *gs)
+{
+	List	   *result = NIL;
+
+	switch (gs->kind)
+	{
+		case GROUPING_SET_EMPTY:
+			result = list_make1(NIL);
+			break;
+
+		case GROUPING_SET_SIMPLE:
+			result = list_make1(gs->content);
+			break;
+
+		case GROUPING_SET_ROLLUP:
+			{
+				List	   *rollup_val = gs->content;
+				ListCell   *lc;
+				int			curgroup_size = list_length(gs->content);
+
+				while (curgroup_size > 0)
+				{
+					List   *current_result = NIL;
+					int		i = curgroup_size;
+
+					foreach(lc, rollup_val)
+					{
+						GroupingSet *gs_current = (GroupingSet *) lfirst(lc);
+
+						Assert(gs_current->kind == GROUPING_SET_SIMPLE);
+
+						current_result
+							= list_concat(current_result,
+										  list_copy(gs_current->content));
+
+						/* If we are done with making the current group, break */
+						if (--i == 0)
+							break;
+					}
+
+					result = lappend(result, current_result);
+					--curgroup_size;
+				}
+
+				result = lappend(result, NIL);
+			}
+			break;
+
+		case GROUPING_SET_CUBE:
+			{
+				List   *cube_list = gs->content;
+				int		number_bits = list_length(cube_list);
+				uint32	num_sets;
+				uint32	i;
+
+				/* parser should cap this much lower */
+				Assert(number_bits < 31);
+
+				num_sets = (1U << number_bits);
+
+				for (i = 0; i < num_sets; i++)
+				{
+					List *current_result = NIL;
+					ListCell *lc;
+					uint32 mask = 1U;
+
+					foreach(lc, cube_list)
+					{
+						GroupingSet *gs_current = (GroupingSet *) lfirst(lc);
+
+						Assert(gs_current->kind == GROUPING_SET_SIMPLE);
+
+						if (mask & i)
+						{
+							current_result
+								= list_concat(current_result,
+											  list_copy(gs_current->content));
+						}
+
+						mask <<= 1;
+					}
+
+					result = lappend(result, current_result);
+				}
+			}
+			break;
+
+		case GROUPING_SET_SETS:
+			{
+				ListCell   *lc;
+
+				foreach(lc, gs->content)
+				{
+					List *current_result = expand_groupingset_node(lfirst(lc));
+
+					result = list_concat(result, current_result);
+				}
+			}
+			break;
+	}
+
+	return result;
+}
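The expansion rules above are the standard ones: ROLLUP(e1, ..., en) produces the n+1 prefixes (longest first, ending with the empty set), and CUBE(e1, ..., en) produces all 2^n subsets, driven by a bit counter exactly as in the C loop. A Python sketch of the same semantics (column names stand in for the GroupingSet content lists):

```python
def expand_rollup(cols):
    # ROLLUP(e1, ..., en) -> every prefix, longest first, plus ()
    return [cols[:k] for k in range(len(cols), -1, -1)]

def expand_cube(cols):
    # CUBE(e1, ..., en) -> all 2**n subsets; bit i of the counter
    # selects cols[i], matching the "mask & i" loop in the patch
    sets = []
    for i in range(1 << len(cols)):
        sets.append([c for bit, c in enumerate(cols) if i & (1 << bit)])
    return sets

assert expand_rollup(["a", "b", "c"]) == [["a", "b", "c"], ["a", "b"], ["a"], []]
assert expand_cube(["a"]) == [[], ["a"]]
```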
+
+static int
+cmp_list_len_desc(const void *a, const void *b)
+{
+	int			la = list_length(*(List *const *) a);
+	int			lb = list_length(*(List *const *) b);
+
+	return (la > lb) ? -1 : (la == lb) ? 0 : 1;
+}
+
+/*
+ * Expand a groupingSets clause to a flat list of grouping sets.
+ * The returned list is sorted by length, longest sets first.
+ *
+ * This is mainly for the planner, but we use it here too to do
+ * some consistency checks.
+ */
+
+List *
+expand_grouping_sets(List *groupingSets, int limit)
+{
+	List	   *expanded_groups = NIL;
+	List       *result = NIL;
+	double		numsets = 1;
+	ListCell   *lc;
+
+	if (groupingSets == NIL)
+		return NIL;
+
+	foreach(lc, groupingSets)
+	{
+		List *current_result = NIL;
+		GroupingSet *gs = lfirst(lc);
+
+		current_result = expand_groupingset_node(gs);
+
+		Assert(current_result != NIL);
+
+		numsets *= list_length(current_result);
+
+		if (limit >= 0 && numsets > limit)
+			return NIL;
+
+		expanded_groups = lappend(expanded_groups, current_result);
+	}
+
+	/*
+	 * Do cartesian product between sublists of expanded_groups.
+	 * While at it, remove any duplicate elements from individual
+	 * grouping sets (we must NOT change the number of sets though)
+	 */
+
+	foreach(lc, (List *) linitial(expanded_groups))
+	{
+		result = lappend(result, list_union_int(NIL, (List *) lfirst(lc)));
+	}
+
+	for_each_cell(lc, lnext(list_head(expanded_groups)))
+	{
+		List	   *p = lfirst(lc);
+		List	   *new_result = NIL;
+		ListCell   *lc2;
+
+		foreach(lc2, result)
+		{
+			List	   *q = lfirst(lc2);
+			ListCell   *lc3;
+
+			foreach(lc3, p)
+			{
+				new_result = lappend(new_result,
+									 list_union_int(q, (List *) lfirst(lc3)));
+			}
+		}
+		result = new_result;
+	}
+
+	if (list_length(result) > 1)
+	{
+		int		result_len = list_length(result);
+		List  **buf = palloc(sizeof(List*) * result_len);
+		List  **ptr = buf;
+
+		foreach(lc, result)
+		{
+			*ptr++ = lfirst(lc);
+		}
+
+		qsort(buf, result_len, sizeof(List*), cmp_list_len_desc);
+
+		result = NIL;
+		ptr = buf;
+
+		while (result_len-- > 0)
+			result = lappend(result, *ptr++);
+
+		pfree(buf);
+	}
+
+	return result;
+}
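Since the cartesian-product-with-dedup step is the least obvious part of the function above, here is a Python model of it (a simplification: clause items are pre-expanded into lists of grouping sets of ints, and an ordinary stable sort stands in for the qsort):

```python
from itertools import product

def expand_grouping_sets(items):
    """items: one entry per top-level GROUP BY item, each already
    expanded to a list of grouping sets (lists of sortgroupref ints).
    Returns the cartesian product, deduplicating refs within each
    resulting set (but never dropping whole sets), longest first."""
    result = []
    for combo in product(*items):
        merged = []
        for gset in combo:
            for ref in gset:
                if ref not in merged:   # dedup inside one set only
                    merged.append(ref)
        result.append(merged)
    result.sort(key=len, reverse=True)  # longest sets first
    return result

# GROUP BY a, ROLLUP(b): "a" expands to [[1]], ROLLUP(b) to [[2], []]
assert expand_grouping_sets([[[1]], [[2], []]]) == [[1, 2], [1]]
```

Note that the number of result sets is the product of the per-item counts, which is why the caller-imposed limit is checked multiplicatively before the product is materialized.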
+
+/*
  * get_aggregate_argtypes
  *	Identify the specific datatypes passed to an aggregate call.
  *
diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c
index 4931dca..5d02579 100644
--- a/src/backend/parser/parse_clause.c
+++ b/src/backend/parser/parse_clause.c
@@ -36,6 +36,7 @@
 #include "utils/guc.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
+#include "miscadmin.h"
 
 
 /* Convenience macro for the most common makeNamespaceItem() case */
@@ -1663,40 +1664,163 @@ findTargetlistEntrySQL99(ParseState *pstate, Node *node, List **tlist,
 	return target_result;
 }
 
+
 /*
- * transformGroupClause -
- *	  transform a GROUP BY clause
+ * Flatten out parenthesized sublists in grouping lists, and some cases
+ * of nested grouping sets.
  *
- * GROUP BY items will be added to the targetlist (as resjunk columns)
- * if not already present, so the targetlist must be passed by reference.
+ * Inside a grouping set (ROLLUP, CUBE, or GROUPING SETS), we expect the
+ * content to be nested no more than 2 deep: i.e. ROLLUP((a,b),(c,d)) is
+ * ok, but ROLLUP((a,(b,c)),d) is flattened to ((a,b,c),d), which we then
+ * normalize to ((a,b,c),(d)).
  *
- * This is also used for window PARTITION BY clauses (which act almost the
- * same, but are always interpreted per SQL99 rules).
+ * CUBE or ROLLUP can be nested inside GROUPING SETS (but not the reverse),
+ * and we leave that alone if we find it. But if we see GROUPING SETS inside
+ * GROUPING SETS, we can flatten and normalize as follows:
+ *   GROUPING SETS (a, (b,c), GROUPING SETS ((c,d),(e)), (f,g))
+ * becomes
+ *   GROUPING SETS ((a), (b,c), (c,d), (e), (f,g))
+ *
+ * This is per the spec's syntax transformations, but these are the only such
+ * transformations we do in parse analysis, so that queries retain the
+ * originally specified grouping set syntax for CUBE and ROLLUP as much as
+ * possible when deparsed. (Full expansion of the result into a list of
+ * grouping sets is left to the planner.)
+ *
+ * When we're done, the resulting list should contain only these possible
+ * elements:
+ *   - an expression
+ *   - a CUBE or ROLLUP with a list of expressions nested 2 deep
+ *   - a GROUPING SET containing any of:
+ *      - expression lists
+ *      - empty grouping sets
+ *      - CUBE or ROLLUP nodes with lists nested 2 deep
+ * The result is a new list, but it does not deep-copy the old nodes except
+ * for GroupingSet nodes.
+ *
+ * As a side effect, flag whether the list has any GroupingSet nodes.
  */
-List *
-transformGroupClause(ParseState *pstate, List *grouplist,
-					 List **targetlist, List *sortClause,
-					 ParseExprKind exprKind, bool useSQL99)
+
+static Node *
+flatten_grouping_sets(Node *expr, bool toplevel, bool *hasGroupingSets)
 {
-	List	   *result = NIL;
-	ListCell   *gl;
+	/* just in case of pathological input */
+	check_stack_depth();
 
-	foreach(gl, grouplist)
+	if (expr == (Node *) NIL)
+		return (Node *) NIL;
+
+	switch (expr->type)
 	{
-		Node	   *gexpr = (Node *) lfirst(gl);
-		TargetEntry *tle;
-		bool		found = false;
+		case T_RowExpr:
+			{
+				RowExpr *r = (RowExpr *) expr;
+				if (r->row_format == COERCE_IMPLICIT_CAST)
+					return flatten_grouping_sets((Node *) r->args,
+												 false, NULL);
+			}
+			break;
+		case T_GroupingSet:
+			{
+				GroupingSet *gset = (GroupingSet *) expr;
+				ListCell   *l2;
+				List	   *result_set = NIL;
 
-		if (useSQL99)
-			tle = findTargetlistEntrySQL99(pstate, gexpr,
-										   targetlist, exprKind);
-		else
-			tle = findTargetlistEntrySQL92(pstate, gexpr,
-										   targetlist, exprKind);
+				if (hasGroupingSets)
+					*hasGroupingSets = true;
 
-		/* Eliminate duplicates (GROUP BY x, x) */
-		if (targetIsInSortList(tle, InvalidOid, result))
-			continue;
+				/*
+				 * At the top level, we skip over all empty grouping sets; the
+				 * caller can supply the canonical GROUP BY () if nothing is
+				 * left.
+				 */
+
+				if (toplevel && gset->kind == GROUPING_SET_EMPTY)
+					return (Node *) NIL;
+
+				foreach(l2, gset->content)
+				{
+					Node   *n2 = flatten_grouping_sets(lfirst(l2), false, NULL);
+
+					result_set = lappend(result_set, n2);
+				}
+
+				/*
+				 * At top level, keep the grouping set node; but if we're in a nested
+				 * grouping set, then we need to concat the flattened result into the
+				 * outer list if it's simply nested.
+				 */
+
+				if (toplevel || (gset->kind != GROUPING_SET_SETS))
+				{
+					return (Node *) makeGroupingSet(gset->kind, result_set, gset->location);
+				}
+				else
+					return (Node *) result_set;
+			}
+		case T_List:
+			{
+				List	   *result = NIL;
+				ListCell   *l;
+
+				foreach(l, (List *) expr)
+				{
+					Node	   *n = flatten_grouping_sets(lfirst(l), toplevel, hasGroupingSets);
+
+					if (n != (Node *) NIL)
+					{
+						if (IsA(n, List))
+							result = list_concat(result, (List *) n);
+						else
+							result = lappend(result, n);
+					}
+				}
+
+				return (Node *) result;
+			}
+		default:
+			break;
+	}
+
+	return expr;
+}
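The GROUPING SETS splicing described in the header comment can be modeled compactly. A Python toy (not the node representation: expressions are strings, sublists are lists, and a ('SETS', [...]) tuple stands in for a GroupingSet of kind GROUPING_SET_SETS):

```python
def flatten_sets(node, toplevel=True):
    """Splice nested GROUPING SETS into their parent; keep a
    top-level SETS node intact.  Other node kinds pass through."""
    if isinstance(node, tuple) and node[0] == 'SETS':
        flat = []
        for child in node[1]:
            out = flatten_sets(child, toplevel=False)
            if isinstance(out, tuple) and out[0] == 'SPLICE':
                flat.extend(out[1])   # inner SETS content joins the outer list
            else:
                flat.append(out)
        return ('SETS', flat) if toplevel else ('SPLICE', flat)
    return node

# GROUPING SETS (a, GROUPING SETS ((b,c), d))
#   -> GROUPING SETS (a, (b,c), d)
inner = ('SETS', [['b', 'c'], 'd'])
assert flatten_sets(('SETS', ['a', inner])) == ('SETS', ['a', ['b', 'c'], 'd'])
```

The real function additionally flattens implicit RowExprs and over-nested sublists, which this model leaves out.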
+
+static Index
+transformGroupClauseExpr(List **flatresult, Bitmapset *seen_local,
+						 ParseState *pstate, Node *gexpr,
+						 List **targetlist, List *sortClause,
+						 ParseExprKind exprKind, bool useSQL99, bool toplevel)
+{
+	TargetEntry *tle;
+	bool		found = false;
+
+	if (useSQL99)
+		tle = findTargetlistEntrySQL99(pstate, gexpr,
+									   targetlist, exprKind);
+	else
+		tle = findTargetlistEntrySQL92(pstate, gexpr,
+									   targetlist, exprKind);
+
+	if (tle->ressortgroupref > 0)
+	{
+		ListCell   *sl;
+
+		/*
+		 * Eliminate duplicates (GROUP BY x, x) but only at local level.
+		 * (Duplicates in grouping sets can affect the number of returned
+		 * rows, so can't be dropped indiscriminately.)
+		 *
+		 * Since we don't care about anything except the sortgroupref,
+		 * we can use a bitmapset rather than scanning lists.
+		 */
+		if (bms_is_member(tle->ressortgroupref, seen_local))
+			return 0;
+
+		/*
+		 * If we're already in the flat clause list, we don't need
+		 * to consider adding ourselves again.
+		 */
+		found = targetIsInSortList(tle, InvalidOid, *flatresult);
+		if (found)
+			return tle->ressortgroupref;
 
 		/*
 		 * If the GROUP BY tlist entry also appears in ORDER BY, copy operator
@@ -1708,35 +1832,263 @@ transformGroupClause(ParseState *pstate, List *grouplist,
 		 * sort step, and it allows the user to choose the equality semantics
 		 * used by GROUP BY, should she be working with a datatype that has
 		 * more than one equality operator.
+		 *
+		 * If we're in a grouping set, though, we force our requested ordering
+		 * to be NULLS LAST, because if we have any hope of using a sorted agg
+		 * for the job, we're going to be tacking on generated NULL values
+		 * after the corresponding groups. If the user demands nulls first,
+		 * another sort step is going to be inevitable, but that's the
+		 * planner's problem.
 		 */
-		if (tle->ressortgroupref > 0)
+
+		foreach(sl, sortClause)
 		{
-			ListCell   *sl;
+			SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
 
-			foreach(sl, sortClause)
+			if (sc->tleSortGroupRef == tle->ressortgroupref)
 			{
-				SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
+				SortGroupClause *grpc = copyObject(sc);
+				if (!toplevel)
+					grpc->nulls_first = false;
+				*flatresult = lappend(*flatresult, grpc);
+				found = true;
+				break;
+			}
+		}
+	}
 
-				if (sc->tleSortGroupRef == tle->ressortgroupref)
-				{
-					result = lappend(result, copyObject(sc));
-					found = true;
+	/*
+	 * If no match in ORDER BY, just add it to the result using default
+	 * sort/group semantics.
+	 */
+	if (!found)
+		*flatresult = addTargetToGroupList(pstate, tle,
+										   *flatresult, *targetlist,
+										   exprLocation(gexpr),
+										   true);
+
+	/*
+	 * By now, _something_ must have assigned us a sortgroupref.
+	 */
+	Assert(tle->ressortgroupref > 0);
+	return tle->ressortgroupref;
+}
+
+
+static List *
+transformGroupClauseList(List **flatresult,
+						 ParseState *pstate, List *list,
+						 List **targetlist, List *sortClause,
+						 ParseExprKind exprKind, bool useSQL99, bool toplevel)
+{
+	Bitmapset  *seen_local = NULL;
+	List	   *result = NIL;
+	ListCell   *gl;
+
+	foreach(gl, list)
+	{
+		Node        *gexpr = (Node *) lfirst(gl);
+
+		Index ref = transformGroupClauseExpr(flatresult,
+											 seen_local,
+											 pstate,
+											 gexpr,
+											 targetlist,
+											 sortClause,
+											 exprKind,
+											 useSQL99,
+											 toplevel);
+		if (ref > 0)
+		{
+			seen_local = bms_add_member(seen_local, ref);
+			result = lappend_int(result, ref);
+		}
+	}
+
+	return result;
+}
+
+static Node *
+transformGroupingSet(List **flatresult,
+					 ParseState *pstate, GroupingSet *gset,
+					 List **targetlist, List *sortClause,
+					 ParseExprKind exprKind, bool useSQL99, bool toplevel)
+{
+	ListCell   *gl;
+	List	   *content = NIL;
+
+	Assert(toplevel || gset->kind != GROUPING_SET_SETS);
+
+	foreach(gl, gset->content)
+	{
+		Node   *n = lfirst(gl);
+
+		if (IsA(n, List))
+		{
+			List *l = transformGroupClauseList(flatresult,
+											   pstate, (List *) n,
+											   targetlist, sortClause,
+											   exprKind, useSQL99, false);
+
+			content = lappend(content, makeGroupingSet(GROUPING_SET_SIMPLE,
+													   l,
+													   exprLocation(n)));
+		}
+		else if (IsA(n, GroupingSet))
+		{
+			GroupingSet *gset2 = (GroupingSet *) n;
+
+			content = lappend(content, transformGroupingSet(flatresult,
+															pstate, gset2,
+															targetlist, sortClause,
+															exprKind, useSQL99, false));
+		}
+		else
+		{
+			Index ref = transformGroupClauseExpr(flatresult,
+												 NULL,
+												 pstate,
+												 n,
+												 targetlist,
+												 sortClause,
+												 exprKind,
+												 useSQL99,
+												 false);
+
+			content = lappend(content, makeGroupingSet(GROUPING_SET_SIMPLE,
+													   list_make1_int(ref),
+													   exprLocation(n)));
+		}
+	}
+
+	/* Arbitrarily cap the size of CUBE, which has exponential growth */
+	if (gset->kind == GROUPING_SET_CUBE)
+	{
+		if (list_length(content) > 12)
+			ereport(ERROR,
+					(errcode(ERRCODE_TOO_MANY_COLUMNS),
+					 errmsg("CUBE is limited to 12 elements"),
+					 parser_errposition(pstate, gset->location)));
+	}
+
+	return (Node *) makeGroupingSet(gset->kind, content, gset->location);
+}
+
+
+/*
+ * transformGroupClause -
+ *	  transform a GROUP BY clause
+ *
+ * GROUP BY items will be added to the targetlist (as resjunk columns)
+ * if not already present, so the targetlist must be passed by reference.
+ *
+ * This is also used for window PARTITION BY clauses (which act almost the
+ * same, but are always interpreted per SQL99 rules).
+ *
+ * Grouping sets make this considerably more complex. Our goal here is
+ * twofold: we build a flat list of SortGroupClause nodes referencing each
+ * distinct expression used for grouping, adding those expressions to the
+ * targetlist if needed; and at the same time we build the groupingSets tree,
+ * which stores only ressortgrouprefs, as integer lists inside GroupingSet
+ * nodes of limited nesting depth: a GROUPING_SET_SETS node can contain
+ * SIMPLE, CUBE or ROLLUP nodes, but not further SETS nodes (we flatten
+ * those out), while CUBE and ROLLUP can contain only SIMPLE nodes.
+ *
+ * We skip much of the hard work if there are no grouping sets.
+ *
+ * One subtlety is that the groupClause list can end up empty while the
+ * groupingSets list is not; this happens if there are only empty grouping
+ * sets, or an explicit GROUP BY (). This has the same effect as specifying
+ * aggregates or a HAVING clause with no GROUP BY; the output is one row per
+ * grouping set even if the input is empty.
+ */
+List *
+transformGroupClause(ParseState *pstate, List *grouplist, List **groupingSets,
+					 List **targetlist, List *sortClause,
+					 ParseExprKind exprKind, bool useSQL99)
+{
+	List	   *result = NIL;
+	List	   *flat_grouplist;
+	List	   *gsets = NIL;
+	ListCell   *gl;
+	bool        hasGroupingSets = false;
+	Bitmapset  *seen_local = NULL;
+
+	/*
+	 * Recursively flatten implicit RowExprs. (Technically this is only
+	 * needed for GROUP BY, per the syntax rules for grouping sets, but
+	 * we do it anyway.)
+	 */
+	flat_grouplist = (List *) flatten_grouping_sets((Node *) grouplist,
+													true,
+													&hasGroupingSets);
+
+	/*
+	 * If the list is now empty, but hasGroupingSets is true, it's because
+	 * we elided redundant empty grouping sets. Restore a single empty
+	 * grouping set to leave a canonical form: GROUP BY ()
+	 */
+
+	if (flat_grouplist == NIL && hasGroupingSets)
+	{
+		flat_grouplist = list_make1(makeGroupingSet(GROUPING_SET_EMPTY,
+													NIL,
+													exprLocation((Node *) grouplist)));
+	}
+
+	foreach(gl, flat_grouplist)
+	{
+		Node        *gexpr = (Node *) lfirst(gl);
+
+		if (IsA(gexpr, GroupingSet))
+		{
+			GroupingSet *gset = (GroupingSet *) gexpr;
+
+			switch (gset->kind)
+			{
+				case GROUPING_SET_EMPTY:
+					gsets = lappend(gsets, gset);
+					break;
+				case GROUPING_SET_SIMPLE:
+					/* can't happen */
+					Assert(false);
+					break;
+				case GROUPING_SET_SETS:
+				case GROUPING_SET_CUBE:
+				case GROUPING_SET_ROLLUP:
+					gsets = lappend(gsets,
+									transformGroupingSet(&result,
+														 pstate, gset,
+														 targetlist, sortClause,
+														 exprKind, useSQL99, true));
 					break;
-				}
 			}
 		}
+		else
+		{
+			Index ref = transformGroupClauseExpr(&result, seen_local,
+												 pstate, gexpr,
+												 targetlist, sortClause,
+												 exprKind, useSQL99, true);
 
-		/*
-		 * If no match in ORDER BY, just add it to the result using default
-		 * sort/group semantics.
-		 */
-		if (!found)
-			result = addTargetToGroupList(pstate, tle,
-										  result, *targetlist,
-										  exprLocation(gexpr),
-										  true);
+			if (ref > 0)
+			{
+				seen_local = bms_add_member(seen_local, ref);
+				if (hasGroupingSets)
+					gsets = lappend(gsets,
+									makeGroupingSet(GROUPING_SET_SIMPLE,
+													list_make1_int(ref),
+													exprLocation(gexpr)));
+			}
+		}
 	}
 
+	/* parser should prevent this */
+	Assert(gsets == NIL || groupingSets != NULL);
+
+	if (groupingSets)
+		*groupingSets = gsets;
+
 	return result;
 }
 
@@ -1841,6 +2193,7 @@ transformWindowDefinitions(ParseState *pstate,
 										  true /* force SQL99 rules */ );
 		partitionClause = transformGroupClause(pstate,
 											   windef->partitionClause,
+											   NULL,
 											   targetlist,
 											   orderClause,
 											   EXPR_KIND_WINDOW_PARTITION,
diff --git a/src/backend/parser/parse_expr.c b/src/backend/parser/parse_expr.c
index 4a8aaf6..0bb8856 100644
--- a/src/backend/parser/parse_expr.c
+++ b/src/backend/parser/parse_expr.c
@@ -32,6 +32,7 @@
 #include "parser/parse_relation.h"
 #include "parser/parse_target.h"
 #include "parser/parse_type.h"
+#include "parser/parse_agg.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
 #include "utils/xml.h"
@@ -166,6 +167,10 @@ transformExprRecurse(ParseState *pstate, Node *expr)
 										InvalidOid, InvalidOid, -1);
 			break;
 
+		case T_Grouping:
+			result = transformGroupingExpr(pstate, (Grouping *) expr);
+			break;
+
 		case T_TypeCast:
 			{
 				TypeCast   *tc = (TypeCast *) expr;
diff --git a/src/backend/parser/parse_target.c b/src/backend/parser/parse_target.c
index 328e0c6..1e48346 100644
--- a/src/backend/parser/parse_target.c
+++ b/src/backend/parser/parse_target.c
@@ -1628,6 +1628,9 @@ FigureColnameInternal(Node *node, char **name)
 				}
 			}
 			break;
+		case T_Grouping:
+			*name = "grouping";
+			return 2;
 		case T_A_Indirection:
 			{
 				A_Indirection *ind = (A_Indirection *) node;
diff --git a/src/backend/rewrite/rewriteHandler.c b/src/backend/rewrite/rewriteHandler.c
index e640c1e..2d65976 100644
--- a/src/backend/rewrite/rewriteHandler.c
+++ b/src/backend/rewrite/rewriteHandler.c
@@ -2107,7 +2107,7 @@ view_query_is_auto_updatable(Query *viewquery, bool check_cols)
 	if (viewquery->distinctClause != NIL)
 		return gettext_noop("Views containing DISTINCT are not automatically updatable.");
 
-	if (viewquery->groupClause != NIL)
+	if (viewquery->groupClause != NIL || viewquery->groupingSets)
 		return gettext_noop("Views containing GROUP BY are not automatically updatable.");
 
 	if (viewquery->havingQual != NULL)
diff --git a/src/backend/rewrite/rewriteManip.c b/src/backend/rewrite/rewriteManip.c
index fb20314..02099a4 100644
--- a/src/backend/rewrite/rewriteManip.c
+++ b/src/backend/rewrite/rewriteManip.c
@@ -92,6 +92,11 @@ contain_aggs_of_level_walker(Node *node,
 			return true;		/* abort the tree traversal and return true */
 		/* else fall through to examine argument */
 	}
+	if (IsA(node, Grouping))
+	{
+		if (((Grouping *) node)->agglevelsup == context->sublevels_up)
+			return true;
+	}
 	if (IsA(node, Query))
 	{
 		/* Recurse into subselects */
@@ -157,6 +162,15 @@ locate_agg_of_level_walker(Node *node,
 		}
 		/* else fall through to examine argument */
 	}
+	if (IsA(node, Grouping))
+	{
+		if (((Grouping *) node)->agglevelsup == context->sublevels_up &&
+			((Grouping *) node)->location >= 0)
+		{
+			context->agg_location = ((Grouping *) node)->location;
+			return true;		/* abort the tree traversal and return true */
+		}
+	}
 	if (IsA(node, Query))
 	{
 		/* Recurse into subselects */
@@ -705,6 +719,14 @@ IncrementVarSublevelsUp_walker(Node *node,
 			agg->agglevelsup += context->delta_sublevels_up;
 		/* fall through to recurse into argument */
 	}
+	if (IsA(node, Grouping))
+	{
+		Grouping	   *grp = (Grouping *) node;
+
+		if (grp->agglevelsup >= context->min_sublevels_up)
+			grp->agglevelsup += context->delta_sublevels_up;
+		/* fall through to recurse into argument */
+	}
 	if (IsA(node, PlaceHolderVar))
 	{
 		PlaceHolderVar *phv = (PlaceHolderVar *) node;
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 7237e5d..5344736 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -360,9 +360,11 @@ static void get_target_list(List *targetList, deparse_context *context,
 static void get_setop_query(Node *setOp, Query *query,
 				deparse_context *context,
 				TupleDesc resultDesc);
-static Node *get_rule_sortgroupclause(SortGroupClause *srt, List *tlist,
+static Node *get_rule_sortgroupclause(Index ref, List *tlist,
 						 bool force_colno,
 						 deparse_context *context);
+static void get_rule_groupingset(GroupingSet *gset, List *targetlist,
+								 bool omit_parens, deparse_context *context);
 static void get_rule_orderby(List *orderList, List *targetList,
 				 bool force_colno, deparse_context *context);
 static void get_rule_windowclause(Query *query, deparse_context *context);
@@ -4535,7 +4537,7 @@ get_basic_select_query(Query *query, deparse_context *context,
 				SortGroupClause *srt = (SortGroupClause *) lfirst(l);
 
 				appendStringInfoString(buf, sep);
-				get_rule_sortgroupclause(srt, query->targetList,
+				get_rule_sortgroupclause(srt->tleSortGroupRef, query->targetList,
 										 false, context);
 				sep = ", ";
 			}
@@ -4560,19 +4562,35 @@ get_basic_select_query(Query *query, deparse_context *context,
 	}
 
 	/* Add the GROUP BY clause if given */
-	if (query->groupClause != NULL)
+	if (query->groupClause != NULL || query->groupingSets != NULL)
 	{
 		appendContextKeyword(context, " GROUP BY ",
 							 -PRETTYINDENT_STD, PRETTYINDENT_STD, 1);
-		sep = "";
-		foreach(l, query->groupClause)
+
+		if (query->groupingSets == NIL)
 		{
-			SortGroupClause *grp = (SortGroupClause *) lfirst(l);
+			sep = "";
+			foreach(l, query->groupClause)
+			{
+				SortGroupClause *grp = (SortGroupClause *) lfirst(l);
 
-			appendStringInfoString(buf, sep);
-			get_rule_sortgroupclause(grp, query->targetList,
-									 false, context);
-			sep = ", ";
+				appendStringInfoString(buf, sep);
+				get_rule_sortgroupclause(grp->tleSortGroupRef, query->targetList,
+										 false, context);
+				sep = ", ";
+			}
+		}
+		else
+		{
+			sep = "";
+			foreach(l, query->groupingSets)
+			{
+				GroupingSet *grp = (GroupingSet *) lfirst(l);
+
+				appendStringInfoString(buf, sep);
+				get_rule_groupingset(grp, query->targetList, true, context);
+				sep = ", ";
+			}
 		}
 	}
 
@@ -4640,7 +4658,7 @@ get_target_list(List *targetList, deparse_context *context,
 		 * different from a whole-row Var).  We need to call get_variable
 		 * directly so that we can tell it to do the right thing.
 		 */
-		if (tle->expr && IsA(tle->expr, Var))
+		if (tle->expr && (IsA(tle->expr, Var) || IsA(tle->expr, GroupedVar)))
 		{
 			attname = get_variable((Var *) tle->expr, 0, true, context);
 		}
@@ -4859,14 +4877,14 @@ get_setop_query(Node *setOp, Query *query, deparse_context *context,
  * Also returns the expression tree, so caller need not find it again.
  */
 static Node *
-get_rule_sortgroupclause(SortGroupClause *srt, List *tlist, bool force_colno,
+get_rule_sortgroupclause(Index ref, List *tlist, bool force_colno,
 						 deparse_context *context)
 {
 	StringInfo	buf = context->buf;
 	TargetEntry *tle;
 	Node	   *expr;
 
-	tle = get_sortgroupclause_tle(srt, tlist);
+	tle = get_sortgroupref_tle(ref, tlist);
 	expr = (Node *) tle->expr;
 
 	/*
@@ -4891,6 +4909,66 @@ get_rule_sortgroupclause(SortGroupClause *srt, List *tlist, bool force_colno,
 }
 
 /*
+ * Display a GroupingSet
+ */
+static void
+get_rule_groupingset(GroupingSet *gset, List *targetlist,
+					 bool omit_parens, deparse_context *context)
+{
+	ListCell   *l;
+	StringInfo	buf = context->buf;
+	bool		omit_child_parens = true;
+	char	   *sep = "";
+
+	switch (gset->kind)
+	{
+		case GROUPING_SET_EMPTY:
+			appendStringInfoString(buf, "()");
+			return;
+
+		case GROUPING_SET_SIMPLE:
+			{
+				if (!omit_parens || list_length(gset->content) != 1)
+					appendStringInfoString(buf, "(");
+
+				foreach(l, gset->content)
+				{
+					Index ref = lfirst_int(l);
+
+					appendStringInfoString(buf, sep);
+					get_rule_sortgroupclause(ref, targetlist,
+											 false, context);
+					sep = ", ";
+				}
+
+				if (!omit_parens || list_length(gset->content) != 1)
+					appendStringInfoString(buf, ")");
+			}
+			return;
+
+		case GROUPING_SET_ROLLUP:
+			appendStringInfoString(buf, "ROLLUP(");
+			break;
+		case GROUPING_SET_CUBE:
+			appendStringInfoString(buf, "CUBE(");
+			break;
+		case GROUPING_SET_SETS:
+			appendStringInfoString(buf, "GROUPING SETS (");
+			omit_child_parens = false;
+			break;
+	}
+
+	foreach(l, gset->content)
+	{
+		appendStringInfoString(buf, sep);
+		get_rule_groupingset(lfirst(l), targetlist, omit_child_parens, context);
+		sep = ", ";
+	}
+
+	appendStringInfoString(buf, ")");
+}
+
+/*
  * Display an ORDER BY list.
  */
 static void
@@ -4910,7 +4988,7 @@ get_rule_orderby(List *orderList, List *targetList,
 		TypeCacheEntry *typentry;
 
 		appendStringInfoString(buf, sep);
-		sortexpr = get_rule_sortgroupclause(srt, targetList,
+		sortexpr = get_rule_sortgroupclause(srt->tleSortGroupRef, targetList,
 											force_colno, context);
 		sortcoltype = exprType(sortexpr);
 		/* See whether operator is default < or > for datatype */
@@ -5010,7 +5088,7 @@ get_rule_windowspec(WindowClause *wc, List *targetList,
 			SortGroupClause *grp = (SortGroupClause *) lfirst(l);
 
 			appendStringInfoString(buf, sep);
-			get_rule_sortgroupclause(grp, targetList,
+			get_rule_sortgroupclause(grp->tleSortGroupRef, targetList,
 									 false, context);
 			sep = ", ";
 		}
@@ -5559,10 +5637,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		pop_child_plan(dpns, &save_dpns);
@@ -5584,10 +5662,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		pop_child_plan(dpns, &save_dpns);
@@ -5607,10 +5685,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		return NULL;
@@ -5650,10 +5728,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		pop_child_plan(dpns, &save_dpns);
@@ -6684,6 +6762,10 @@ get_rule_expr(Node *node, deparse_context *context,
 			(void) get_variable((Var *) node, 0, false, context);
 			break;
 
+		case T_GroupedVar:
+			(void) get_variable((Var *) node, 0, false, context);
+			break;
+
 		case T_Const:
 			get_const_expr((Const *) node, context, 0);
 			break;
@@ -7580,6 +7662,16 @@ get_rule_expr(Node *node, deparse_context *context,
 			}
 			break;
 
+		case T_Grouping:
+			{
+				Grouping *gexpr = (Grouping *) node;
+
+				appendStringInfoString(buf, "GROUPING(");
+				get_rule_expr((Node *) gexpr->args, context, true);
+				appendStringInfoChar(buf, ')');
+			}
+			break;
+
 		case T_List:
 			{
 				char	   *sep;
diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index e932ccf..c769e83 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -3158,6 +3158,8 @@ add_unique_group_var(PlannerInfo *root, List *varinfos,
  *	groupExprs - list of expressions being grouped by
  *	input_rows - number of rows estimated to arrive at the group/unique
  *		filter step
+ *	pgset - NULL, or a List** pointing to a grouping set to filter the
+ *		groupExprs against
  *
  * Given the lack of any cross-correlation statistics in the system, it's
  * impossible to do anything really trustworthy with GROUP BY conditions
@@ -3205,11 +3207,13 @@ add_unique_group_var(PlannerInfo *root, List *varinfos,
  * but we don't have the info to do better).
  */
 double
-estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
+estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows,
+					List **pgset)
 {
 	List	   *varinfos = NIL;
 	double		numdistinct;
 	ListCell   *l;
+	int			i;
 
 	/*
 	 * We don't ever want to return an estimate of zero groups, as that tends
@@ -3224,7 +3228,7 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 	 * for normal cases with GROUP BY or DISTINCT, but it is possible for
 	 * corner cases with set operations.)
 	 */
-	if (groupExprs == NIL)
+	if (groupExprs == NIL || (pgset && list_length(*pgset) < 1))
 		return 1.0;
 
 	/*
@@ -3236,6 +3240,7 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 	 */
 	numdistinct = 1.0;
 
+	i = 0;
 	foreach(l, groupExprs)
 	{
 		Node	   *groupexpr = (Node *) lfirst(l);
@@ -3243,6 +3248,10 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 		List	   *varshere;
 		ListCell   *l2;
 
+		/* is expression in this grouping set? */
+		if (pgset && !list_member_int(*pgset, i++))
+			continue;
+
 		/* Short-circuit for expressions returning boolean */
 		if (exprType(groupexpr) == BOOLOID)
 		{
diff --git a/src/include/commands/explain.h b/src/include/commands/explain.h
index 3488be3..aca22aa 100644
--- a/src/include/commands/explain.h
+++ b/src/include/commands/explain.h
@@ -81,6 +81,8 @@ extern void ExplainSeparatePlans(ExplainState *es);
 
 extern void ExplainPropertyList(const char *qlabel, List *data,
 					ExplainState *es);
+extern void ExplainPropertyListNested(const char *qlabel, List *data,
+					ExplainState *es);
 extern void ExplainPropertyText(const char *qlabel, const char *value,
 					ExplainState *es);
 extern void ExplainPropertyInteger(const char *qlabel, int value,
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index b271f21..ee1fe74 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -130,6 +130,8 @@ typedef struct ExprContext
 	Datum	   *ecxt_aggvalues; /* precomputed values for aggs/windowfuncs */
 	bool	   *ecxt_aggnulls;	/* null flags for aggs/windowfuncs */
 
+	Bitmapset  *grouped_cols;   /* which columns exist in current grouping set */
+
 	/* Value to substitute for CaseTestExpr nodes in expression */
 	Datum		caseValue_datum;
 	bool		caseValue_isNull;
@@ -911,6 +913,16 @@ typedef struct MinMaxExprState
 } MinMaxExprState;
 
 /* ----------------
+ *		GroupingState node
+ * ----------------
+ */
+typedef struct GroupingState
+{
+	ExprState	xprstate;
+	List        *clauses;
+} GroupingState;
+
+/* ----------------
  *		XmlExprState node
  * ----------------
  */
@@ -1701,19 +1713,26 @@ typedef struct GroupState
 /* these structs are private in nodeAgg.c: */
 typedef struct AggStatePerAggData *AggStatePerAgg;
 typedef struct AggStatePerGroupData *AggStatePerGroup;
+typedef struct AggStatePerGroupingSetData *AggStatePerGroupingSet;
 
 typedef struct AggState
 {
 	ScanState	ss;				/* its first field is NodeTag */
 	List	   *aggs;			/* all Aggref nodes in targetlist & quals */
 	int			numaggs;		/* length of list (could be zero!) */
+	int			numsets;		/* number of grouping sets (or 0) */
 	FmgrInfo   *eqfunctions;	/* per-grouping-field equality fns */
 	FmgrInfo   *hashfunctions;	/* per-grouping-field hash fns */
 	AggStatePerAgg peragg;		/* per-Aggref information */
-	MemoryContext aggcontext;	/* memory context for long-lived data */
+	ExprContext **aggcontext;	/* econtexts for long-lived data */
 	ExprContext *tmpcontext;	/* econtext for input expressions */
 	AggStatePerAgg curperagg;	/* identifies currently active aggregate */
+	bool        input_done;     /* indicates end of input */
 	bool		agg_done;		/* indicates completion of Agg scan */
+	int			projected_set;	/* the last projected grouping set */
+	int			current_set;	/* the grouping set now being evaluated */
+	Bitmapset **grouped_cols;   /* column groupings for rollup */
+	int        *gset_lengths;	/* lengths of grouping sets */
 	/* these fields are used in AGG_PLAIN and AGG_SORTED modes: */
 	AggStatePerGroup pergroup;	/* per-Aggref-per-group working state */
 	HeapTuple	grp_firstTuple; /* copy of first tuple of current group */
diff --git a/src/include/nodes/makefuncs.h b/src/include/nodes/makefuncs.h
index e108b85..bd3b2a5 100644
--- a/src/include/nodes/makefuncs.h
+++ b/src/include/nodes/makefuncs.h
@@ -81,4 +81,6 @@ extern DefElem *makeDefElem(char *name, Node *arg);
 extern DefElem *makeDefElemExtended(char *nameSpace, char *name, Node *arg,
 					DefElemAction defaction);
 
+extern GroupingSet *makeGroupingSet(GroupingSetKind kind, List *content, int location);
+
 #endif   /* MAKEFUNC_H */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 154d943..f1c41a1 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -115,6 +115,7 @@ typedef enum NodeTag
 	T_SortState,
 	T_GroupState,
 	T_AggState,
+	T_GroupingState,
 	T_WindowAggState,
 	T_UniqueState,
 	T_HashState,
@@ -171,6 +172,9 @@ typedef enum NodeTag
 	T_JoinExpr,
 	T_FromExpr,
 	T_IntoClause,
+	T_GroupedVar,
+	T_Grouping,
+	T_GroupingSet,
 
 	/*
 	 * TAGS FOR EXPRESSION STATE NODES (execnodes.h)
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index f3aa69e..79ecb8a 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -135,6 +135,8 @@ typedef struct Query
 
 	List	   *groupClause;	/* a list of SortGroupClause's */
 
+	List	   *groupingSets;	/* a list of grouping sets if present */
+
 	Node	   *havingQual;		/* qualifications applied to groups */
 
 	List	   *windowClause;	/* a list of WindowClause's */
diff --git a/src/include/nodes/pg_list.h b/src/include/nodes/pg_list.h
index c545115..45eacda 100644
--- a/src/include/nodes/pg_list.h
+++ b/src/include/nodes/pg_list.h
@@ -229,8 +229,9 @@ extern List *list_union_int(const List *list1, const List *list2);
 extern List *list_union_oid(const List *list1, const List *list2);
 
 extern List *list_intersection(const List *list1, const List *list2);
+extern List *list_intersection_int(const List *list1, const List *list2);
 
-/* currently, there's no need for list_intersection_int etc */
+/* currently, there's no need for list_intersection_ptr etc */
 
 extern List *list_difference(const List *list1, const List *list2);
 extern List *list_difference_ptr(const List *list1, const List *list2);
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index 1839494..28173ab 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -634,6 +634,7 @@ typedef struct Agg
 	AttrNumber *grpColIdx;		/* their indexes in the target list */
 	Oid		   *grpOperators;	/* equality operators to compare with */
 	long		numGroups;		/* estimated number of groups in input */
+	List	   *groupingSets;	/* grouping sets to use */
 } Agg;
 
 /* ----------------
diff --git a/src/include/nodes/primnodes.h b/src/include/nodes/primnodes.h
index 6d9f3d9..4c03e40 100644
--- a/src/include/nodes/primnodes.h
+++ b/src/include/nodes/primnodes.h
@@ -159,6 +159,28 @@ typedef struct Var
 	int			location;		/* token location, or -1 if unknown */
 } Var;
 
+/* GroupedVar - expression node representing a grouping set column.
+ * It is structurally identical to Var; it provides a logical
+ * representation of a grouping set column and is also used when
+ * projecting rows during execution of a query with grouping sets.
+ */
+
+typedef Var GroupedVar;
+
+/*
+ * Grouping
+ */
+typedef struct Grouping
+{
+	Expr		xpr;
+	List	   *args;			/* arguments, not evaluated but kept for
+								 * benefit of EXPLAIN etc. */
+	List	   *refs;			/* ressortgrouprefs of arguments */
+	List	   *cols;			/* actual column positions set by planner */
+	int			location;		/* token location */
+	Index		agglevelsup;	/* same as Aggref.agglevelsup */
+} Grouping;
+
 /*
  * Const
  */
@@ -1147,6 +1169,32 @@ typedef struct CurrentOfExpr
 	int			cursor_param;	/* refcursor parameter number, or 0 */
 } CurrentOfExpr;
 
+/*
+ * Node representing substructure in GROUPING SETS
+ *
+ * This is not actually executable, but it's used in the raw parsetree
+ * representation of GROUP BY, and in the groupingSets field of Query, to
+ * preserve the original structure of rollup/cube clauses for readability
+ * rather than reducing everything to grouping sets.
+ */
+
+typedef enum
+{
+	GROUPING_SET_EMPTY,
+	GROUPING_SET_SIMPLE,
+	GROUPING_SET_ROLLUP,
+	GROUPING_SET_CUBE,
+	GROUPING_SET_SETS
+} GroupingSetKind;
+
+typedef struct GroupingSet
+{
+	Expr		xpr;
+	GroupingSetKind kind;
+	List	   *content;
+	int			location;
+} GroupingSet;
+
 /*--------------------
  * TargetEntry -
  *	   a target entry (used in query target lists)
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index f1a0504..73baa8c 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -259,6 +259,11 @@ typedef struct PlannerInfo
 
 	/* optional private data for join_search_hook, e.g., GEQO */
 	void	   *join_search_private;
+
+	/* for GroupedVar fixup in setrefs */
+	AttrNumber *groupColIdx;
+	/* for Grouping fixup in setrefs */
+	AttrNumber *grouping_map;
 } PlannerInfo;
 
 
diff --git a/src/include/optimizer/planmain.h b/src/include/optimizer/planmain.h
index 3fdc2cb..c4c0004 100644
--- a/src/include/optimizer/planmain.h
+++ b/src/include/optimizer/planmain.h
@@ -58,6 +58,7 @@ extern Sort *make_sort_from_groupcols(PlannerInfo *root, List *groupcls,
 extern Agg *make_agg(PlannerInfo *root, List *tlist, List *qual,
 		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
 		 int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators,
+		 List *groupingSets,
 		 long numGroups,
 		 Plan *lefttree);
 extern WindowAgg *make_windowagg(PlannerInfo *root, List *tlist,
diff --git a/src/include/optimizer/tlist.h b/src/include/optimizer/tlist.h
index 1ebb635..c8b1c93 100644
--- a/src/include/optimizer/tlist.h
+++ b/src/include/optimizer/tlist.h
@@ -43,6 +43,9 @@ extern Node *get_sortgroupclause_expr(SortGroupClause *sgClause,
 extern List *get_sortgrouplist_exprs(List *sgClauses,
 						List *targetList);
 
+extern SortGroupClause *get_sortgroupref_clause(Index sortref,
+					 List *clauses);
+
 extern Oid *extract_grouping_ops(List *groupClause);
 extern AttrNumber *extract_grouping_cols(List *groupClause, List *tlist);
 extern bool grouping_is_sortable(List *groupClause);
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index 3c8c1b9..fe42789 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -98,6 +98,7 @@ PG_KEYWORD("cost", COST, UNRESERVED_KEYWORD)
 PG_KEYWORD("create", CREATE, RESERVED_KEYWORD)
 PG_KEYWORD("cross", CROSS, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("csv", CSV, UNRESERVED_KEYWORD)
+PG_KEYWORD("cube", CUBE, COL_NAME_KEYWORD)
 PG_KEYWORD("current", CURRENT_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("current_catalog", CURRENT_CATALOG, RESERVED_KEYWORD)
 PG_KEYWORD("current_date", CURRENT_DATE, RESERVED_KEYWORD)
@@ -173,6 +174,7 @@ PG_KEYWORD("grant", GRANT, RESERVED_KEYWORD)
 PG_KEYWORD("granted", GRANTED, UNRESERVED_KEYWORD)
 PG_KEYWORD("greatest", GREATEST, COL_NAME_KEYWORD)
 PG_KEYWORD("group", GROUP_P, RESERVED_KEYWORD)
+PG_KEYWORD("grouping", GROUPING, COL_NAME_KEYWORD)
 PG_KEYWORD("handler", HANDLER, UNRESERVED_KEYWORD)
 PG_KEYWORD("having", HAVING, RESERVED_KEYWORD)
 PG_KEYWORD("header", HEADER_P, UNRESERVED_KEYWORD)
@@ -323,6 +325,7 @@ PG_KEYWORD("revoke", REVOKE, UNRESERVED_KEYWORD)
 PG_KEYWORD("right", RIGHT, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("role", ROLE, UNRESERVED_KEYWORD)
 PG_KEYWORD("rollback", ROLLBACK, UNRESERVED_KEYWORD)
+PG_KEYWORD("rollup", ROLLUP, COL_NAME_KEYWORD)
 PG_KEYWORD("row", ROW, COL_NAME_KEYWORD)
 PG_KEYWORD("rows", ROWS, UNRESERVED_KEYWORD)
 PG_KEYWORD("rule", RULE, UNRESERVED_KEYWORD)
@@ -341,6 +344,7 @@ PG_KEYWORD("session", SESSION, UNRESERVED_KEYWORD)
 PG_KEYWORD("session_user", SESSION_USER, RESERVED_KEYWORD)
 PG_KEYWORD("set", SET, UNRESERVED_KEYWORD)
 PG_KEYWORD("setof", SETOF, COL_NAME_KEYWORD)
+PG_KEYWORD("sets", SETS, UNRESERVED_KEYWORD)
 PG_KEYWORD("share", SHARE, UNRESERVED_KEYWORD)
 PG_KEYWORD("show", SHOW, UNRESERVED_KEYWORD)
 PG_KEYWORD("similar", SIMILAR, TYPE_FUNC_NAME_KEYWORD)
diff --git a/src/include/parser/parse_agg.h b/src/include/parser/parse_agg.h
index 3f55ec7..f0607fb 100644
--- a/src/include/parser/parse_agg.h
+++ b/src/include/parser/parse_agg.h
@@ -18,11 +18,16 @@
 extern void transformAggregateCall(ParseState *pstate, Aggref *agg,
 					   List *args, List *aggorder,
 					   bool agg_distinct);
+
+extern Node *transformGroupingExpr(ParseState *pstate, Grouping *g);
+
 extern void transformWindowFuncCall(ParseState *pstate, WindowFunc *wfunc,
 						WindowDef *windef);
 
 extern void parseCheckAggregates(ParseState *pstate, Query *qry);
 
+extern List *expand_grouping_sets(List *groupingSets, int limit);
+
 extern int	get_aggregate_argtypes(Aggref *aggref, Oid *inputTypes);
 
 extern Oid resolve_aggregate_transtype(Oid aggfuncid,
diff --git a/src/include/parser/parse_clause.h b/src/include/parser/parse_clause.h
index e9e7cdc..58d88f0 100644
--- a/src/include/parser/parse_clause.h
+++ b/src/include/parser/parse_clause.h
@@ -27,6 +27,7 @@ extern Node *transformWhereClause(ParseState *pstate, Node *clause,
 extern Node *transformLimitClause(ParseState *pstate, Node *clause,
 					 ParseExprKind exprKind, const char *constructName);
 extern List *transformGroupClause(ParseState *pstate, List *grouplist,
+								  List **groupingSets,
 					 List **targetlist, List *sortClause,
 					 ParseExprKind exprKind, bool useSQL99);
 extern List *transformSortClause(ParseState *pstate, List *orderlist,
diff --git a/src/include/utils/selfuncs.h b/src/include/utils/selfuncs.h
index 0f662ec..9d9c9b3 100644
--- a/src/include/utils/selfuncs.h
+++ b/src/include/utils/selfuncs.h
@@ -185,7 +185,7 @@ extern void mergejoinscansel(PlannerInfo *root, Node *clause,
 				 Selectivity *rightstart, Selectivity *rightend);
 
 extern double estimate_num_groups(PlannerInfo *root, List *groupExprs,
-					double input_rows);
+								  double input_rows, List **pgset);
 
 extern Selectivity estimate_hash_bucketsize(PlannerInfo *root, Node *hashkey,
 						 double nbuckets);
diff --git a/src/test/regress/expected/groupingsets.out b/src/test/regress/expected/groupingsets.out
new file mode 100644
index 0000000..2d121c7
--- /dev/null
+++ b/src/test/regress/expected/groupingsets.out
@@ -0,0 +1,361 @@
+--
+-- grouping sets
+--
+-- test data sources
+create temp view gstest1(a,b,v)
+  as values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),
+            (2,3,15),
+            (3,3,16),(3,4,17),
+            (4,1,18),(4,1,19);
+create temp table gstest2 (a integer, b integer, c integer, d integer,
+                           e integer, f integer, g integer, h integer);
+copy gstest2 from stdin;
+create temp table gstest_empty (a integer, b integer, v integer);
+create function gstest_data(v integer, out a integer, out b integer)
+  returns setof record
+  as $f$
+    begin
+      return query select v, i from generate_series(1,3) i;
+    end;
+  $f$ language plpgsql;
+-- basic functionality
+-- simple rollup with multiple plain aggregates, with and without ordering
+-- (and with ordering differing from grouping)
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b);
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 | 1 |        0 |  21 |     2 |  11
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 1 |   |        1 |  60 |     5 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 2 |   |        1 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+ 3 |   |        1 |  33 |     2 |  17
+ 4 | 1 |        0 |  37 |     2 |  19
+ 4 |   |        1 |  37 |     2 |  19
+   |   |        3 | 145 |    10 |  19
+(12 rows)
+
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by a,b;
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 | 1 |        0 |  21 |     2 |  11
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 1 |   |        1 |  60 |     5 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 2 |   |        1 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+ 3 |   |        1 |  33 |     2 |  17
+ 4 | 1 |        0 |  37 |     2 |  19
+ 4 |   |        1 |  37 |     2 |  19
+   |   |        3 | 145 |    10 |  19
+(12 rows)
+
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by b desc, a;
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 |   |        1 |  60 |     5 |  14
+ 2 |   |        1 |  15 |     1 |  15
+ 3 |   |        1 |  33 |     2 |  17
+ 4 |   |        1 |  37 |     2 |  19
+   |   |        3 | 145 |    10 |  19
+ 3 | 4 |        0 |  17 |     1 |  17
+ 1 | 3 |        0 |  14 |     1 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 1 |        0 |  21 |     2 |  11
+ 4 | 1 |        0 |  37 |     2 |  19
+(12 rows)
+
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by coalesce(a,0)+coalesce(b,0);
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+   |   |        3 | 145 |    10 |  19
+ 1 |   |        1 |  60 |     5 |  14
+ 1 | 1 |        0 |  21 |     2 |  11
+ 2 |   |        1 |  15 |     1 |  15
+ 3 |   |        1 |  33 |     2 |  17
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 4 |   |        1 |  37 |     2 |  19
+ 4 | 1 |        0 |  37 |     2 |  19
+ 2 | 3 |        0 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+(12 rows)
+
+-- various types of ordered aggs
+select a, b, grouping(a,b),
+       array_agg(v order by v),
+       string_agg(v::text, ':' order by v desc),
+       percentile_disc(0.5) within group (order by v),
+       rank(1,2,12) within group (order by a,b,v)
+  from gstest1 group by rollup (a,b) order by a,b;
+ a | b | grouping |            array_agg            |          string_agg           | percentile_disc | rank 
+---+---+----------+---------------------------------+-------------------------------+-----------------+------
+ 1 | 1 |        0 | {10,11}                         | 11:10                         |              10 |    3
+ 1 | 2 |        0 | {12,13}                         | 13:12                         |              12 |    1
+ 1 | 3 |        0 | {14}                            | 14                            |              14 |    1
+ 1 |   |        1 | {10,11,12,13,14}                | 14:13:12:11:10                |              12 |    3
+ 2 | 3 |        0 | {15}                            | 15                            |              15 |    1
+ 2 |   |        1 | {15}                            | 15                            |              15 |    1
+ 3 | 3 |        0 | {16}                            | 16                            |              16 |    1
+ 3 | 4 |        0 | {17}                            | 17                            |              17 |    1
+ 3 |   |        1 | {16,17}                         | 17:16                         |              16 |    1
+ 4 | 1 |        0 | {18,19}                         | 19:18                         |              18 |    1
+ 4 |   |        1 | {18,19}                         | 19:18                         |              18 |    1
+   |   |        3 | {10,11,12,13,14,15,16,17,18,19} | 19:18:17:16:15:14:13:12:11:10 |              14 |    3
+(12 rows)
+
+-- nesting with window functions
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by rollup (a,b) order by rsum, a, b;
+ a | b | sum | rsum 
+---+---+-----+------
+ 1 | 1 |   8 |    8
+ 1 | 2 |   2 |   10
+ 1 |   |  10 |   20
+ 2 | 2 |   2 |   22
+ 2 |   |   2 |   24
+   |   |  12 |   36
+(6 rows)
+
+-- empty input: first is 0 rows, second 1, third 3 etc.
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),a);
+ a | b | sum | count 
+---+---+-----+-------
+(0 rows)
+
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),());
+ a | b | sum | count 
+---+---+-----+-------
+   |   |     |     0
+(1 row)
+
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),(),(),());
+ a | b | sum | count 
+---+---+-----+-------
+   |   |     |     0
+   |   |     |     0
+   |   |     |     0
+(3 rows)
+
+select sum(v), count(*) from gstest_empty group by grouping sets ((),(),());
+ sum | count 
+-----+-------
+     |     0
+     |     0
+     |     0
+(3 rows)
+
+-- empty input with joins tests some important code paths
+select t1.a, t2.b, sum(t1.v), count(*) from gstest_empty t1, gstest_empty t2
+ group by grouping sets ((t1.a,t2.b),());
+ a | b | sum | count 
+---+---+-----+-------
+   |   |     |     0
+(1 row)
+
+-- simple joins, var resolution, GROUPING on join vars
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1, gstest2 t2
+ group by grouping sets ((t1.a, t2.b), ());
+ a | b | grouping | sum  | max 
+---+---+----------+------+-----
+ 1 | 1 |        0 |  420 |   1
+ 1 | 2 |        0 |  120 |   2
+ 2 | 1 |        0 |  105 |   1
+ 2 | 2 |        0 |   30 |   2
+ 3 | 1 |        0 |  231 |   1
+ 3 | 2 |        0 |   66 |   2
+ 4 | 1 |        0 |  259 |   1
+ 4 | 2 |        0 |   74 |   2
+   |   |        3 | 1305 |   2
+(9 rows)
+
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1 join gstest2 t2 on (t1.a=t2.a)
+ group by grouping sets ((t1.a, t2.b), ());
+ a | b | grouping | sum | max 
+---+---+----------+-----+-----
+ 1 | 1 |        0 | 420 |   1
+ 1 | 2 |        0 |  60 |   1
+ 2 | 2 |        0 |  15 |   2
+   |   |        3 | 495 |   2
+(4 rows)
+
+select a, b, grouping(a, b), sum(t1.v), max(t2.c)
+  from gstest1 t1 join gstest2 t2 using (a,b)
+ group by grouping sets ((a, b), ());
+ a | b | grouping | sum | max 
+---+---+----------+-----+-----
+ 1 | 1 |        0 | 147 |   2
+ 1 | 2 |        0 |  25 |   2
+   |   |        3 | 172 |   2
+(3 rows)
+
+-- simple rescan tests
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by rollup (a,b);
+ a | b | sum 
+---+---+-----
+ 1 | 1 |   1
+ 1 | 2 |   1
+ 1 | 3 |   1
+ 1 |   |   3
+ 2 | 1 |   2
+ 2 | 2 |   2
+ 2 | 3 |   2
+ 2 |   |   6
+   |   |   9
+(9 rows)
+
+select *
+  from (values (1),(2)) v(x),
+       lateral (select a, b, sum(v.x) from gstest_data(v.x) group by rollup (a,b)) s;
+ERROR:  aggregate functions are not allowed in FROM clause of their own query level
+LINE 3:        lateral (select a, b, sum(v.x) from gstest_data(v.x) ...
+                                     ^
+-- min max optimisation should still work with GROUP BY ()
+explain (costs off)
+  select min(unique1) from tenk1 GROUP BY ();
+                         QUERY PLAN                         
+------------------------------------------------------------
+ Result
+   InitPlan 1 (returns $0)
+     ->  Limit
+           ->  Index Only Scan using tenk1_unique1 on tenk1
+                 Index Cond: (unique1 IS NOT NULL)
+(5 rows)
+
+-- Views with GROUPING SET queries
+CREATE VIEW gstest_view AS select a, b, grouping(a,b), sum(c), count(*), max(c)
+  from gstest2 group by rollup ((a,b,c),(c,d));
+NOTICE:  view "gstest_view" will be a temporary view
+select pg_get_viewdef('gstest_view'::regclass, true);
+                                pg_get_viewdef                                 
+-------------------------------------------------------------------------------
+  SELECT gstest2.a,                                                           +
+     gstest2.b,                                                               +
+     GROUPING(gstest2.a, gstest2.b) AS "grouping",                            +
+     sum(gstest2.c) AS sum,                                                   +
+     count(*) AS count,                                                       +
+     max(gstest2.c) AS max                                                    +
+    FROM gstest2                                                              +
+   GROUP BY ROLLUP((gstest2.a, gstest2.b, gstest2.c), (gstest2.c, gstest2.d));
+(1 row)
+
+-- Nested queries with 3 or more levels of nesting
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+ grouping 
+----------
+        0
+        0
+        0
+(3 rows)
+
+select(select (select grouping(e,f) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+ grouping 
+----------
+        0
+        1
+        3
+(3 rows)
+
+select(select (select grouping(c) from (values (1)) v2(c) GROUP BY c) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+ grouping 
+----------
+        0
+        0
+        0
+(3 rows)
+
+-- Combinations of operations
+select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
+ a | b 
+---+---
+ 1 | 2
+ 2 | 3
+(2 rows)
+
+-- Agg level check. This query should error out.
+select (select grouping(a,b) from gstest2) from gstest2 group by a,b;
+ERROR:  Arguments to GROUPING must be grouping expressions of the associated query level
+LINE 1: select (select grouping(a,b) from gstest2) from gstest2 grou...
+                                ^
+-- Nested rollup inside GROUPING SETS
+select a, b, sum(c), count(*) from gstest2 group by grouping sets (rollup(a,b),a);
+ a | b | sum | count 
+---+---+-----+-------
+ 1 | 1 |   8 |     7
+ 1 | 2 |   2 |     1
+ 1 |   |  10 |     8
+ 1 |   |  10 |     8
+ 2 | 2 |   2 |     1
+ 2 |   |   2 |     1
+ 2 |   |   2 |     1
+   |   |  12 |     9
+(8 rows)
+
+-- HAVING queries
+select ten, sum(distinct four) from onek a
+group by grouping sets((ten,four),(ten))
+having exists (select 1 from onek b where sum(distinct a.four) = b.four);
+ ten | sum 
+-----+-----
+   0 |   0
+   0 |   2
+   0 |   2
+   1 |   1
+   1 |   3
+   2 |   0
+   2 |   2
+   2 |   2
+   3 |   1
+   3 |   3
+   4 |   0
+   4 |   2
+   4 |   2
+   5 |   1
+   5 |   3
+   6 |   0
+   6 |   2
+   6 |   2
+   7 |   1
+   7 |   3
+   8 |   0
+   8 |   2
+   8 |   2
+   9 |   1
+   9 |   3
+(25 rows)
+
+-- FILTER queries
+select ten, sum(distinct four) filter (where four::text ~ '123') from onek a
+group by rollup(ten);
+ ten | sum 
+-----+-----
+   0 |    
+   1 |    
+   2 |    
+   3 |    
+   4 |    
+   5 |    
+   6 |    
+   7 |    
+   8 |    
+   9 |    
+     |    
+(11 rows)
+
+-- end
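As an aside for reviewers: the GROUPING(a,b) column in the expected output above follows the spec's bit encoding, where the leftmost argument maps to the most significant bit and a set bit means that argument is not a grouping column of the set that produced the row. A quick Python sketch of that encoding (illustration only, not part of the patch):

```python
def grouping_value(*is_grouped):
    """Spec encoding of GROUPING(...): bit i (leftmost argument is the
    most significant bit) is set when that argument is NOT a grouping
    column of the grouping set that produced the row."""
    value = 0
    for grouped in is_grouped:
        value = (value << 1) | (0 if grouped else 1)
    return value

# ROLLUP (a,b) generates the sets (a,b), (a) and ():
print(grouping_value(True, True))    # row grouped by (a,b) -> 0
print(grouping_value(True, False))   # row grouped by (a)   -> 1
print(grouping_value(False, False))  # grand total row, ()  -> 3
```

These are exactly the 0/1/3 values seen in the rollup test outputs above.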
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index ab6c4e2..e8f1f46 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -83,7 +83,7 @@ test: select_into select_distinct select_distinct_on select_implicit select_havi
 # ----------
 # Another group of parallel tests
 # ----------
-test: privileges security_label collate matview lock replica_identity rowsecurity
+test: privileges security_label collate matview lock replica_identity rowsecurity groupingsets
 
 # ----------
 # Another group of parallel tests
diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule
index 5ed2bf0..2cb09d5 100644
--- a/src/test/regress/serial_schedule
+++ b/src/test/regress/serial_schedule
@@ -84,6 +84,7 @@ test: union
 test: case
 test: join
 test: aggregates
+test: groupingsets
 test: transactions
 ignore: random
 test: random
diff --git a/src/test/regress/sql/groupingsets.sql b/src/test/regress/sql/groupingsets.sql
new file mode 100644
index 0000000..bc571ff
--- /dev/null
+++ b/src/test/regress/sql/groupingsets.sql
@@ -0,0 +1,128 @@
+--
+-- grouping sets
+--
+
+-- test data sources
+
+create temp view gstest1(a,b,v)
+  as values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),
+            (2,3,15),
+            (3,3,16),(3,4,17),
+            (4,1,18),(4,1,19);
+
+create temp table gstest2 (a integer, b integer, c integer, d integer,
+                           e integer, f integer, g integer, h integer);
+copy gstest2 from stdin;
+1	1	1	1	1	1	1	1
+1	1	1	1	1	1	1	2
+1	1	1	1	1	1	2	2
+1	1	1	1	1	2	2	2
+1	1	1	1	2	2	2	2
+1	1	1	2	2	2	2	2
+1	1	2	2	2	2	2	2
+1	2	2	2	2	2	2	2
+2	2	2	2	2	2	2	2
+\.
+
+create temp table gstest_empty (a integer, b integer, v integer);
+
+create function gstest_data(v integer, out a integer, out b integer)
+  returns setof record
+  as $f$
+    begin
+      return query select v, i from generate_series(1,3) i;
+    end;
+  $f$ language plpgsql;
+
+-- basic functionality
+
+-- simple rollup with multiple plain aggregates, with and without ordering
+-- (and with ordering differing from grouping)
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b);
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by a,b;
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by b desc, a;
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by coalesce(a,0)+coalesce(b,0);
+
+-- various types of ordered aggs
+select a, b, grouping(a,b),
+       array_agg(v order by v),
+       string_agg(v::text, ':' order by v desc),
+       percentile_disc(0.5) within group (order by v),
+       rank(1,2,12) within group (order by a,b,v)
+  from gstest1 group by rollup (a,b) order by a,b;
+
+-- nesting with window functions
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by rollup (a,b) order by rsum, a, b;
+
+-- empty input: first is 0 rows, second 1, third 3 etc.
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),a);
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),());
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),(),(),());
+select sum(v), count(*) from gstest_empty group by grouping sets ((),(),());
+
+-- empty input with joins tests some important code paths
+select t1.a, t2.b, sum(t1.v), count(*) from gstest_empty t1, gstest_empty t2
+ group by grouping sets ((t1.a,t2.b),());
+
+-- simple joins, var resolution, GROUPING on join vars
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1, gstest2 t2
+ group by grouping sets ((t1.a, t2.b), ());
+
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1 join gstest2 t2 on (t1.a=t2.a)
+ group by grouping sets ((t1.a, t2.b), ());
+
+select a, b, grouping(a, b), sum(t1.v), max(t2.c)
+  from gstest1 t1 join gstest2 t2 using (a,b)
+ group by grouping sets ((a, b), ());
+
+-- simple rescan tests
+
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by rollup (a,b);
+
+select *
+  from (values (1),(2)) v(x),
+       lateral (select a, b, sum(v.x) from gstest_data(v.x) group by rollup (a,b)) s;
+
+-- min max optimisation should still work with GROUP BY ()
+explain (costs off)
+  select min(unique1) from tenk1 GROUP BY ();
+
+-- Views with GROUPING SET queries
+CREATE VIEW gstest_view AS select a, b, grouping(a,b), sum(c), count(*), max(c)
+  from gstest2 group by rollup ((a,b,c),(c,d));
+
+select pg_get_viewdef('gstest_view'::regclass, true);
+
+-- Nested queries with 3 or more levels of nesting
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+select(select (select grouping(e,f) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+select(select (select grouping(c) from (values (1)) v2(c) GROUP BY c) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+
+-- Combinations of operations
+select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
+
+-- Agg level check. This query should error out.
+select (select grouping(a,b) from gstest2) from gstest2 group by a,b;
+
+-- Nested rollup inside GROUPING SETS
+select a, b, sum(c), count(*) from gstest2 group by grouping sets (rollup(a,b),a);
+
+-- HAVING queries
+select ten, sum(distinct four) from onek a
+group by grouping sets((ten,four),(ten))
+having exists (select 1 from onek b where sum(distinct a.four) = b.four);
+
+-- FILTER queries
+select ten, sum(distinct four) filter (where four::text ~ '123') from onek a
+group by rollup(ten);
+
+-- end
Attachment: gsp2.patch (text/x-patch)
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 0276f45..3ccf713 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -960,6 +960,10 @@ ExplainNode(PlanState *planstate, List *ancestors,
 					pname = "GroupAggregate";
 					strategy = "Sorted";
 					break;
+				case AGG_CHAINED:
+					pname = "ChainAggregate";
+					strategy = "Chained";
+					break;
 				case AGG_HASHED:
 					pname = "HashAggregate";
 					strategy = "Hashed";
diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c
index ad8a3d0..0ac2e70 100644
--- a/src/backend/executor/execUtils.c
+++ b/src/backend/executor/execUtils.c
@@ -151,6 +151,7 @@ CreateExecutorState(void)
 	estate->es_epqTupleSet = NULL;
 	estate->es_epqScanDone = NULL;
 
+	estate->agg_chain_head = NULL;
 	/*
 	 * Return the executor state structure
 	 */
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index beecd36..48567b9 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -326,6 +326,7 @@ static void build_hash_table(AggState *aggstate);
 static AggHashEntry lookup_hash_entry(AggState *aggstate,
 				  TupleTableSlot *inputslot);
 static TupleTableSlot *agg_retrieve_direct(AggState *aggstate);
+static TupleTableSlot *agg_retrieve_chained(AggState *aggstate);
 static void agg_fill_hash_table(AggState *aggstate);
 static TupleTableSlot *agg_retrieve_hash_table(AggState *aggstate);
 static Datum GetAggInitVal(Datum textInitVal, Oid transtype);
@@ -1119,6 +1120,8 @@ lookup_hash_entry(AggState *aggstate, TupleTableSlot *inputslot)
 TupleTableSlot *
 ExecAgg(AggState *node)
 {
+	TupleTableSlot *result;
+
 	/*
 	 * Check to see if we're still projecting out tuples from a previous agg
 	 * tuple (because there is a function-returning-set in the projection
@@ -1126,7 +1129,6 @@ ExecAgg(AggState *node)
 	 */
 	if (node->ss.ps.ps_TupFromTlist)
 	{
-		TupleTableSlot *result;
 		ExprDoneCond isDone;
 
 		result = ExecProject(node->ss.ps.ps_ProjInfo, &isDone);
@@ -1137,22 +1139,45 @@ ExecAgg(AggState *node)
 	}
 
 	/*
-	 * Exit if nothing left to do.  (We must do the ps_TupFromTlist check
-	 * first, because in some cases agg_done gets set before we emit the final
-	 * aggregate tuple, and we have to finish running SRFs for it.)
+	 * (We must do the ps_TupFromTlist check first, because in some cases
+	 * agg_done gets set before we emit the final aggregate tuple, and we have
+	 * to finish running SRFs for it.)
 	 */
-	if (node->agg_done)
-		return NULL;
 
-	/* Dispatch based on strategy */
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+	if (!node->agg_done)
 	{
-		if (!node->table_filled)
-			agg_fill_hash_table(node);
-		return agg_retrieve_hash_table(node);
+		/* Dispatch based on strategy */
+		switch (((Agg *) node->ss.ps.plan)->aggstrategy)
+		{
+			case AGG_HASHED:
+				if (!node->table_filled)
+					agg_fill_hash_table(node);
+				result = agg_retrieve_hash_table(node);
+				break;
+			case AGG_CHAINED:
+				result = agg_retrieve_chained(node);
+				break;
+			default:
+				result = agg_retrieve_direct(node);
+				break;
+		}
+
+		if (!TupIsNull(result))
+			return result;
 	}
-	else
-		return agg_retrieve_direct(node);
+
+	if (!node->chain_done)
+	{
+		Assert(node->chain_tuplestore);
+		result = node->ss.ps.ps_ResultTupleSlot;
+		ExecClearTuple(result);
+		if (tuplestore_gettupleslot(node->chain_tuplestore,
+									true, false, result))
+			return result;
+		node->chain_done = true;
+	}
+
+	return NULL;
 }
 
 /*
@@ -1473,6 +1498,161 @@ agg_retrieve_direct(AggState *aggstate)
 	return NULL;
 }
 
+
+/*
+ * ExecAgg for chained case (pullthrough mode)
+ */
+static TupleTableSlot *
+agg_retrieve_chained(AggState *aggstate)
+{
+	Agg		   *node = (Agg *) aggstate->ss.ps.plan;
+	ExprContext *econtext = aggstate->ss.ps.ps_ExprContext;
+	ExprContext *tmpcontext = aggstate->tmpcontext;
+	Datum	   *aggvalues = econtext->ecxt_aggvalues;
+	bool	   *aggnulls = econtext->ecxt_aggnulls;
+	AggStatePerAgg peragg = aggstate->peragg;
+	AggStatePerGroup pergroup = aggstate->pergroup;
+	TupleTableSlot *outerslot;
+	TupleTableSlot *firstSlot = aggstate->ss.ss_ScanTupleSlot;
+	int			   aggno;
+	int            numGroupingSets = Max(aggstate->numsets, 1);
+	int            currentSet = 0;
+
+	/*
+	 * The invariants here are:
+	 *
+	 *  - when called, we've already projected every result that
+	 * might have been generated by previous rows, and if this is not
+	 * the first row, then grp_firsttuple has the representative input
+	 * row.
+	 *
+	 *  - we must pull the outer plan exactly once and return that tuple. If
+	 * the outer plan ends, we project whatever needs projecting.
+	 */
+
+	outerslot = ExecProcNode(outerPlanState(aggstate));
+
+	/*
+	 * If this is the first row and it's empty, nothing to do.
+	 */
+
+	if (TupIsNull(firstSlot) && TupIsNull(outerslot))
+	{
+		aggstate->agg_done = true;
+		return outerslot;
+	}
+
+	/*
+	 * See if we need to project anything. (We don't need to worry about
+	 * grouping sets of size 0, the planner doesn't give us those.)
+	 */
+
+	econtext->ecxt_outertuple = firstSlot;
+
+	while (!TupIsNull(firstSlot)
+		   && (TupIsNull(outerslot)
+			   || !execTuplesMatch(firstSlot,
+								   outerslot,
+								   aggstate->gset_lengths[currentSet],
+								   node->grpColIdx,
+								   aggstate->eqfunctions,
+								   tmpcontext->ecxt_per_tuple_memory)))
+	{
+		aggstate->current_set = aggstate->projected_set = currentSet;
+
+		for (aggno = 0; aggno < aggstate->numaggs; aggno++)
+		{
+			AggStatePerAgg peraggstate = &peragg[aggno];
+			AggStatePerGroup pergroupstate;
+
+			pergroupstate = &pergroup[aggno + (currentSet * (aggstate->numaggs))];
+
+			if (peraggstate->numSortCols > 0)
+			{
+				if (peraggstate->numInputs == 1)
+					process_ordered_aggregate_single(aggstate,
+													 peraggstate,
+													 pergroupstate);
+				else
+					process_ordered_aggregate_multi(aggstate,
+													peraggstate,
+													pergroupstate);
+			}
+
+			finalize_aggregate(aggstate, peraggstate, pergroupstate,
+							   &aggvalues[aggno], &aggnulls[aggno]);
+		}
+
+		econtext->grouped_cols = aggstate->grouped_cols[currentSet];
+
+		/*
+		 * Check the qual (HAVING clause); if the group does not match, ignore
+		 * it.
+		 */
+		if (ExecQual(aggstate->ss.ps.qual, econtext, false))
+		{
+			/*
+			 * Form a projection tuple using the aggregate results
+			 * and the representative input tuple.
+			 */
+			TupleTableSlot *result;
+			ExprDoneCond isDone;
+
+			do
+			{
+				result = ExecProject(aggstate->ss.ps.ps_ProjInfo, &isDone);
+
+				if (isDone != ExprEndResult)
+				{
+					tuplestore_puttupleslot(aggstate->chain_tuplestore,
+											result);
+				}
+			}
+			while (isDone == ExprMultipleResult);
+		}
+		else
+			InstrCountFiltered1(aggstate, 1);
+
+		ReScanExprContext(tmpcontext);
+		ReScanExprContext(econtext);
+		ReScanExprContext(aggstate->aggcontext[currentSet]);
+		MemoryContextDeleteChildren(aggstate->aggcontext[currentSet]->ecxt_per_tuple_memory);
+		if (++currentSet >= numGroupingSets)
+			break;
+	}
+
+	if (TupIsNull(outerslot))
+	{
+		aggstate->agg_done = true;
+		return NULL;
+	}
+
+	/*
+	 * If this is the first tuple, store it and initialize everything.
+	 * Otherwise re-init any aggregates we projected above.
+	 */
+
+	if (TupIsNull(firstSlot))
+	{
+		ExecCopySlot(firstSlot, outerslot);
+		initialize_aggregates(aggstate, peragg, pergroup, numGroupingSets);
+	}
+	else if (currentSet > 0)
+	{
+		ExecCopySlot(firstSlot, outerslot);
+		initialize_aggregates(aggstate, peragg, pergroup, currentSet);
+	}
+
+	tmpcontext->ecxt_outertuple = outerslot;
+
+	advance_aggregates(aggstate, pergroup);
+
+	/* Reset per-input-tuple context after each tuple */
+	ResetExprContext(tmpcontext);
+
+	return outerslot;
+}
+
 /*
  * ExecAgg for hashed case: phase 1, read input and build hash table
  */
@@ -1640,6 +1820,7 @@ AggState *
 ExecInitAgg(Agg *node, EState *estate, int eflags)
 {
 	AggState   *aggstate;
+	AggState   *save_chain_head = NULL;
 	AggStatePerAgg peragg;
 	Plan	   *outerPlan;
 	ExprContext *econtext;
@@ -1672,9 +1853,14 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	aggstate->curperagg = NULL;
 	aggstate->agg_done = false;
 	aggstate->input_done = false;
+	aggstate->chain_done = true;
 	aggstate->pergroup = NULL;
 	aggstate->grp_firstTuple = NULL;
 	aggstate->hashtable = NULL;
+	aggstate->chain_depth = 0;
+	aggstate->chain_rescan = 0;
+	aggstate->chain_head = NULL;
+	aggstate->chain_tuplestore = NULL;
 
 	if (node->groupingSets)
 	{
@@ -1734,6 +1920,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	ExecInitResultTupleSlot(estate, &aggstate->ss.ps);
 	aggstate->hashslot = ExecInitExtraTupleSlot(estate);
 
+
 	/*
 	 * initialize child expressions
 	 *
@@ -1743,12 +1930,40 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	 * that is true, we don't need to worry about evaluating the aggs in any
 	 * particular order.
 	 */
-	aggstate->ss.ps.targetlist = (List *)
-		ExecInitExpr((Expr *) node->plan.targetlist,
-					 (PlanState *) aggstate);
-	aggstate->ss.ps.qual = (List *)
-		ExecInitExpr((Expr *) node->plan.qual,
-					 (PlanState *) aggstate);
+	if (node->aggstrategy == AGG_CHAINED)
+	{
+		Assert(estate->agg_chain_head);
+
+		aggstate->chain_head = estate->agg_chain_head;
+		aggstate->chain_head->chain_depth++;
+
+		/*
+		 * Snarf the real targetlist and qual from the chain head node
+		 */
+		aggstate->ss.ps.targetlist = (List *)
+			ExecInitExpr((Expr *) aggstate->chain_head->ss.ps.plan->targetlist,
+						 (PlanState *) aggstate);
+		aggstate->ss.ps.qual = (List *)
+			ExecInitExpr((Expr *) aggstate->chain_head->ss.ps.plan->qual,
+						 (PlanState *) aggstate);
+	}
+	else
+	{
+		aggstate->ss.ps.targetlist = (List *)
+			ExecInitExpr((Expr *) node->plan.targetlist,
+						 (PlanState *) aggstate);
+		aggstate->ss.ps.qual = (List *)
+			ExecInitExpr((Expr *) node->plan.qual,
+						 (PlanState *) aggstate);
+	}
+
+	if (node->chain_head)
+	{
+		save_chain_head = estate->agg_chain_head;
+		estate->agg_chain_head = aggstate;
+		aggstate->chain_tuplestore = tuplestore_begin_heap(false, false, work_mem);
+		aggstate->chain_done = false;
+	}
 
 	/*
 	 * initialize child nodes
@@ -1761,6 +1976,11 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	outerPlan = outerPlan(node);
 	outerPlanState(aggstate) = ExecInitNode(outerPlan, estate, eflags);
 
+	if (node->chain_head)
+	{
+		estate->agg_chain_head = save_chain_head;
+	}
+
 	/*
 	 * initialize source tuple type.
 	 */
@@ -1769,8 +1989,35 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	/*
 	 * Initialize result tuple type and projection info.
 	 */
-	ExecAssignResultTypeFromTL(&aggstate->ss.ps);
-	ExecAssignProjectionInfo(&aggstate->ss.ps, NULL);
+	if (node->aggstrategy == AGG_CHAINED)
+	{
+		PlanState  *head_ps = &aggstate->chain_head->ss.ps;
+		bool		hasoid;
+
+		/*
+		 * We must calculate this the same way that the chain head does,
+		 * regardless of intermediate nodes, for consistency
+		 */
+		if (!ExecContextForcesOids(head_ps, &hasoid))
+			hasoid = false;
+
+		ExecAssignResultType(&aggstate->ss.ps, ExecGetScanType(&aggstate->ss));
+		ExecSetSlotDescriptor(aggstate->hashslot,
+							  ExecTypeFromTL(head_ps->plan->targetlist, hasoid));
+		aggstate->ss.ps.ps_ProjInfo =
+			ExecBuildProjectionInfo(aggstate->ss.ps.targetlist,
+									aggstate->ss.ps.ps_ExprContext,
+									aggstate->hashslot,
+									NULL);
+
+		aggstate->chain_tuplestore = aggstate->chain_head->chain_tuplestore;
+		Assert(aggstate->chain_tuplestore);
+	}
+	else
+	{
+		ExecAssignResultTypeFromTL(&aggstate->ss.ps);
+		ExecAssignProjectionInfo(&aggstate->ss.ps, NULL);
+	}
 
 	aggstate->ss.ps.ps_TupFromTlist = false;
 
@@ -2225,6 +2472,9 @@ ExecEndAgg(AggState *node)
 	for (i = 0; i < numGroupingSets; ++i)
 		ReScanExprContext(node->aggcontext[i]);
 
+	if (node->chain_tuplestore && !node->chain_head)
+		tuplestore_end(node->chain_tuplestore);
+
 	/*
 	 * We don't actually free any ExprContexts here (see comment in
 	 * ExecFreeExprContext), just unlinking the output one from the plan node
@@ -2339,11 +2589,54 @@ ExecReScanAgg(AggState *node)
 	}
 
 	/*
-	 * if chgParam of subnode is not null then plan will be re-scanned by
-	 * first ExecProcNode.
+	 * If we're in a chain, let the chain head know that we rescanned.
+	 * (The count is meaningless if the rescan happens via chgParam, but
+	 * the chain head consults it only when rescanning explicitly with an
+	 * empty chgParam.)
+	 */
+
+	if (aggnode->aggstrategy == AGG_CHAINED)
+		node->chain_head->chain_rescan++;
+
+	/*
+	 * If we're a chain head, we reset the tuplestore if parameters changed,
+	 * and let subplans repopulate it.
+	 *
+	 * If we're a chain head and the subplan parameters did NOT change, then
+	 * whether we need to reset the tuplestore depends on whether anything
+	 * (specifically the Sort nodes) protects the child ChainAggs from rescan.
+	 * Since this is hard to know in advance, we have the ChainAggs signal us
+	 * as to whether the reset is needed. (We assume that either all children
+	 * in the chain are protected or none are, since all Sort nodes in the
+	 * chain should have the same flags; if this changes, it would probably
+	 * be necessary to add a signalling param to force child rescan.)
 	 */
-	if (node->ss.ps.lefttree->chgParam == NULL)
+	if (aggnode->chain_head)
+	{
+		if (node->ss.ps.lefttree->chgParam)
+			tuplestore_clear(node->chain_tuplestore);
+		else
+		{
+			node->chain_rescan = 0;
+
+			ExecReScan(node->ss.ps.lefttree);
+
+			if (node->chain_rescan == node->chain_depth)
+				tuplestore_clear(node->chain_tuplestore);
+			else if (node->chain_rescan == 0)
+				tuplestore_rescan(node->chain_tuplestore);
+			else
+				elog(ERROR, "chained aggregate rescan depth error");
+		}
+		node->chain_done = false;
+	}
+	else if (node->ss.ps.lefttree->chgParam == NULL)
+	{
 		ExecReScan(node->ss.ps.lefttree);
+	}
 }
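
[Reviewer note: the rescan decision above can be modeled outside the executor. This is a hedged Python sketch of the chain head's decision table only — the function name and return values are illustrative, not from the patch.]

```python
def chain_head_rescan_action(params_changed, chain_depth, child_rescans):
    """Model of the chain head's tuplestore decision on rescan.

    params_changed: did the subplan's parameters change?
    chain_depth:    number of chained child Agg nodes in the chain
    child_rescans:  how many of them signalled that they were rescanned
    """
    if params_changed:
        # child output may differ; discard stored tuples and repopulate
        return "clear"
    if child_rescans == chain_depth:
        # nothing protected the children from rescan; they will re-run
        return "clear"
    if child_rescans == 0:
        # Sort nodes shielded every child; reuse the stored tuples
        return "rescan"
    # mixed protection should not happen (all Sorts share the same flags)
    raise RuntimeError("chained aggregate rescan depth error")

assert chain_head_rescan_action(True, 3, 0) == "clear"
assert chain_head_rescan_action(False, 3, 3) == "clear"
assert chain_head_rescan_action(False, 3, 0) == "rescan"
```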
 
 
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index cb648f8..71c9554 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -772,6 +772,7 @@ _copyAgg(const Agg *from)
 	CopyPlanFields((const Plan *) from, (Plan *) newnode);
 
 	COPY_SCALAR_FIELD(aggstrategy);
+	COPY_SCALAR_FIELD(chain_head);
 	COPY_SCALAR_FIELD(numCols);
 	if (from->numCols > 0)
 	{
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index a9cdb95..6131fec 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -632,6 +632,7 @@ _outAgg(StringInfo str, const Agg *node)
 	_outPlanInfo(str, (const Plan *) node);
 
 	WRITE_ENUM_FIELD(aggstrategy, AggStrategy);
+	WRITE_BOOL_FIELD(chain_head);
 	WRITE_INT_FIELD(numCols);
 
 	appendStringInfoString(str, " :grpColIdx");
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index 1a47f0f..96ea58f 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -1016,6 +1016,7 @@ create_unique_plan(PlannerInfo *root, UniquePath *best_path)
 								 groupColIdx,
 								 groupOperators,
 								 NIL,
+								 false,
 								 numGroups,
 								 subplan);
 	}
@@ -4266,7 +4267,7 @@ Agg *
 make_agg(PlannerInfo *root, List *tlist, List *qual,
 		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
 		 int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators,
-		 List *groupingSets,
+		 List *groupingSets, bool chain_head,
 		 long numGroups,
 		 Plan *lefttree)
 {
@@ -4276,6 +4277,7 @@ make_agg(PlannerInfo *root, List *tlist, List *qual,
 	QualCost	qual_cost;
 
 	node->aggstrategy = aggstrategy;
+	node->chain_head = chain_head;
 	node->numCols = numGroupCols;
 	node->grpColIdx = grpColIdx;
 	node->grpOperators = grpOperators;
@@ -4320,8 +4322,21 @@ make_agg(PlannerInfo *root, List *tlist, List *qual,
 	}
 	add_tlist_costs_to_plan(root, plan, tlist);
 
-	plan->qual = qual;
-	plan->targetlist = tlist;
+	if (aggstrategy == AGG_CHAINED)
+	{
+		Assert(!chain_head);
+		plan->plan_rows = lefttree->plan_rows;
+		plan->plan_width = lefttree->plan_width;
+
+		/* supplied tlist is ignored; this node's tlist is just a dummy */
+		plan->targetlist = lefttree->targetlist;
+		plan->qual = NULL;
+	}
+	else
+	{
+		plan->qual = qual;
+		plan->targetlist = tlist;
+	}
 	plan->lefttree = lefttree;
 	plan->righttree = NULL;
 
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 2889a35..8fed104 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -16,6 +16,7 @@
 #include "postgres.h"
 
 #include <limits.h>
+#include <math.h>
 
 #include "access/htup_details.h"
 #include "executor/executor.h"
@@ -67,6 +68,7 @@ typedef struct
 {
 	List	   *tlist;			/* preprocessed query targetlist */
 	List	   *activeWindows;	/* active windows, if any */
+	List	   *groupClause;	/* overrides parse->groupClause */
 } standard_qp_extra;
 
 /* Local functions */
@@ -80,7 +82,8 @@ static double preprocess_limit(PlannerInfo *root,
 				 int64 *offset_est, int64 *count_est);
 static bool limit_needed(Query *parse);
 static List *preprocess_groupclause(PlannerInfo *root, List *force);
-static List *extract_rollup_sets(List *groupingSets, List *sortclause, List **remainder);
+static List *extract_rollup_sets(List *groupingSets);
+static List *reorder_grouping_sets(List *groupingSets, List *sortclause);
 static void standard_qp_callback(PlannerInfo *root, void *extra);
 static bool choose_hashed_grouping(PlannerInfo *root,
 					   double tuple_fraction, double limit_tuples,
@@ -1182,11 +1185,6 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		List	   *sub_tlist;
 		AttrNumber *groupColIdx = NULL;
 		bool		need_tlist_eval = true;
-		standard_qp_extra qp_extra;
-		RelOptInfo *final_rel;
-		Path	   *cheapest_path;
-		Path	   *sorted_path;
-		Path	   *best_path;
 		long		numGroups = 0;
 		AggClauseCosts agg_costs;
 		int			numGroupCols;
@@ -1196,7 +1194,14 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		WindowFuncLists *wflists = NULL;
 		List	   *activeWindows = NIL;
 		int			maxref = 0;
-		int		   *refmap = NULL;
+		List	   *refmaps = NIL;
+		List	   *rollup_lists = NIL;
+		List	   *rollup_groupclauses = NIL;
+		standard_qp_extra qp_extra;
+		RelOptInfo *final_rel;
+		Path	   *cheapest_path;
+		Path	   *sorted_path;
+		Path	   *best_path;
 
 		MemSet(&agg_costs, 0, sizeof(AggClauseCosts));
 
@@ -1207,33 +1212,9 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		if (parse->groupingSets)
 			parse->groupingSets = expand_grouping_sets(parse->groupingSets, -1);
 
-		if (parse->groupingSets)
+		if (parse->groupClause)
 		{
 			ListCell   *lc;
-			ListCell   *lc2;
-			int			ref = 0;
-			List	   *remaining_sets = NIL;
-			List	   *usable_sets = extract_rollup_sets(parse->groupingSets,
-														  parse->sortClause,
-														  &remaining_sets);
-
-			/*
-			 * TODO - if the grouping set list can't be handled as one rollup...
-			 */
-
-			if (remaining_sets != NIL)
-				elog(ERROR, "not implemented yet");
-
-			parse->groupingSets = usable_sets;
-
-			if (parse->groupClause)
-				preprocess_groupclause(root, linitial(parse->groupingSets));
-
-			/*
-			 * Now that we've pinned down an order for the groupClause for this
-			 * list of grouping sets, remap the entries in the grouping sets
-			 * from sortgrouprefs to plain indices into the groupClause.
-			 */
 
 			foreach(lc, parse->groupClause)
 			{
@@ -1241,29 +1222,59 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 				if (gc->tleSortGroupRef > maxref)
 					maxref = gc->tleSortGroupRef;
 			}
+		}
 
-			refmap = palloc0(sizeof(int) * (maxref + 1));
+		if (parse->groupingSets)
+		{
+			ListCell   *lc;
+			ListCell   *lc2;
+			ListCell   *lc_set;
+			List	   *sets = extract_rollup_sets(parse->groupingSets);
 
-			foreach(lc, parse->groupClause)
+			foreach(lc_set, sets)
 			{
-				SortGroupClause *gc = lfirst(lc);
-				refmap[gc->tleSortGroupRef] = ++ref;
-			}
+				List   *current_sets = reorder_grouping_sets(lfirst(lc_set),
+													(list_length(sets) == 1
+													 ? parse->sortClause
+													 : NIL));
+				List   *groupclause = preprocess_groupclause(root, linitial(current_sets));
+				int		ref = 0;
+				int	   *refmap;
 
-			foreach(lc, usable_sets)
-			{
-				foreach(lc2, (List *) lfirst(lc))
+				/*
+				 * Now that we've pinned down an order for the groupClause for this
+				 * list of grouping sets, remap the entries in the grouping sets
+				 * from sortgrouprefs to plain indices into the groupClause.
+				 */
+
+				refmap = palloc0(sizeof(int) * (maxref + 1));
+
+				foreach(lc, groupclause)
 				{
-					Assert(refmap[lfirst_int(lc2)] > 0);
-					lfirst_int(lc2) = refmap[lfirst_int(lc2)] - 1;
+					SortGroupClause *gc = lfirst(lc);
+					refmap[gc->tleSortGroupRef] = ++ref;
+				}
+
+				foreach(lc, current_sets)
+				{
+					foreach(lc2, (List *) lfirst(lc))
+					{
+						Assert(refmap[lfirst_int(lc2)] > 0);
+						lfirst_int(lc2) = refmap[lfirst_int(lc2)] - 1;
+					}
 				}
+
+				rollup_lists = lcons(current_sets, rollup_lists);
+				rollup_groupclauses = lcons(groupclause, rollup_groupclauses);
+				refmaps = lcons(refmap, refmaps);
 			}
 		}
 		else
 		{
 			/* Preprocess GROUP BY clause, if any */
 			if (parse->groupClause)
-				preprocess_groupclause(root, NIL);
+				parse->groupClause = preprocess_groupclause(root, NIL);
+			rollup_groupclauses = list_make1(parse->groupClause);
 		}
 
 		numGroupCols = list_length(parse->groupClause);
@@ -1328,9 +1339,6 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 			preprocess_minmax_aggregates(root, tlist);
 		}
 
-		if (refmap)
-			pfree(refmap);
-
 		/* Make tuple_fraction accessible to lower-level routines */
 		root->tuple_fraction = tuple_fraction;
 
@@ -1353,6 +1361,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		/* Set up data needed by standard_qp_callback */
 		qp_extra.tlist = tlist;
 		qp_extra.activeWindows = activeWindows;
+		qp_extra.groupClause = linitial(rollup_groupclauses);
 
 		/*
 		 * Generate the best unsorted and presorted paths for this Query (but
@@ -1414,6 +1424,9 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 			if (tuple_fraction >= 1.0)
 				tuple_fraction /= dNumGroups;
 
+			if (list_length(rollup_lists) > 1)
+				tuple_fraction = 0.0;
+
 			/*
 			 * If both GROUP BY and ORDER BY are specified, we will need two
 			 * levels of sort --- and, therefore, certainly need to read all
@@ -1437,6 +1450,8 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 			 * set to 1).
 			 */
 			tuple_fraction = 0.0;
+			if (parse->groupingSets)
+				dNumGroups = list_length(parse->groupingSets);
 		}
 		else if (parse->distinctClause)
 		{
@@ -1617,7 +1632,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 
 			/* Detect if we'll need an explicit sort for grouping */
 			if (parse->groupClause && !use_hashed_grouping &&
-			  !pathkeys_contained_in(root->group_pathkeys, current_pathkeys))
+				!pathkeys_contained_in(root->group_pathkeys, current_pathkeys))
 			{
 				need_sort_for_grouping = true;
 
@@ -1692,8 +1707,9 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 												&agg_costs,
 												numGroupCols,
 												groupColIdx,
-									extract_grouping_ops(parse->groupClause),
+												extract_grouping_ops(parse->groupClause),
 												NIL,
+												false,
 												numGroups,
 												result_plan);
 				/* Hashed aggregation produces randomly-ordered results */
@@ -1701,45 +1717,94 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 			}
 			else if (parse->hasAggs || (parse->groupingSets && parse->groupClause))
 			{
-				/* Plain aggregate plan --- sort if needed */
-				AggStrategy aggstrategy;
+				bool		is_chained = false;
 
-				if (parse->groupClause)
+				/*
+				 * If we need multiple grouping nodes, start stacking them up;
+				 * all except the last are chained.
+				 */
+
+				do
 				{
-					if (need_sort_for_grouping)
+					List	   *groupClause = linitial(rollup_groupclauses);
+					List	   *gsets = rollup_lists ? linitial(rollup_lists) : NIL;
+					int		   *refmap = refmaps ? linitial(refmaps) : NULL;
+					AttrNumber *new_grpColIdx = groupColIdx;
+					ListCell   *lc;
+					int			i;
+					AggStrategy aggstrategy = AGG_CHAINED;
+
+					if (groupClause)
 					{
-						result_plan = (Plan *)
-							make_sort_from_groupcols(root,
-													 parse->groupClause,
-													 groupColIdx,
-													 result_plan);
-						current_pathkeys = root->group_pathkeys;
+						/* need to remap groupColIdx */
+
+						if (gsets)
+						{
+							Assert(refmap);
+
+							new_grpColIdx = palloc0(sizeof(AttrNumber) * list_length(linitial(gsets)));
+
+							i = 0;
+							foreach(lc, parse->groupClause)
+							{
+								int j = refmap[((SortGroupClause *)lfirst(lc))->tleSortGroupRef];
+								if (j > 0)
+									new_grpColIdx[j - 1] = groupColIdx[i];
+								++i;
+							}
+						}
+
+						if (need_sort_for_grouping)
+						{
+							result_plan = (Plan *)
+								make_sort_from_groupcols(root,
+														 groupClause,
+														 new_grpColIdx,
+														 result_plan);
+						}
+						else
+							need_sort_for_grouping = true;
+
+						if (list_length(rollup_groupclauses) == 1)
+						{
+							aggstrategy = AGG_SORTED;
+							if (!is_chained)
+								current_pathkeys = root->group_pathkeys;
+						}
+						else
+							current_pathkeys = NIL;
+					}
+					else
+					{
+						aggstrategy = AGG_PLAIN;
+						current_pathkeys = NIL;
 					}
-					aggstrategy = AGG_SORTED;
 
-					/*
-					 * The AGG node will not change the sort ordering of its
-					 * groups, so current_pathkeys describes the result too.
-					 */
-				}
-				else
-				{
-					aggstrategy = AGG_PLAIN;
-					/* Result will have no sort order */
-					current_pathkeys = NIL;
+					result_plan = (Plan *) make_agg(root,
+													tlist,
+													(List *) parse->havingQual,
+													aggstrategy,
+													&agg_costs,
+													gsets ? list_length(linitial(gsets)) : numGroupCols,
+													new_grpColIdx,
+													extract_grouping_ops(groupClause),
+													gsets,
+													is_chained && (aggstrategy != AGG_CHAINED),
+													numGroups,
+													result_plan);
+
+					is_chained = true;
+
+					if (refmap)
+						pfree(refmap);
+					if (rollup_lists)
+						rollup_lists = list_delete_first(rollup_lists);
+					if (refmaps)
+						refmaps = list_delete_first(refmaps);
+
+					rollup_groupclauses = list_delete_first(rollup_groupclauses);
 				}
-
-				result_plan = (Plan *) make_agg(root,
-												tlist,
-												(List *) parse->havingQual,
-												aggstrategy,
-												&agg_costs,
-												numGroupCols,
-												groupColIdx,
-									extract_grouping_ops(parse->groupClause),
-												parse->groupingSets,
-												numGroups,
-												result_plan);
+				while (rollup_groupclauses);
 			}
 			else if (parse->groupClause)
 			{
@@ -2034,6 +2099,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 													result_plan->targetlist),
 								 extract_grouping_ops(parse->distinctClause),
 											NIL,
+											false,
 											numDistinctRows,
 											result_plan);
 			/* Hashed aggregation produces randomly-ordered results */
@@ -2755,64 +2821,394 @@ preprocess_groupclause(PlannerInfo *root, List *force)
 
 
 /*
- * Extract a list of grouping sets that can be implemented using a single
- * rollup-type aggregate pass. The order of elements in each returned set is
- * modified to ensure proper prefix relationships; the sets are returned in
- * decreasing order of size. (The input must also be in descending order of
- * size.)
+ * We want to produce the minimum possible number of lists here to avoid
+ * excess sorts. Fortunately, there is an efficient algorithm: the problem of
+ * finding a minimal partition of a poset into chains (which is what we need,
+ * taking the list of grouping sets as a poset ordered by set inclusion) can
+ * be mapped to the problem of finding a maximum-cardinality matching on a
+ * bipartite graph, which is solvable in polynomial time with a worst case
+ * of O(n^2.5) and usually much better. Since our N is at most 4096, we need
+ * not consider fallbacks to heuristic or approximate methods. (Planning time
+ * for a 12-d cube is under half a second on my modest system even with
+ * optimization off and assertions on.)
  *
- * If we're passed in a sortclause, we follow its order of columns to the
- * extent possible, to minimize the chance that we add unnecessary sorts.
+ * We use the Hopcroft-Karp algorithm for the graph matching; it seems to work
+ * well enough for our purposes.
+ *
+ * This implementation uses the same indices for elements of U and V (the two
+ * halves of the graph) because in our case they are always the same size, and
+ * we always know whether an index represents a u or a v. Index 0 is reserved
+ * for the NIL node.
+ */
+
+struct hk_state
+{
+	int			graph_size;		/* size of half the graph plus NIL node */
+	int			matching;
+	short	  **adjacency;		/* adjacency[u] = [n, v1,v2,v3,...,vn] */
+	short	   *pair_uv;		/* pair_uv[u] -> v */
+	short	   *pair_vu;		/* pair_vu[v] -> u */
+	float	   *distance;		/* distance[u], float so we can have +inf */
+	short	   *queue;			/* queue storage for breadth search */
+};
+
+static bool
+hk_breadth_search(struct hk_state *state)
+{
+	int			gsize = state->graph_size;
+	short	   *queue = state->queue;
+	float	   *distance = state->distance;
+	int			qhead = 0;		/* we never enqueue any node more than once */
+	int			qtail = 0;		/* so don't have to worry about wrapping */
+	int			u;
+
+	distance[0] = INFINITY;
+
+	for (u = 1; u < gsize; ++u)
+	{
+		if (state->pair_uv[u] == 0)
+		{
+			distance[u] = 0;
+			queue[qhead++] = u;
+		}
+		else
+			distance[u] = INFINITY;
+	}
+
+	while (qtail < qhead)
+	{
+		u = queue[qtail++];
+
+		if (distance[u] < distance[0])
+		{
+			short  *u_adj = state->adjacency[u];
+			int		i = u_adj ? u_adj[0] : 0;
+
+			for (; i > 0; --i)
+			{
+				int	u_next = state->pair_vu[u_adj[i]];
+
+				if (isinf(distance[u_next]))
+				{
+					distance[u_next] = 1 + distance[u];
+					queue[qhead++] = u_next;
+					Assert(qhead <= gsize+1);
+				}
+			}
+		}
+	}
+
+	return !isinf(distance[0]);
+}
+
+static bool
+hk_depth_search(struct hk_state *state, int u, int depth)
+{
+	float	   *distance = state->distance;
+	short	   *pair_uv = state->pair_uv;
+	short	   *pair_vu = state->pair_vu;
+	short	   *u_adj = state->adjacency[u];
+	int			i = u_adj ? u_adj[0] : 0;
+
+	if (u == 0)
+		return true;
+
+	if ((depth % 8) == 0)
+		check_stack_depth();
+
+	for (; i > 0; --i)
+	{
+		int		v = u_adj[i];
+
+		if (distance[pair_vu[v]] == distance[u] + 1)
+		{
+			if (hk_depth_search(state, pair_vu[v], depth+1))
+			{
+				pair_vu[v] = u;
+				pair_uv[u] = v;
+				return true;
+			}
+		}
+	}
+
+	distance[u] = INFINITY;
+	return false;
+}
+
+static struct hk_state *
+hk_match(int graph_size, short **adjacency)
+{
+	struct hk_state *state = palloc(sizeof(struct hk_state));
+
+	state->graph_size = graph_size;
+	state->matching = 0;
+	state->adjacency = adjacency;
+	state->pair_uv = palloc0(graph_size * sizeof(short));
+	state->pair_vu = palloc0(graph_size * sizeof(short));
+	state->distance = palloc(graph_size * sizeof(float));
+	state->queue = palloc((graph_size + 2) * sizeof(short));
+
+	while (hk_breadth_search(state))
+	{
+		int		u;
+
+		for (u = 1; u < graph_size; ++u)
+			if (state->pair_uv[u] == 0)
+				if (hk_depth_search(state, u, 1))
+					state->matching++;
+
+		CHECK_FOR_INTERRUPTS();		/* just in case */
+	}
+
+	return state;
+}
+
+static void
+hk_free(struct hk_state *state)
+{
+	/* adjacency matrix is treated as owned by the caller */
+	pfree(state->pair_uv);
+	pfree(state->pair_vu);
+	pfree(state->distance);
+	pfree(state->queue);
+	pfree(state);
+}
+
+/*
+ * Extract lists of grouping sets that can be implemented using a single
+ * rollup-type aggregate pass each. Returns a list of lists of grouping sets.
  *
- * Sets that can't be accomodated within a rollup that includes the first
- * (and therefore largest) grouping set in the input are added to the
- * remainder list.
+ * Input must be sorted with smallest sets first. Result has each sublist
+ * sorted with smallest sets first.
  */
 
 static List *
-extract_rollup_sets(List *groupingSets, List *sortclause, List **remainder)
+extract_rollup_sets(List *groupingSets)
 {
-	ListCell   *lc;
-	ListCell   *lc2;
-	List	   *previous = linitial(groupingSets);
-	List	   *tmp_result = list_make1(previous);
+	int			num_sets_raw = list_length(groupingSets);
+	int			num_empty = 0;
+	int			num_sets = 0;		/* distinct sets */
+	int			num_chains = 0;
 	List	   *result = NIL;
+	List	  **results;
+	List	  **orig_sets;
+	Bitmapset **set_masks;
+	int		   *chains;
+	short	  **adjacency;
+	short	   *adjacency_buf;
+	struct hk_state *state;
+	int			i;
+	int			j;
+	int			j_size;
+	ListCell   *lc1 = list_head(groupingSets);
+	ListCell   *lc;
 
-	for_each_cell(lc, lnext(list_head(groupingSets)))
+	/*
+	 * Start by stripping out empty sets.  The algorithm doesn't require this,
+	 * but the planner currently needs all empty sets to be returned in the
+	 * first list, so we strip them here and add them back after.
+	 */
+
+	while (lc1 && lfirst(lc1) == NIL)
 	{
-		List   *candidate = lfirst(lc);
-		bool	ok = true;
+		++num_empty;
+		lc1 = lnext(lc1);
+	}
+
+	/* bail out now if it turns out that all we had were empty sets. */
+
+	if (!lc1)
+		return list_make1(groupingSets);
+
+	/*
+	 * We don't strictly need to remove duplicate sets here, but if we
+	 * don't, they tend to become scattered through the result, which is
+	 * a bit confusing (and irritating if we ever decide to optimize them
+	 * out). So we remove them here and add them back after.
+	 *
+	 * For each non-duplicate set, we fill in the following:
+	 *
+	 * orig_sets[i] = list of the original set lists
+	 * set_masks[i] = bitmapset for testing inclusion
+	 * adjacency[i] = array [n, v1, v2, ... vn] of adjacency indices
+	 *
+	 * chains[i] will be the result group this set is assigned to.
+	 *
+	 * We index all of these from 1 rather than 0 because it is convenient
+	 * to leave 0 free for the NIL node in the graph algorithm.
+	 */
+
+	orig_sets = palloc0((num_sets_raw + 1) * sizeof(List*));
+	set_masks = palloc0((num_sets_raw + 1) * sizeof(Bitmapset *));
+	adjacency = palloc0((num_sets_raw + 1) * sizeof(short *));
+	adjacency_buf = palloc((num_sets_raw + 1) * sizeof(short));
+
+	j_size = 0;
+	j = 0;
+	i = 1;
+
+	for_each_cell(lc, lc1)
+	{
+		List	   *candidate = lfirst(lc);
+		Bitmapset  *candidate_set = NULL;
+		ListCell   *lc2;
+		int			dup_of = 0;
 
 		foreach(lc2, candidate)
 		{
-			int ref = lfirst_int(lc2);
-			if (!list_member_int(previous, ref))
+			candidate_set = bms_add_member(candidate_set, lfirst_int(lc2));
+		}
+
+		/* we can only be a dup if we're the same length as a previous set */
+		if (j_size == list_length(candidate))
+		{
+			int		k;
+			for (k = j; k < i; ++k)
 			{
-				ok = false;
-				break;
+				if (bms_equal(set_masks[k], candidate_set))
+				{
+					dup_of = k;
+					break;
+				}
 			}
 		}
+		else if (j_size < list_length(candidate))
+		{
+			j_size = list_length(candidate);
+			j = i;
+		}
 
-		if (ok)
+		if (dup_of > 0)
+		{
+			orig_sets[dup_of] = lappend(orig_sets[dup_of], candidate);
+			bms_free(candidate_set);
+		}
+		else
 		{
-			tmp_result = lcons(candidate, tmp_result);
-			previous = candidate;
+			int		k;
+			int		n_adj = 0;
+
+			orig_sets[i] = list_make1(candidate);
+			set_masks[i] = candidate_set;
+
+			/* fill in adjacency list; no need to compare equal-size sets */
+
+			for (k = j - 1; k > 0; --k)
+			{
+				if (bms_is_subset(set_masks[k], candidate_set))
+					adjacency_buf[++n_adj] = k;
+			}
+
+			if (n_adj > 0)
+			{
+				adjacency_buf[0] = n_adj;
+				adjacency[i] = palloc((n_adj + 1) * sizeof(short));
+				memcpy(adjacency[i], adjacency_buf, (n_adj + 1) * sizeof(short));
+			}
+			else
+				adjacency[i] = NULL;
+
+			++i;
 		}
+	}
+
+	num_sets = i - 1;
+
+	/*
+	 * Apply the matching algorithm to do the work.
+	 */
+
+	state = hk_match(num_sets + 1, adjacency);
+
+	/*
+	 * Now, the state->pair* fields have the info we need to assign sets to
+	 * chains. Two sets (u,v) belong to the same chain if pair_uv[u] = v or
+	 * pair_vu[v] = u (both will be true, but we check both so that we can do
+	 * it in one pass)
+	 */
+
+	chains = palloc0((num_sets + 1) * sizeof(int));
+
+	for (i = 1; i <= num_sets; ++i)
+	{
+		int u = state->pair_vu[i];
+		int v = state->pair_uv[i];
+
+		if (u > 0 && u < i)
+			chains[i] = chains[u];
+		else if (v > 0 && v < i)
+			chains[i] = chains[v];
 		else
-			*remainder = lappend(*remainder, candidate);
+			chains[i] = ++num_chains;
 	}
 
+	/* build result lists. */
+
+	results = palloc0((num_chains + 1) * sizeof(List*));
+
+	for (i = 1; i <= num_sets; ++i)
+	{
+		int c = chains[i];
+
+		Assert(c > 0);
+
+		results[c] = list_concat(results[c], orig_sets[i]);
+	}
+
+	/* push any empty sets back on the first list. */
+
+	while (num_empty-- > 0)
+		results[1] = lcons(NIL, results[1]);
+
+	/* make result list */
+
+	for (i = 1; i <= num_chains; ++i)
+		result = lappend(result, results[i]);
+
 	/*
-	 * reorder the list elements so that shorter sets are strict
-	 * prefixes of longer ones, and if we ever have a choice, try
-	 * and follow the sortclause if there is one. (We're trying
-	 * here to ensure that GROUPING SETS ((a,b),(b)) ORDER BY b,a
-	 * gets implemented in one pass.)
+	 * Free all the things.
+	 *
+	 * (This is over-fussy for small sets but for large sets we could have tied
+	 * up a nontrivial amount of memory.)
 	 */
 
-	previous = NIL;
+	hk_free(state);
+	pfree(results);
+	pfree(chains);
+	for (i = 1; i <= num_sets; ++i)
+		if (adjacency[i])
+			pfree(adjacency[i]);
+	pfree(adjacency);
+	pfree(adjacency_buf);
+	pfree(orig_sets);
+	for (i = 1; i <= num_sets; ++i)
+		bms_free(set_masks[i]);
+	pfree(set_masks);
+
+	return result;
+}
+
+/*
+ * Reorder the elements of a list of grouping sets such that they have correct
+ * prefix relationships.
+ *
+ * The input must be ordered with smallest sets first; the result is returned
+ * with largest sets first.
+ *
+ * If we're passed in a sortclause, we follow its order of columns to the
+ * extent possible, to minimize the chance that we add unnecessary sorts.
+ * (We're trying here to ensure that GROUPING SETS ((a,b,c),(c)) ORDER BY c,b,a
+ * gets implemented in one pass.)
+ */
+static List *
+reorder_grouping_sets(List *groupingsets, List *sortclause)
+{
+	ListCell   *lc;
+	ListCell   *lc2;
+	List	   *previous = NIL;
+	List	   *result = NIL;
 
-	foreach(lc, tmp_result)
+	foreach(lc, groupingsets)
 	{
 		List   *candidate = lfirst(lc);
 		List   *new_elems = list_difference_int(candidate, previous);
@@ -2830,6 +3226,7 @@ extract_rollup_sets(List *groupingSets, List *sortclause, List **remainder)
 				}
 				else
 				{
+					/* diverged from the sortclause; give up on it */
 					sortclause = NIL;
 					break;
 				}
@@ -2846,7 +3243,6 @@ extract_rollup_sets(List *groupingSets, List *sortclause, List **remainder)
 	}
 
 	list_free(previous);
-	list_free(tmp_result);
 
 	return result;
 }
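
[Reviewer note: since the middle of reorder_grouping_sets is elided from this hunk, here is a loose Python paraphrase of its intent — order each chain's sets (given smallest first) into one column list where every set is a prefix of the next, preferring sortclause order for newly added columns. Names and details are illustrative, not the patch's code.]

```python
def reorder_grouping_sets(sets, sortclause):
    """Turn one chain of grouping sets into prefix-ordered column lists.

    sets: chain of grouping sets, smallest first, each a list of columns.
    Returns the corresponding column lists, largest set first.
    """
    previous = []           # columns pinned down so far
    result = []
    for candidate in sets:
        new_elems = [c for c in candidate if c not in previous]
        # follow the sortclause as long as it agrees with the new columns
        while sortclause and new_elems:
            if sortclause[0] in new_elems:
                previous.append(sortclause[0])
                new_elems.remove(sortclause[0])
                sortclause = sortclause[1:]
            else:
                sortclause = []     # diverged from the sortclause; give up
        previous.extend(new_elems)
        result.insert(0, list(previous))
    return result

# GROUPING SETS ((a,b,c),(c)) ORDER BY c,b,a comes out in one pass:
assert reorder_grouping_sets([["c"], ["a", "b", "c"]],
                             ["c", "b", "a"]) == [["c", "b", "a"], ["c"]]
```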
@@ -2867,11 +3263,11 @@ standard_qp_callback(PlannerInfo *root, void *extra)
 	 * sortClause is certainly sort-able, but GROUP BY and DISTINCT might not
 	 * be, in which case we just leave their pathkeys empty.
 	 */
-	if (parse->groupClause &&
-		grouping_is_sortable(parse->groupClause))
+	if (qp_extra->groupClause &&
+		grouping_is_sortable(qp_extra->groupClause))
 		root->group_pathkeys =
 			make_pathkeys_for_sortclauses(root,
-										  parse->groupClause,
+										  qp_extra->groupClause,
 										  tlist);
 	else
 		root->group_pathkeys = NIL;
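
[Reviewer note: the chain-partition idea behind extract_rollup_sets above can be demonstrated compactly. This Python sketch uses the simpler Kuhn augmenting-path matching rather than the patch's Hopcroft-Karp (same result, worse bound), and bare frozensets rather than sortgrouprefs; it only computes the chain count, not the chains themselves.]

```python
def min_rollup_chains(sets):
    """Minimum number of rollup chains covering the given grouping sets.

    By Dilworth-style minimum path cover: build an edge u -> v whenever
    sets[v] is a strict subset of sets[u]; the answer is
    n - (maximum bipartite matching).
    """
    n = len(sets)
    adj = [[v for v in range(n) if sets[v] < sets[u]] for u in range(n)]
    match_vu = [-1] * n     # match_vu[v] = u currently matched to v

    def augment(u, seen):
        # Kuhn's algorithm: look for an augmenting path starting at u
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if match_vu[v] < 0 or augment(match_vu[v], seen):
                match_vu[v] = u
                return True
        return False

    matching = sum(augment(u, set()) for u in range(n))
    return n - matching

# CUBE(a,b) needs two passes, e.g. rollup (a,b)>(a)>() plus (b):
cube_ab = [frozenset("ab"), frozenset("a"), frozenset("b"), frozenset()]
assert min_rollup_chains(cube_ab) == 2

# ROLLUP(a,b) is a single chain by construction:
rollup_ab = [frozenset("ab"), frozenset("a"), frozenset()]
assert min_rollup_chains(rollup_ab) == 1
```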
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index b1016c6..2d91406 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -655,8 +655,16 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
 			}
 			break;
 		case T_Agg:
-			set_upper_references(root, plan, rtoffset);
-			set_group_vars(root, (Agg *) plan);
+			if (((Agg *) plan)->aggstrategy == AGG_CHAINED)
+			{
+				/* chained agg does not evaluate tlist */
+				set_dummy_tlist_references(plan, rtoffset);
+			}
+			else
+			{
+				set_upper_references(root, plan, rtoffset);
+				set_group_vars(root, (Agg *) plan);
+			}
 			break;
 		case T_Group:
 			set_upper_references(root, plan, rtoffset);
@@ -1291,21 +1299,30 @@ fix_scan_expr_walker(Node *node, fix_scan_expr_context *context)
  *    Modify any Var references in the target list of a non-trivial
  *    (i.e. contains grouping sets) Agg node to use GroupedVar instead,
  *    which will conditionally replace them with nulls at runtime.
+ *    Also fill in the cols list of any GROUPING() node.
  */
 static void
 set_group_vars(PlannerInfo *root, Agg *agg)
 {
 	set_group_vars_context context;
-	int i;
-	Bitmapset *cols = NULL;
+	AttrNumber *groupColIdx = root->groupColIdx;
+	int			numCols = list_length(root->parse->groupClause);
+	int 		i;
+	Bitmapset  *cols = NULL;
 
 	if (!agg->groupingSets)
 		return;
 
+	if (!groupColIdx)
+	{
+		Assert(numCols == agg->numCols);
+		groupColIdx = agg->grpColIdx;
+	}
+
 	context.root = root;
 
-	for (i = 0; i < agg->numCols; ++i)
-		cols = bms_add_member(cols, agg->grpColIdx[i]);
+	for (i = 0; i < numCols; ++i)
+		cols = bms_add_member(cols, groupColIdx[i]);
 
 	context.groupedcols = cols;
 
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index e0a2ca7..e5befe3 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -79,7 +79,8 @@ static Node *process_sublinks_mutator(Node *node,
 static Bitmapset *finalize_plan(PlannerInfo *root,
 			  Plan *plan,
 			  Bitmapset *valid_params,
-			  Bitmapset *scan_params);
+			  Bitmapset *scan_params,
+			  Agg *agg_chain_head);
 static bool finalize_primnode(Node *node, finalize_primnode_context *context);
 
 
@@ -2091,7 +2092,7 @@ SS_finalize_plan(PlannerInfo *root, Plan *plan, bool attach_initplans)
 	/*
 	 * Now recurse through plan tree.
 	 */
-	(void) finalize_plan(root, plan, valid_params, NULL);
+	(void) finalize_plan(root, plan, valid_params, NULL, NULL);
 
 	bms_free(valid_params);
 
@@ -2142,7 +2143,7 @@ SS_finalize_plan(PlannerInfo *root, Plan *plan, bool attach_initplans)
  */
 static Bitmapset *
 finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
-			  Bitmapset *scan_params)
+			  Bitmapset *scan_params, Agg *agg_chain_head)
 {
 	finalize_primnode_context context;
 	int			locally_added_param;
@@ -2351,7 +2352,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2367,7 +2369,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2383,7 +2386,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2399,7 +2403,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2415,7 +2420,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2482,8 +2488,30 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 							  &context);
 			break;
 
-		case T_Hash:
 		case T_Agg:
+			{
+				Agg	   *agg = (Agg *) plan;
+
+				if (agg->aggstrategy == AGG_CHAINED)
+				{
+					Assert(agg_chain_head);
+
+					/*
+					 * Our real tlist and qual are the ones in the chain head,
+					 * not the local dummy ones used for passthrough.
+					 * Fortunately, we can call finalize_primnode more than
+					 * once.
+					 */
+
+					finalize_primnode((Node *) agg_chain_head->plan.targetlist, &context);
+					finalize_primnode((Node *) agg_chain_head->plan.qual, &context);
+				}
+				else if (agg->chain_head)
+					agg_chain_head = agg;
+			}
+			break;
+
+		case T_Hash:
 		case T_Material:
 		case T_Sort:
 		case T_Unique:
@@ -2500,7 +2528,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 	child_params = finalize_plan(root,
 								 plan->lefttree,
 								 valid_params,
-								 scan_params);
+								 scan_params,
+								 agg_chain_head);
 	context.paramids = bms_add_members(context.paramids, child_params);
 
 	if (nestloop_params)
@@ -2509,7 +2538,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 		child_params = finalize_plan(root,
 									 plan->righttree,
 									 bms_union(nestloop_params, valid_params),
-									 scan_params);
+									 scan_params,
+									 agg_chain_head);
 		/* ... and they don't count as parameters used at my level */
 		child_params = bms_difference(child_params, nestloop_params);
 		bms_free(nestloop_params);
@@ -2520,7 +2550,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 		child_params = finalize_plan(root,
 									 plan->righttree,
 									 valid_params,
-									 scan_params);
+									 scan_params,
+									 agg_chain_head);
 	}
 	context.paramids = bms_add_members(context.paramids, child_params);
 
diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c
index 3c71d7f..ce35226 100644
--- a/src/backend/optimizer/prep/prepunion.c
+++ b/src/backend/optimizer/prep/prepunion.c
@@ -774,6 +774,7 @@ make_union_unique(SetOperationStmt *op, Plan *plan,
 													   plan->targetlist),
 								 extract_grouping_ops(groupList),
 								 NIL,
+								 false,
 								 numGroups,
 								 plan);
 		/* Hashed aggregation produces randomly-ordered results */
diff --git a/src/backend/parser/parse_agg.c b/src/backend/parser/parse_agg.c
index 02f849b..86063d8 100644
--- a/src/backend/parser/parse_agg.c
+++ b/src/backend/parser/parse_agg.c
@@ -965,11 +965,11 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 		 * The intersection will often be empty, so help things along by
 		 * seeding the intersect with the smallest set.
 		 */
-		gset_common = llast(gsets);
+		gset_common = linitial(gsets);
 
 		if (gset_common)
 		{
-			foreach(l, gsets)
+			for_each_cell(l, lnext(list_head(gsets)))
 			{
 				gset_common = list_intersection_int(gset_common, lfirst(l));
 				if (!gset_common)
@@ -1620,16 +1620,16 @@ expand_groupingset_node(GroupingSet *gs)
 }
 
 static int
-cmp_list_len_desc(const void *a, const void *b)
+cmp_list_len_asc(const void *a, const void *b)
 {
 	int la = list_length(*(List*const*)a);
 	int lb = list_length(*(List*const*)b);
-	return (la > lb) ? -1 : (la == lb) ? 0 : 1;
+	return (la > lb) ? 1 : (la == lb) ? 0 : -1;
 }
 
 /*
  * Expand a groupingSets clause to a flat list of grouping sets.
- * The returned list is sorted by length, longest sets first.
+ * The returned list is sorted by length, shortest sets first.
  *
  * This is mainly for the planner, but we use it here too to do
  * some consistency checks.
@@ -1705,7 +1705,7 @@ expand_grouping_sets(List *groupingSets, int limit)
 			*ptr++ = lfirst(lc);
 		}
 
-		qsort(buf, result_len, sizeof(List*), cmp_list_len_desc);
+		qsort(buf, result_len, sizeof(List*), cmp_list_len_asc);
 
 		result = NIL;
 		ptr = buf;
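
[Reviewer's aside on the parse_agg.c hunk above: sorting the expanded grouping
sets shortest-first makes the first list element the smallest set, which is the
cheapest seed for the common-column intersection. A minimal Python model of
that idea — illustrative only; `expand_and_sort` and `common_columns` are
invented names, not the C functions:

```python
def expand_and_sort(sets):
    # Shortest sets first, mirroring cmp_list_len_asc (stable sort).
    return sorted(sets, key=len)

def common_columns(sets):
    ordered = expand_and_sort(sets)
    common = set(ordered[0])      # seed with the smallest set
    for s in ordered[1:]:
        common &= set(s)
        if not common:
            break                 # intersection already empty; stop early
    return common

sets = [[1, 2, 3], [1, 2], [1]]
print(sorted(common_columns(sets)))   # [1]
```
]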
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index ee1fe74..cbc7b0c 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -409,6 +409,11 @@ typedef struct EState
 	HeapTuple  *es_epqTuple;	/* array of EPQ substitute tuples */
 	bool	   *es_epqTupleSet; /* true if EPQ tuple is provided */
 	bool	   *es_epqScanDone; /* true if EPQ tuple has been fetched */
+
+	/*
+	 * This is for linking chained aggregate nodes
+	 */
+	struct AggState	   *agg_chain_head;
 } EState;
 
 
@@ -1729,6 +1734,7 @@ typedef struct AggState
 	AggStatePerAgg curperagg;	/* identifies currently active aggregate */
 	bool        input_done;     /* indicates end of input */
 	bool		agg_done;		/* indicates completion of Agg scan */
+	bool		chain_done;		/* indicates completion of chained fetch */
 	int			projected_set;	/* The last projected grouping set */
 	int			current_set;	/* The current grouping set being evaluated */
 	Bitmapset **grouped_cols;   /* column groupings for rollup */
@@ -1742,6 +1748,10 @@ typedef struct AggState
 	List	   *hash_needed;	/* list of columns needed in hash table */
 	bool		table_filled;	/* hash table filled yet? */
 	TupleHashIterator hashiter; /* for iterating through hash table */
+	int			chain_depth;	/* number of chained child nodes */
+	int			chain_rescan;	/* rescan indicator */
+	struct AggState	*chain_head;	/* head node of the aggregate chain */
+	Tuplestorestate *chain_tuplestore;	/* tuples passed up the chain */
 } AggState;
 
 /* ----------------
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index 28173ab..b006a30 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -623,6 +623,7 @@ typedef enum AggStrategy
 {
 	AGG_PLAIN,					/* simple agg across all input rows */
 	AGG_SORTED,					/* grouped agg, input must be sorted */
+	AGG_CHAINED,				/* chained agg, input must be sorted */
 	AGG_HASHED					/* grouped agg, use internal hashtable */
 } AggStrategy;
 
@@ -630,6 +631,7 @@ typedef struct Agg
 {
 	Plan		plan;
 	AggStrategy aggstrategy;
+	bool		chain_head;
 	int			numCols;		/* number of grouping columns */
 	AttrNumber *grpColIdx;		/* their indexes in the target list */
 	Oid		   *grpOperators;	/* equality operators to compare with */
diff --git a/src/include/optimizer/planmain.h b/src/include/optimizer/planmain.h
index c4c0004..58d88bc 100644
--- a/src/include/optimizer/planmain.h
+++ b/src/include/optimizer/planmain.h
@@ -59,6 +59,7 @@ extern Agg *make_agg(PlannerInfo *root, List *tlist, List *qual,
 		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
 		 int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators,
 		 List *groupingSets,
+		 bool chain_head,
 		 long numGroups,
 		 Plan *lefttree);
 extern WindowAgg *make_windowagg(PlannerInfo *root, List *tlist,
diff --git a/src/test/regress/expected/groupingsets.out b/src/test/regress/expected/groupingsets.out
index 2d121c7..e5d6c78 100644
--- a/src/test/regress/expected/groupingsets.out
+++ b/src/test/regress/expected/groupingsets.out
@@ -281,6 +281,29 @@ select(select (select grouping(c) from (values (1)) v2(c) GROUP BY c) from (valu
 (3 rows)
 
 -- Combinations of operations
+select a, b, c, d from gstest2 group by rollup(a,b),grouping sets(c,d);
+ a | b | c | d 
+---+---+---+---
+ 1 | 1 | 1 |  
+ 1 |   | 1 |  
+   |   | 1 |  
+ 1 | 1 | 2 |  
+ 1 | 2 | 2 |  
+ 1 |   | 2 |  
+ 2 | 2 | 2 |  
+ 2 |   | 2 |  
+   |   | 2 |  
+ 1 | 1 |   | 1
+ 1 |   |   | 1
+   |   |   | 1
+ 1 | 1 |   | 2
+ 1 | 2 |   | 2
+ 1 |   |   | 2
+ 2 | 2 |   | 2
+ 2 |   |   | 2
+   |   |   | 2
+(18 rows)
+
 select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
  a | b 
 ---+---
@@ -288,6 +311,101 @@ select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
  2 | 3
 (2 rows)
 
+-- Tests for chained aggregates
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by grouping sets ((a,b),(a+1,b+1),(a+2,b+2));
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 | 1 |        0 |  21 |     2 |  11
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+ 4 | 1 |        0 |  37 |     2 |  19
+   |   |        3 |  21 |     2 |  11
+   |   |        3 |  25 |     2 |  13
+   |   |        3 |  14 |     1 |  14
+   |   |        3 |  15 |     1 |  15
+   |   |        3 |  16 |     1 |  16
+   |   |        3 |  17 |     1 |  17
+   |   |        3 |  37 |     2 |  19
+   |   |        3 |  21 |     2 |  11
+   |   |        3 |  25 |     2 |  13
+   |   |        3 |  14 |     1 |  14
+   |   |        3 |  15 |     1 |  15
+   |   |        3 |  16 |     1 |  16
+   |   |        3 |  17 |     1 |  17
+   |   |        3 |  37 |     2 |  19
+(21 rows)
+
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP((e+1),(f+1));
+ grouping 
+----------
+        0
+        0
+        0
+(3 rows)
+
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY CUBE((e+1),(f+1)) ORDER BY (e+1),(f+1);
+ grouping 
+----------
+        0
+        0
+        0
+        0
+(4 rows)
+
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by cube (a,b) order by rsum, a, b;
+ a | b | sum | rsum 
+---+---+-----+------
+ 1 | 1 |   8 |    8
+ 1 | 2 |   2 |   10
+ 1 |   |  10 |   20
+ 2 | 2 |   2 |   22
+ 2 |   |   2 |   24
+   | 1 |   8 |   32
+   | 2 |   4 |   36
+   |   |  12 |   48
+(8 rows)
+
+select a, b, sum(c) from (values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),(2,3,15),(3,3,16),(3,4,17),(4,1,18),(4,1,19)) v(a,b,c) group by rollup (a,b);
+ a | b | sum 
+---+---+-----
+ 1 | 1 |  21
+ 1 | 2 |  25
+ 1 | 3 |  14
+ 1 |   |  60
+ 2 | 3 |  15
+ 2 |   |  15
+ 3 | 3 |  16
+ 3 | 4 |  17
+ 3 |   |  33
+ 4 | 1 |  37
+ 4 |   |  37
+   |   | 145
+(12 rows)
+
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by cube (a,b) order by a,b;
+ a | b | sum 
+---+---+-----
+ 1 | 1 |   1
+ 1 | 2 |   1
+ 1 | 3 |   1
+ 1 |   |   3
+ 2 | 1 |   2
+ 2 | 2 |   2
+ 2 | 3 |   2
+ 2 |   |   6
+   | 1 |   3
+   | 2 |   3
+   | 3 |   3
+   |   |   9
+(12 rows)
+
 -- Agg level check. This query should error out.
 select (select grouping(a,b) from gstest2) from gstest2 group by a,b;
 ERROR:  Arguments to GROUPING must be grouping expressions of the associated query level
@@ -358,4 +476,87 @@ group by rollup(ten);
      |    
 (11 rows)
 
+-- More rescan tests
+select * from (values (1),(2)) v(a) left join lateral (select v.a, four, ten, count(*) from onek group by cube(four,ten)) s on true order by v.a,four,ten;
+ a | a | four | ten | count 
+---+---+------+-----+-------
+ 1 | 1 |    0 |   0 |    50
+ 1 | 1 |    0 |   2 |    50
+ 1 | 1 |    0 |   4 |    50
+ 1 | 1 |    0 |   6 |    50
+ 1 | 1 |    0 |   8 |    50
+ 1 | 1 |    0 |     |   250
+ 1 | 1 |    1 |   1 |    50
+ 1 | 1 |    1 |   3 |    50
+ 1 | 1 |    1 |   5 |    50
+ 1 | 1 |    1 |   7 |    50
+ 1 | 1 |    1 |   9 |    50
+ 1 | 1 |    1 |     |   250
+ 1 | 1 |    2 |   0 |    50
+ 1 | 1 |    2 |   2 |    50
+ 1 | 1 |    2 |   4 |    50
+ 1 | 1 |    2 |   6 |    50
+ 1 | 1 |    2 |   8 |    50
+ 1 | 1 |    2 |     |   250
+ 1 | 1 |    3 |   1 |    50
+ 1 | 1 |    3 |   3 |    50
+ 1 | 1 |    3 |   5 |    50
+ 1 | 1 |    3 |   7 |    50
+ 1 | 1 |    3 |   9 |    50
+ 1 | 1 |    3 |     |   250
+ 1 | 1 |      |   0 |   100
+ 1 | 1 |      |   1 |   100
+ 1 | 1 |      |   2 |   100
+ 1 | 1 |      |   3 |   100
+ 1 | 1 |      |   4 |   100
+ 1 | 1 |      |   5 |   100
+ 1 | 1 |      |   6 |   100
+ 1 | 1 |      |   7 |   100
+ 1 | 1 |      |   8 |   100
+ 1 | 1 |      |   9 |   100
+ 1 | 1 |      |     |  1000
+ 2 | 2 |    0 |   0 |    50
+ 2 | 2 |    0 |   2 |    50
+ 2 | 2 |    0 |   4 |    50
+ 2 | 2 |    0 |   6 |    50
+ 2 | 2 |    0 |   8 |    50
+ 2 | 2 |    0 |     |   250
+ 2 | 2 |    1 |   1 |    50
+ 2 | 2 |    1 |   3 |    50
+ 2 | 2 |    1 |   5 |    50
+ 2 | 2 |    1 |   7 |    50
+ 2 | 2 |    1 |   9 |    50
+ 2 | 2 |    1 |     |   250
+ 2 | 2 |    2 |   0 |    50
+ 2 | 2 |    2 |   2 |    50
+ 2 | 2 |    2 |   4 |    50
+ 2 | 2 |    2 |   6 |    50
+ 2 | 2 |    2 |   8 |    50
+ 2 | 2 |    2 |     |   250
+ 2 | 2 |    3 |   1 |    50
+ 2 | 2 |    3 |   3 |    50
+ 2 | 2 |    3 |   5 |    50
+ 2 | 2 |    3 |   7 |    50
+ 2 | 2 |    3 |   9 |    50
+ 2 | 2 |    3 |     |   250
+ 2 | 2 |      |   0 |   100
+ 2 | 2 |      |   1 |   100
+ 2 | 2 |      |   2 |   100
+ 2 | 2 |      |   3 |   100
+ 2 | 2 |      |   4 |   100
+ 2 | 2 |      |   5 |   100
+ 2 | 2 |      |   6 |   100
+ 2 | 2 |      |   7 |   100
+ 2 | 2 |      |   8 |   100
+ 2 | 2 |      |   9 |   100
+ 2 | 2 |      |     |  1000
+(70 rows)
+
+select array(select row(v.a,s1.*) from (select two,four, count(*) from onek group by cube(two,four) order by two,four) s1) from (values (1),(2)) v(a);
+                                                                        array                                                                         
+------------------------------------------------------------------------------------------------------------------------------------------------------
+ {"(1,0,0,250)","(1,0,2,250)","(1,0,,500)","(1,1,1,250)","(1,1,3,250)","(1,1,,500)","(1,,0,250)","(1,,1,250)","(1,,2,250)","(1,,3,250)","(1,,,1000)"}
+ {"(2,0,0,250)","(2,0,2,250)","(2,0,,500)","(2,1,1,250)","(2,1,3,250)","(2,1,,500)","(2,,0,250)","(2,,1,250)","(2,,2,250)","(2,,3,250)","(2,,,1000)"}
+(2 rows)
+
 -- end
diff --git a/src/test/regress/sql/groupingsets.sql b/src/test/regress/sql/groupingsets.sql
index bc571ff..5f32c4a 100644
--- a/src/test/regress/sql/groupingsets.sql
+++ b/src/test/regress/sql/groupingsets.sql
@@ -108,8 +108,22 @@ select(select (select grouping(e,f) from (values (1)) v2(c)) from (values (1,2))
 select(select (select grouping(c) from (values (1)) v2(c) GROUP BY c) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
 
 -- Combinations of operations
+select a, b, c, d from gstest2 group by rollup(a,b),grouping sets(c,d);
 select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
 
+-- Tests for chained aggregates
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by grouping sets ((a,b),(a+1,b+1),(a+2,b+2));
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP((e+1),(f+1));
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY CUBE((e+1),(f+1)) ORDER BY (e+1),(f+1);
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by cube (a,b) order by rsum, a, b;
+select a, b, sum(c) from (values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),(2,3,15),(3,3,16),(3,4,17),(4,1,18),(4,1,19)) v(a,b,c) group by rollup (a,b);
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by cube (a,b) order by a,b;
+
+
 -- Agg level check. This query should error out.
 select (select grouping(a,b) from gstest2) from gstest2 group by a,b;
 
@@ -125,4 +139,8 @@ having exists (select 1 from onek b where sum(distinct a.four) = b.four);
 select ten, sum(distinct four) filter (where four::text ~ '123') from onek a
 group by rollup(ten);
 
+-- More rescan tests
+select * from (values (1),(2)) v(a) left join lateral (select v.a, four, ten, count(*) from onek group by cube(four,ten)) s on true order by v.a,four,ten;
+select array(select row(v.a,s1.*) from (select two,four, count(*) from onek group by cube(two,four) order by two,four) s1) from (values (1),(2)) v(a);
+
 -- end
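
[Stepping back from the diff for a moment: the combinations exercised by the
regression tests above follow the spec's expansion rules — ROLLUP yields all
prefixes, CUBE yields the power set, and multiple grouping items in one GROUP
BY yield the cross product. A sketch of those semantics in Python; this models
the documented behavior only, and all names here are invented, not the planner
code:

```python
from itertools import chain, combinations, product

def rollup(exprs):
    # ROLLUP(e1,...,en): every prefix of the list, plus the empty set.
    return [tuple(exprs[:i]) for i in range(len(exprs), -1, -1)]

def cube(exprs):
    # CUBE(e1,...,en): the power set of the list.
    return [tuple(s) for i in range(len(exprs), -1, -1)
            for s in combinations(exprs, i)]

def cross(items):
    # Multiple grouping items: cross product of the individual items'
    # grouping sets, concatenated into one set each.
    return [tuple(chain.from_iterable(sets)) for sets in product(*items)]

# GROUP BY a, CUBE(b,c), GROUPING SETS ((d),(e))
sets = cross([[('a',)], cube(['b', 'c']), [('d',), ('e',)]])
print(len(sets))   # 8 grouping sets
print(sets[0])     # ('a', 'b', 'c', 'd')
```
]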
gsp-doc.patch (text/x-patch)
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 7195df8..655587e 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -12006,7 +12006,9 @@ NULL baz</literallayout>(3 rows)</entry>
    <xref linkend="functions-aggregate-statistics-table">.
    The built-in ordered-set aggregate functions
    are listed in <xref linkend="functions-orderedset-table"> and
-   <xref linkend="functions-hypothetical-table">.
+   <xref linkend="functions-hypothetical-table">.  Grouping operations,
+   which are closely related to aggregate functions, are listed in
+   <xref linkend="functions-grouping-table">.
    The special syntax considerations for aggregate
    functions are explained in <xref linkend="syntax-aggregates">.
    Consult <xref linkend="tutorial-agg"> for additional introductory
@@ -13052,6 +13054,72 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab;
    to the rule specified in the <literal>ORDER BY</> clause.
   </para>
 
+  <table id="functions-grouping-table">
+   <title>Grouping Operations</title>
+
+   <tgroup cols="3">
+    <thead>
+     <row>
+      <entry>Function</entry>
+      <entry>Return Type</entry>
+      <entry>Description</entry>
+     </row>
+    </thead>
+
+    <tbody>
+
+     <row>
+      <entry>
+       <indexterm>
+        <primary>GROUPING</primary>
+       </indexterm>
+       <function>GROUPING(<replaceable class="parameter">args...</replaceable>)</function>
+      </entry>
+      <entry>
+       <type>integer</type>
+      </entry>
+      <entry>
+       Integer bitmask indicating which arguments are not being included in the current
+       grouping set
+      </entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+
+   <para>
+    Grouping operations are used in conjunction with grouping sets (see
+    <xref linkend="queries-grouping-sets">) to distinguish result rows.  The
+    arguments to the <literal>GROUPING</> operation are not actually evaluated,
+    but they must exactly match expressions given in the <literal>GROUP BY</>
+    clause of the current query level.  Bits are assigned with the rightmost
+    argument being the least-significant bit; each bit is 0 if the corresponding
+    expression is included in the grouping criteria of the grouping set generating
+    the result row, and 1 if it is not.  For example:
+<screen>
+<prompt>=&gt;</> <userinput>SELECT * FROM items_sold;</>
+ make  | model | sales
+-------+-------+-------
+ Foo   | GT    |  10
+ Foo   | Tour  |  20
+ Bar   | City  |  15
+ Bar   | Sport |  5
+(4 rows)
+
+<prompt>=&gt;</> <userinput>SELECT make, model, GROUPING(make,model), sum(sales) FROM items_sold GROUP BY ROLLUP(make,model);</>
+ make  | model | grouping | sum
+-------+-------+----------+-----
+ Foo   | GT    |        0 | 10
+ Foo   | Tour  |        0 | 20
+ Bar   | City  |        0 | 15
+ Bar   | Sport |        0 | 5
+ Foo   |       |        1 | 30
+ Bar   |       |        1 | 20
+       |       |        3 | 50
+(7 rows)
+</screen>
+   </para>
+
  </sect1>
 
  <sect1 id="functions-window">
diff --git a/doc/src/sgml/queries.sgml b/doc/src/sgml/queries.sgml
index 9bf3136..1ff920f 100644
--- a/doc/src/sgml/queries.sgml
+++ b/doc/src/sgml/queries.sgml
@@ -1141,6 +1141,184 @@ SELECT product_id, p.name, (sum(s.units) * (p.price - p.cost)) AS profit
    </para>
   </sect2>
 
+  <sect2 id="queries-grouping-sets">
+   <title><literal>GROUPING SETS</>, <literal>CUBE</>, and <literal>ROLLUP</></title>
+
+   <indexterm zone="queries-grouping-sets">
+    <primary>GROUPING SETS</primary>
+   </indexterm>
+   <indexterm zone="queries-grouping-sets">
+    <primary>CUBE</primary>
+   </indexterm>
+   <indexterm zone="queries-grouping-sets">
+    <primary>ROLLUP</primary>
+   </indexterm>
+   <indexterm zone="queries-grouping-sets">
+    <primary>grouping sets</primary>
+   </indexterm>
+
+   <para>
+    More complex grouping operations than those described above are possible
+    using the concept of <firstterm>grouping sets</>.  The data selected by
+    the <literal>FROM</> and <literal>WHERE</> clauses is grouped separately
+    by each specified grouping set, aggregates computed for each group just as
+    for simple <literal>GROUP BY</> clauses, and then the results returned.
+    For example:
+<screen>
+<prompt>=&gt;</> <userinput>SELECT * FROM items_sold;</>
+ brand | size | sales
+-------+------+-------
+ Foo   | L    |  10
+ Foo   | M    |  20
+ Bar   | M    |  15
+ Bar   | L    |  5
+(4 rows)
+
+<prompt>=&gt;</> <userinput>SELECT brand, size, sum(sales) FROM items_sold GROUP BY GROUPING SETS ((brand), (size), ());</>
+ brand | size | sum
+-------+------+-----
+ Foo   |      |  30
+ Bar   |      |  20
+       | L    |  15
+       | M    |  35
+       |      |  50
+(5 rows)
+</screen>
+   </para>
+
+   <para>
+    Each sublist of <literal>GROUPING SETS</> may specify zero or more columns
+    or expressions and is interpreted the same way as though it were directly
+    in the <literal>GROUP BY</> clause.  An empty grouping set means that all
+    rows are aggregated down to a single group (which is output even if no
+    input rows were present), as described above for the case of aggregate
+    functions with no <literal>GROUP BY</> clause.
+   </para>
+
+   <para>
+    References to the grouping columns or expressions are replaced
+    by <literal>NULL</> values in result rows for grouping sets in which those
+    columns do not appear.  To distinguish which grouping a particular output
+    row resulted from, see <xref linkend="functions-grouping-table">.
+   </para>
+
+   <para>
+    A shorthand notation is provided for specifying two common types of grouping set.
+    A clause of the form
+<programlisting>
+ROLLUP ( <replaceable>e1</>, <replaceable>e2</>, <replaceable>e3</>, ... )
+</programlisting>
+    represents the given list of expressions and all prefixes of the list including
+    the empty list; thus it is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( <replaceable>e1</>, <replaceable>e2</>, <replaceable>e3</>, ... ),
+    ...
+    ( <replaceable>e1</>, <replaceable>e2</> ),
+    ( <replaceable>e1</> ),
+    ( )
+)
+</programlisting>
+    This is commonly used for analysis over hierarchical data; e.g. total
+    salary by department, division, and company-wide total.
+   </para>
+
+   <para>
+    A clause of the form
+<programlisting>
+CUBE ( <replaceable>e1</>, <replaceable>e2</>, ... )
+</programlisting>
+    represents the given list and all of its possible subsets (i.e. the power
+    set).  Thus
+<programlisting>
+CUBE ( a, b, c )
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( a, b, c ),
+    ( a, b    ),
+    ( a,    c ),
+    ( a       ),
+    (    b, c ),
+    (    b    ),
+    (       c ),
+    (         )
+)
+</programlisting>
+   </para>
+
+   <para>
+    The individual elements of a <literal>CUBE</> or <literal>ROLLUP</>
+    clause may be either individual expressions, or sub-lists of elements in
+    parentheses.  In the latter case, the sub-lists are treated as single
+    units for the purposes of generating the individual grouping sets.
+    For example:
+<programlisting>
+CUBE ( (a,b), (c,d) )
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( a, b, c, d ),
+    ( a, b       ),
+    (       c, d ),
+    (            )
+)
+</programlisting>
+    and
+<programlisting>
+ROLLUP ( a, (b,c), d )
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( a, b, c, d ),
+    ( a, b, c    ),
+    ( a          ),
+    (            )
+)
+</programlisting>
+   </para>
+
+   <para>
+    The <literal>CUBE</> and <literal>ROLLUP</> constructs can be used either
+    directly in the <literal>GROUP BY</> clause, or nested inside a
+    <literal>GROUPING SETS</> clause.  If one <literal>GROUPING SETS</> clause
+    is nested inside another, the effect is the same as if all the elements of
+    the inner clause had been written directly in the outer clause.
+   </para>
+
+   <para>
+    If multiple grouping items are specified in a single <literal>GROUP BY</>
+    clause, then the final list of grouping sets is the cross product of the
+    individual items.  For example:
+<programlisting>
+GROUP BY a, CUBE(b,c), GROUPING SETS ((d), (e))
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUP BY GROUPING SETS (
+  (a,b,c,d), (a,b,c,e),
+  (a,b,d),   (a,b,e),
+  (a,c,d),   (a,c,e),
+  (a,d),     (a,e)
+)
+</programlisting>
+   </para>
+
+  <note>
+   <para>
+    The construct <literal>(a,b)</> is normally recognized in expressions as
+    a <link linkend="sql-syntax-row-constructors">row constructor</link>.
+    Within the <literal>GROUP BY</> clause, this does not apply at the top
+    levels of expressions, and <literal>(a,b)</> is parsed as a list of
+    expressions as described above.  If for some reason you <emphasis>need</>
+    a row constructor in a grouping expression, use <literal>ROW(a,b)</>.
+   </para>
+  </note>
+  </sect2>
+
   <sect2 id="queries-window">
    <title>Window Function Processing</title>
 
diff --git a/doc/src/sgml/ref/select.sgml b/doc/src/sgml/ref/select.sgml
index 940d1aa..7d10dbe 100644
--- a/doc/src/sgml/ref/select.sgml
+++ b/doc/src/sgml/ref/select.sgml
@@ -37,7 +37,7 @@ SELECT [ ALL | DISTINCT [ ON ( <replaceable class="parameter">expression</replac
     [ * | <replaceable class="parameter">expression</replaceable> [ [ AS ] <replaceable class="parameter">output_name</replaceable> ] [, ...] ]
     [ FROM <replaceable class="parameter">from_item</replaceable> [, ...] ]
     [ WHERE <replaceable class="parameter">condition</replaceable> ]
-    [ GROUP BY <replaceable class="parameter">expression</replaceable> [, ...] ]
+    [ GROUP BY <replaceable class="parameter">grouping_element</replaceable> [, ...] ]
     [ HAVING <replaceable class="parameter">condition</replaceable> [, ...] ]
     [ WINDOW <replaceable class="parameter">window_name</replaceable> AS ( <replaceable class="parameter">window_definition</replaceable> ) [, ...] ]
     [ { UNION | INTERSECT | EXCEPT } [ ALL | DISTINCT ] <replaceable class="parameter">select</replaceable> ]
@@ -60,6 +60,15 @@ SELECT [ ALL | DISTINCT [ ON ( <replaceable class="parameter">expression</replac
                 [ WITH ORDINALITY ] [ [ AS ] <replaceable class="parameter">alias</replaceable> [ ( <replaceable class="parameter">column_alias</replaceable> [, ...] ) ] ]
     <replaceable class="parameter">from_item</replaceable> [ NATURAL ] <replaceable class="parameter">join_type</replaceable> <replaceable class="parameter">from_item</replaceable> [ ON <replaceable class="parameter">join_condition</replaceable> | USING ( <replaceable class="parameter">join_column</replaceable> [, ...] ) ]
 
+<phrase>and <replaceable class="parameter">grouping_element</replaceable> can be one of:</phrase>
+
+    ( )
+    <replaceable class="parameter">expression</replaceable>
+    ( <replaceable class="parameter">expression</replaceable> [, ...] )
+    ROLLUP ( { <replaceable class="parameter">expression</replaceable> | ( <replaceable class="parameter">expression</replaceable> [, ...] ) } [, ...] )
+    CUBE ( { <replaceable class="parameter">expression</replaceable> | ( <replaceable class="parameter">expression</replaceable> [, ...] ) } [, ...] )
+    GROUPING SETS ( <replaceable class="parameter">grouping_element</replaceable> [, ...] )
+
 <phrase>and <replaceable class="parameter">with_query</replaceable> is:</phrase>
 
     <replaceable class="parameter">with_query_name</replaceable> [ ( <replaceable class="parameter">column_name</replaceable> [, ...] ) ] AS ( <replaceable class="parameter">select</replaceable> | <replaceable class="parameter">values</replaceable> | <replaceable class="parameter">insert</replaceable> | <replaceable class="parameter">update</replaceable> | <replaceable class="parameter">delete</replaceable> )
@@ -619,23 +628,35 @@ WHERE <replaceable class="parameter">condition</replaceable>
    <para>
     The optional <literal>GROUP BY</literal> clause has the general form
 <synopsis>
-GROUP BY <replaceable class="parameter">expression</replaceable> [, ...]
+GROUP BY <replaceable class="parameter">grouping_element</replaceable> [, ...]
 </synopsis>
    </para>
 
    <para>
     <literal>GROUP BY</literal> will condense into a single row all
     selected rows that share the same values for the grouped
-    expressions.  <replaceable
-    class="parameter">expression</replaceable> can be an input column
-    name, or the name or ordinal number of an output column
-    (<command>SELECT</command> list item), or an arbitrary
+    expressions.  An <replaceable
+    class="parameter">expression</replaceable> used inside a
+    <replaceable class="parameter">grouping_element</replaceable>
+    can be an input column name, or the name or ordinal number of an
+    output column (<command>SELECT</command> list item), or an arbitrary
     expression formed from input-column values.  In case of ambiguity,
     a <literal>GROUP BY</literal> name will be interpreted as an
     input-column name rather than an output column name.
    </para>
 
    <para>
+    If any of <literal>GROUPING SETS</>, <literal>ROLLUP</> or
+    <literal>CUBE</> are present as grouping elements, then the
+    <literal>GROUP BY</> clause as a whole defines some number of
+    independent <replaceable>grouping sets</>.  The effect of this is
+    equivalent to constructing a <literal>UNION ALL</> between
+    subqueries with the individual grouping sets as their
+    <literal>GROUP BY</> clauses.  For further details on the handling
+    of grouping sets see <xref linkend="queries-grouping-sets">.
+   </para>
+
+   <para>
     Aggregate functions, if any are used, are computed across all rows
     making up each group, producing a separate value for each group
     (whereas without <literal>GROUP BY</literal>, an aggregate
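
[To close out the doc patch: the GROUPING() bitmask described in the func.sgml
hunk above — rightmost argument in the least-significant bit, a bit of 1 when
the expression is absent from the current grouping set — behaves like this
sketch. It is a model of the documented semantics, not the executor code:

```python
def grouping(args, current_set):
    """Return the GROUPING() bitmask for the given argument list,
    relative to the grouping set producing the current row."""
    result = 0
    for arg in args:
        result = (result << 1) | (0 if arg in current_set else 1)
    return result

# ROLLUP(make, model) produces the sets (make, model), (make,), and ().
print(grouping(['make', 'model'], {'make', 'model'}))  # 0
print(grouping(['make', 'model'], {'make'}))           # 1
print(grouping(['make', 'model'], set()))              # 3
```

These three values match the grouping column in the items_sold ROLLUP example
shown in the documentation hunk.]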
gsp-contrib.patch (text/x-patch)
diff --git a/contrib/cube/cube--1.0.sql b/contrib/cube/cube--1.0.sql
index 0307811..1b563cc 100644
--- a/contrib/cube/cube--1.0.sql
+++ b/contrib/cube/cube--1.0.sql
@@ -1,36 +1,36 @@
 /* contrib/cube/cube--1.0.sql */
 
 -- complain if script is sourced in psql, rather than via CREATE EXTENSION
-\echo Use "CREATE EXTENSION cube" to load this file. \quit
+\echo Use "CREATE EXTENSION "cube"" to load this file. \quit
 
 -- Create the user-defined type for N-dimensional boxes
 
 CREATE FUNCTION cube_in(cstring)
-RETURNS cube
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(float8[], float8[]) RETURNS cube
+CREATE FUNCTION "cube"(float8[], float8[]) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_a_f8_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(float8[]) RETURNS cube
+CREATE FUNCTION "cube"(float8[]) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_a_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_out(cube)
+CREATE FUNCTION cube_out("cube")
 RETURNS cstring
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE TYPE cube (
+CREATE TYPE "cube" (
 	INTERNALLENGTH = variable,
 	INPUT = cube_in,
 	OUTPUT = cube_out,
 	ALIGNMENT = double
 );
 
-COMMENT ON TYPE cube IS 'multi-dimensional cube ''(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)''';
+COMMENT ON TYPE "cube" IS 'multi-dimensional cube ''(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)''';
 
 --
 -- External C-functions for R-tree methods
@@ -38,89 +38,89 @@ COMMENT ON TYPE cube IS 'multi-dimensional cube ''(FLOAT-1, FLOAT-2, ..., FLOAT-
 
 -- Comparison methods
 
-CREATE FUNCTION cube_eq(cube, cube)
+CREATE FUNCTION cube_eq("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_eq(cube, cube) IS 'same as';
+COMMENT ON FUNCTION cube_eq("cube", "cube") IS 'same as';
 
-CREATE FUNCTION cube_ne(cube, cube)
+CREATE FUNCTION cube_ne("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_ne(cube, cube) IS 'different';
+COMMENT ON FUNCTION cube_ne("cube", "cube") IS 'different';
 
-CREATE FUNCTION cube_lt(cube, cube)
+CREATE FUNCTION cube_lt("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_lt(cube, cube) IS 'lower than';
+COMMENT ON FUNCTION cube_lt("cube", "cube") IS 'lower than';
 
-CREATE FUNCTION cube_gt(cube, cube)
+CREATE FUNCTION cube_gt("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_gt(cube, cube) IS 'greater than';
+COMMENT ON FUNCTION cube_gt("cube", "cube") IS 'greater than';
 
-CREATE FUNCTION cube_le(cube, cube)
+CREATE FUNCTION cube_le("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_le(cube, cube) IS 'lower than or equal to';
+COMMENT ON FUNCTION cube_le("cube", "cube") IS 'lower than or equal to';
 
-CREATE FUNCTION cube_ge(cube, cube)
+CREATE FUNCTION cube_ge("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_ge(cube, cube) IS 'greater than or equal to';
+COMMENT ON FUNCTION cube_ge("cube", "cube") IS 'greater than or equal to';
 
-CREATE FUNCTION cube_cmp(cube, cube)
+CREATE FUNCTION cube_cmp("cube", "cube")
 RETURNS int4
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_cmp(cube, cube) IS 'btree comparison function';
+COMMENT ON FUNCTION cube_cmp("cube", "cube") IS 'btree comparison function';
 
-CREATE FUNCTION cube_contains(cube, cube)
+CREATE FUNCTION cube_contains("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_contains(cube, cube) IS 'contains';
+COMMENT ON FUNCTION cube_contains("cube", "cube") IS 'contains';
 
-CREATE FUNCTION cube_contained(cube, cube)
+CREATE FUNCTION cube_contained("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_contained(cube, cube) IS 'contained in';
+COMMENT ON FUNCTION cube_contained("cube", "cube") IS 'contained in';
 
-CREATE FUNCTION cube_overlap(cube, cube)
+CREATE FUNCTION cube_overlap("cube", "cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-COMMENT ON FUNCTION cube_overlap(cube, cube) IS 'overlaps';
+COMMENT ON FUNCTION cube_overlap("cube", "cube") IS 'overlaps';
 
 -- support routines for indexing
 
-CREATE FUNCTION cube_union(cube, cube)
-RETURNS cube
+CREATE FUNCTION cube_union("cube", "cube")
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_inter(cube, cube)
-RETURNS cube
+CREATE FUNCTION cube_inter("cube", "cube")
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_size(cube)
+CREATE FUNCTION cube_size("cube")
 RETURNS float8
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
@@ -128,62 +128,62 @@ LANGUAGE C IMMUTABLE STRICT;
 
 -- Misc N-dimensional functions
 
-CREATE FUNCTION cube_subset(cube, int4[])
-RETURNS cube
+CREATE FUNCTION cube_subset("cube", int4[])
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
 -- proximity routines
 
-CREATE FUNCTION cube_distance(cube, cube)
+CREATE FUNCTION cube_distance("cube", "cube")
 RETURNS float8
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
 -- Extracting elements functions
 
-CREATE FUNCTION cube_dim(cube)
+CREATE FUNCTION cube_dim("cube")
 RETURNS int4
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_ll_coord(cube, int4)
+CREATE FUNCTION cube_ll_coord("cube", int4)
 RETURNS float8
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube_ur_coord(cube, int4)
+CREATE FUNCTION cube_ur_coord("cube", int4)
 RETURNS float8
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(float8) RETURNS cube
+CREATE FUNCTION "cube"(float8) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(float8, float8) RETURNS cube
+CREATE FUNCTION "cube"(float8, float8) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_f8_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(cube, float8) RETURNS cube
+CREATE FUNCTION "cube"("cube", float8) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_c_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION cube(cube, float8, float8) RETURNS cube
+CREATE FUNCTION "cube"("cube", float8, float8) RETURNS "cube"
 AS 'MODULE_PATHNAME', 'cube_c_f8_f8'
 LANGUAGE C IMMUTABLE STRICT;
 
 -- Test if cube is also a point
 
-CREATE FUNCTION cube_is_point(cube)
+CREATE FUNCTION cube_is_point("cube")
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
 -- Increasing the size of a cube by a radius in at least n dimensions
 
-CREATE FUNCTION cube_enlarge(cube, float8, int4)
-RETURNS cube
+CREATE FUNCTION cube_enlarge("cube", float8, int4)
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
@@ -192,76 +192,76 @@ LANGUAGE C IMMUTABLE STRICT;
 --
 
 CREATE OPERATOR < (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_lt,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_lt,
 	COMMUTATOR = '>', NEGATOR = '>=',
 	RESTRICT = scalarltsel, JOIN = scalarltjoinsel
 );
 
 CREATE OPERATOR > (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_gt,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_gt,
 	COMMUTATOR = '<', NEGATOR = '<=',
 	RESTRICT = scalargtsel, JOIN = scalargtjoinsel
 );
 
 CREATE OPERATOR <= (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_le,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_le,
 	COMMUTATOR = '>=', NEGATOR = '>',
 	RESTRICT = scalarltsel, JOIN = scalarltjoinsel
 );
 
 CREATE OPERATOR >= (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_ge,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_ge,
 	COMMUTATOR = '<=', NEGATOR = '<',
 	RESTRICT = scalargtsel, JOIN = scalargtjoinsel
 );
 
 CREATE OPERATOR && (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_overlap,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_overlap,
 	COMMUTATOR = '&&',
 	RESTRICT = areasel, JOIN = areajoinsel
 );
 
 CREATE OPERATOR = (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_eq,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_eq,
 	COMMUTATOR = '=', NEGATOR = '<>',
 	RESTRICT = eqsel, JOIN = eqjoinsel,
 	MERGES
 );
 
 CREATE OPERATOR <> (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_ne,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_ne,
 	COMMUTATOR = '<>', NEGATOR = '=',
 	RESTRICT = neqsel, JOIN = neqjoinsel
 );
 
 CREATE OPERATOR @> (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_contains,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_contains,
 	COMMUTATOR = '<@',
 	RESTRICT = contsel, JOIN = contjoinsel
 );
 
 CREATE OPERATOR <@ (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_contained,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_contained,
 	COMMUTATOR = '@>',
 	RESTRICT = contsel, JOIN = contjoinsel
 );
 
 -- these are obsolete/deprecated:
 CREATE OPERATOR @ (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_contains,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_contains,
 	COMMUTATOR = '~',
 	RESTRICT = contsel, JOIN = contjoinsel
 );
 
 CREATE OPERATOR ~ (
-	LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_contained,
+	LEFTARG = "cube", RIGHTARG = "cube", PROCEDURE = cube_contained,
 	COMMUTATOR = '@',
 	RESTRICT = contsel, JOIN = contjoinsel
 );
 
 
 -- define the GiST support methods
-CREATE FUNCTION g_cube_consistent(internal,cube,int,oid,internal)
+CREATE FUNCTION g_cube_consistent(internal,"cube",int,oid,internal)
 RETURNS bool
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
@@ -287,11 +287,11 @@ AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
 CREATE FUNCTION g_cube_union(internal, internal)
-RETURNS cube
+RETURNS "cube"
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
 
-CREATE FUNCTION g_cube_same(cube, cube, internal)
+CREATE FUNCTION g_cube_same("cube", "cube", internal)
 RETURNS internal
 AS 'MODULE_PATHNAME'
 LANGUAGE C IMMUTABLE STRICT;
@@ -300,26 +300,26 @@ LANGUAGE C IMMUTABLE STRICT;
 -- Create the operator classes for indexing
 
 CREATE OPERATOR CLASS cube_ops
-    DEFAULT FOR TYPE cube USING btree AS
+    DEFAULT FOR TYPE "cube" USING btree AS
         OPERATOR        1       < ,
         OPERATOR        2       <= ,
         OPERATOR        3       = ,
         OPERATOR        4       >= ,
         OPERATOR        5       > ,
-        FUNCTION        1       cube_cmp(cube, cube);
+        FUNCTION        1       cube_cmp("cube", "cube");
 
 CREATE OPERATOR CLASS gist_cube_ops
-    DEFAULT FOR TYPE cube USING gist AS
+    DEFAULT FOR TYPE "cube" USING gist AS
 	OPERATOR	3	&& ,
 	OPERATOR	6	= ,
 	OPERATOR	7	@> ,
 	OPERATOR	8	<@ ,
 	OPERATOR	13	@ ,
 	OPERATOR	14	~ ,
-	FUNCTION	1	g_cube_consistent (internal, cube, int, oid, internal),
+	FUNCTION	1	g_cube_consistent (internal, "cube", int, oid, internal),
 	FUNCTION	2	g_cube_union (internal, internal),
 	FUNCTION	3	g_cube_compress (internal),
 	FUNCTION	4	g_cube_decompress (internal),
 	FUNCTION	5	g_cube_penalty (internal, internal, internal),
 	FUNCTION	6	g_cube_picksplit (internal, internal),
-	FUNCTION	7	g_cube_same (cube, cube, internal);
+	FUNCTION	7	g_cube_same ("cube", "cube", internal);
diff --git a/contrib/cube/cube--unpackaged--1.0.sql b/contrib/cube/cube--unpackaged--1.0.sql
index 1065512..acacb61 100644
--- a/contrib/cube/cube--unpackaged--1.0.sql
+++ b/contrib/cube/cube--unpackaged--1.0.sql
@@ -1,56 +1,56 @@
 /* contrib/cube/cube--unpackaged--1.0.sql */
 
 -- complain if script is sourced in psql, rather than via CREATE EXTENSION
-\echo Use "CREATE EXTENSION cube FROM unpackaged" to load this file. \quit
+\echo Use 'CREATE EXTENSION "cube" FROM unpackaged' to load this file. \quit
 
-ALTER EXTENSION cube ADD type cube;
-ALTER EXTENSION cube ADD function cube_in(cstring);
-ALTER EXTENSION cube ADD function cube(double precision[],double precision[]);
-ALTER EXTENSION cube ADD function cube(double precision[]);
-ALTER EXTENSION cube ADD function cube_out(cube);
-ALTER EXTENSION cube ADD function cube_eq(cube,cube);
-ALTER EXTENSION cube ADD function cube_ne(cube,cube);
-ALTER EXTENSION cube ADD function cube_lt(cube,cube);
-ALTER EXTENSION cube ADD function cube_gt(cube,cube);
-ALTER EXTENSION cube ADD function cube_le(cube,cube);
-ALTER EXTENSION cube ADD function cube_ge(cube,cube);
-ALTER EXTENSION cube ADD function cube_cmp(cube,cube);
-ALTER EXTENSION cube ADD function cube_contains(cube,cube);
-ALTER EXTENSION cube ADD function cube_contained(cube,cube);
-ALTER EXTENSION cube ADD function cube_overlap(cube,cube);
-ALTER EXTENSION cube ADD function cube_union(cube,cube);
-ALTER EXTENSION cube ADD function cube_inter(cube,cube);
-ALTER EXTENSION cube ADD function cube_size(cube);
-ALTER EXTENSION cube ADD function cube_subset(cube,integer[]);
-ALTER EXTENSION cube ADD function cube_distance(cube,cube);
-ALTER EXTENSION cube ADD function cube_dim(cube);
-ALTER EXTENSION cube ADD function cube_ll_coord(cube,integer);
-ALTER EXTENSION cube ADD function cube_ur_coord(cube,integer);
-ALTER EXTENSION cube ADD function cube(double precision);
-ALTER EXTENSION cube ADD function cube(double precision,double precision);
-ALTER EXTENSION cube ADD function cube(cube,double precision);
-ALTER EXTENSION cube ADD function cube(cube,double precision,double precision);
-ALTER EXTENSION cube ADD function cube_is_point(cube);
-ALTER EXTENSION cube ADD function cube_enlarge(cube,double precision,integer);
-ALTER EXTENSION cube ADD operator >(cube,cube);
-ALTER EXTENSION cube ADD operator >=(cube,cube);
-ALTER EXTENSION cube ADD operator <(cube,cube);
-ALTER EXTENSION cube ADD operator <=(cube,cube);
-ALTER EXTENSION cube ADD operator &&(cube,cube);
-ALTER EXTENSION cube ADD operator <>(cube,cube);
-ALTER EXTENSION cube ADD operator =(cube,cube);
-ALTER EXTENSION cube ADD operator <@(cube,cube);
-ALTER EXTENSION cube ADD operator @>(cube,cube);
-ALTER EXTENSION cube ADD operator ~(cube,cube);
-ALTER EXTENSION cube ADD operator @(cube,cube);
-ALTER EXTENSION cube ADD function g_cube_consistent(internal,cube,integer,oid,internal);
-ALTER EXTENSION cube ADD function g_cube_compress(internal);
-ALTER EXTENSION cube ADD function g_cube_decompress(internal);
-ALTER EXTENSION cube ADD function g_cube_penalty(internal,internal,internal);
-ALTER EXTENSION cube ADD function g_cube_picksplit(internal,internal);
-ALTER EXTENSION cube ADD function g_cube_union(internal,internal);
-ALTER EXTENSION cube ADD function g_cube_same(cube,cube,internal);
-ALTER EXTENSION cube ADD operator family cube_ops using btree;
-ALTER EXTENSION cube ADD operator class cube_ops using btree;
-ALTER EXTENSION cube ADD operator family gist_cube_ops using gist;
-ALTER EXTENSION cube ADD operator class gist_cube_ops using gist;
+ALTER EXTENSION "cube" ADD type "cube";
+ALTER EXTENSION "cube" ADD function cube_in(cstring);
+ALTER EXTENSION "cube" ADD function "cube"(double precision[],double precision[]);
+ALTER EXTENSION "cube" ADD function "cube"(double precision[]);
+ALTER EXTENSION "cube" ADD function cube_out("cube");
+ALTER EXTENSION "cube" ADD function cube_eq("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_ne("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_lt("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_gt("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_le("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_ge("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_cmp("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_contains("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_contained("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_overlap("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_union("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_inter("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_size("cube");
+ALTER EXTENSION "cube" ADD function cube_subset("cube",integer[]);
+ALTER EXTENSION "cube" ADD function cube_distance("cube","cube");
+ALTER EXTENSION "cube" ADD function cube_dim("cube");
+ALTER EXTENSION "cube" ADD function cube_ll_coord("cube",integer);
+ALTER EXTENSION "cube" ADD function cube_ur_coord("cube",integer);
+ALTER EXTENSION "cube" ADD function "cube"(double precision);
+ALTER EXTENSION "cube" ADD function "cube"(double precision,double precision);
+ALTER EXTENSION "cube" ADD function "cube"("cube",double precision);
+ALTER EXTENSION "cube" ADD function "cube"("cube",double precision,double precision);
+ALTER EXTENSION "cube" ADD function cube_is_point("cube");
+ALTER EXTENSION "cube" ADD function cube_enlarge("cube",double precision,integer);
+ALTER EXTENSION "cube" ADD operator >("cube","cube");
+ALTER EXTENSION "cube" ADD operator >=("cube","cube");
+ALTER EXTENSION "cube" ADD operator <("cube","cube");
+ALTER EXTENSION "cube" ADD operator <=("cube","cube");
+ALTER EXTENSION "cube" ADD operator &&("cube","cube");
+ALTER EXTENSION "cube" ADD operator <>("cube","cube");
+ALTER EXTENSION "cube" ADD operator =("cube","cube");
+ALTER EXTENSION "cube" ADD operator <@("cube","cube");
+ALTER EXTENSION "cube" ADD operator @>("cube","cube");
+ALTER EXTENSION "cube" ADD operator ~("cube","cube");
+ALTER EXTENSION "cube" ADD operator @("cube","cube");
+ALTER EXTENSION "cube" ADD function g_cube_consistent(internal,"cube",integer,oid,internal);
+ALTER EXTENSION "cube" ADD function g_cube_compress(internal);
+ALTER EXTENSION "cube" ADD function g_cube_decompress(internal);
+ALTER EXTENSION "cube" ADD function g_cube_penalty(internal,internal,internal);
+ALTER EXTENSION "cube" ADD function g_cube_picksplit(internal,internal);
+ALTER EXTENSION "cube" ADD function g_cube_union(internal,internal);
+ALTER EXTENSION "cube" ADD function g_cube_same("cube","cube",internal);
+ALTER EXTENSION "cube" ADD operator family cube_ops using btree;
+ALTER EXTENSION "cube" ADD operator class cube_ops using btree;
+ALTER EXTENSION "cube" ADD operator family gist_cube_ops using gist;
+ALTER EXTENSION "cube" ADD operator class gist_cube_ops using gist;
diff --git a/contrib/cube/expected/cube.out b/contrib/cube/expected/cube.out
index ca9555e..9422218 100644
--- a/contrib/cube/expected/cube.out
+++ b/contrib/cube/expected/cube.out
@@ -1,552 +1,552 @@
 --
 --  Test cube datatype
 --
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 --
 -- testing the input and output functions
 --
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1'::cube AS cube;
+SELECT '-1'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1.'::cube AS cube;
+SELECT '1.'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.'::cube AS cube;
+SELECT '-1.'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '.1'::cube AS cube;
+SELECT '.1'::"cube" AS "cube";
  cube  
 -------
  (0.1)
 (1 row)
 
-SELECT '-.1'::cube AS cube;
+SELECT '-.1'::"cube" AS "cube";
   cube  
 --------
  (-0.1)
 (1 row)
 
-SELECT '1.0'::cube AS cube;
+SELECT '1.0'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.0'::cube AS cube;
+SELECT '-1.0'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1e27'::cube AS cube;
+SELECT '1e27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1e27'::cube AS cube;
+SELECT '-1e27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1.0e27'::cube AS cube;
+SELECT '1.0e27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1.0e27'::cube AS cube;
+SELECT '-1.0e27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1e+27'::cube AS cube;
+SELECT '1e+27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1e+27'::cube AS cube;
+SELECT '-1e+27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1.0e+27'::cube AS cube;
+SELECT '1.0e+27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1.0e+27'::cube AS cube;
+SELECT '-1.0e+27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1e-7'::cube AS cube;
+SELECT '1e-7'::"cube" AS "cube";
   cube   
 ---------
  (1e-07)
 (1 row)
 
-SELECT '-1e-7'::cube AS cube;
+SELECT '-1e-7'::"cube" AS "cube";
    cube   
 ----------
  (-1e-07)
 (1 row)
 
-SELECT '1.0e-7'::cube AS cube;
+SELECT '1.0e-7'::"cube" AS "cube";
   cube   
 ---------
  (1e-07)
 (1 row)
 
-SELECT '-1.0e-7'::cube AS cube;
+SELECT '-1.0e-7'::"cube" AS "cube";
    cube   
 ----------
  (-1e-07)
 (1 row)
 
-SELECT '1e-700'::cube AS cube;
+SELECT '1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '-1e-700'::cube AS cube;
+SELECT '-1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '1234567890123456'::cube AS cube;
+SELECT '1234567890123456'::"cube" AS "cube";
           cube          
 ------------------------
  (1.23456789012346e+15)
 (1 row)
 
-SELECT '+1234567890123456'::cube AS cube;
+SELECT '+1234567890123456'::"cube" AS "cube";
           cube          
 ------------------------
  (1.23456789012346e+15)
 (1 row)
 
-SELECT '-1234567890123456'::cube AS cube;
+SELECT '-1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (-1.23456789012346e+15)
 (1 row)
 
-SELECT '.1234567890123456'::cube AS cube;
+SELECT '.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '+.1234567890123456'::cube AS cube;
+SELECT '+.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '-.1234567890123456'::"cube" AS "cube";
          cube         
 ----------------------
  (-0.123456789012346)
 (1 row)
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '(1,2)'::cube AS cube;
+SELECT '(1,2)'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '1,2,3,4,5'::cube AS cube;
+SELECT '1,2,3,4,5'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '(0),(1)'::cube AS cube;
+SELECT '(0),(1)'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '[(0),(0)]'::cube AS cube;
+SELECT '[(0),(0)]'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '[(0),(1)]'::cube AS cube;
+SELECT '[(0),(1)]'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
+SELECT ''::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT ''::cube AS cube;
+LINE 1: SELECT ''::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT 'ABC'::cube AS cube;
+SELECT 'ABC'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT 'ABC'::cube AS cube;
+LINE 1: SELECT 'ABC'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "A"
-SELECT '()'::cube AS cube;
+SELECT '()'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '()'::cube AS cube;
+LINE 1: SELECT '()'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[]'::cube AS cube;
+SELECT '[]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[]'::cube AS cube;
+LINE 1: SELECT '[]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[()]'::cube AS cube;
+SELECT '[()]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[()]'::cube AS cube;
+LINE 1: SELECT '[()]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[(1)]'::cube AS cube;
+SELECT '[(1)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1)]'::cube AS cube;
+LINE 1: SELECT '[(1)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),]'::cube AS cube;
+SELECT '[(1),]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),]'::cube AS cube;
+LINE 1: SELECT '[(1),]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),2]'::cube AS cube;
+SELECT '[(1),2]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),2]'::cube AS cube;
+LINE 1: SELECT '[(1),2]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "2"
-SELECT '[(1),(2),(3)]'::cube AS cube;
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2),(3)]'::cube AS cube;
+LINE 1: SELECT '[(1),(2),(3)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '1,'::cube AS cube;
+SELECT '1,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,'::cube AS cube;
+LINE 1: SELECT '1,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,2,'::cube AS cube;
+SELECT '1,2,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2,'::cube AS cube;
+LINE 1: SELECT '1,2,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,,2'::cube AS cube;
+SELECT '1,,2'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,,2'::cube AS cube;
+LINE 1: SELECT '1,,2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,)'::cube AS cube;
+SELECT '(1,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,)'::cube AS cube;
+LINE 1: SELECT '(1,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,2,)'::cube AS cube;
+SELECT '(1,2,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,)'::cube AS cube;
+LINE 1: SELECT '(1,2,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,,2)'::cube AS cube;
+SELECT '(1,,2)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,,2)'::cube AS cube;
+LINE 1: SELECT '(1,,2)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2)],'::cube AS cube;
+LINE 1: SELECT '[(1),(2)],'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2,3),(2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2),(1,2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1),(2),'::cube AS cube; -- 2
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
 ERROR:  bad cube representation
-LINE 1: SELECT '(1),(2),'::cube AS cube;
+LINE 1: SELECT '(1),(2),'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3),(2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2,3),(2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2),(1,2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2),(1,2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)ab'::cube AS cube;
+LINE 1: SELECT '(1,2,3)ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2,3)a'::cube AS cube; -- 5
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)a'::cube AS cube;
+LINE 1: SELECT '(1,2,3)a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2)('::cube AS cube; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2)('::cube AS cube;
+LINE 1: SELECT '(1,2)('::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "("
-SELECT '1,2ab'::cube AS cube; -- 6
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2ab'::cube AS cube;
+LINE 1: SELECT '1,2ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1 e7'::cube AS cube; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1 e7'::cube AS cube;
+LINE 1: SELECT '1 e7'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "e"
-SELECT '1,2a'::cube AS cube; -- 7
+SELECT '1,2a'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2a'::cube AS cube;
+LINE 1: SELECT '1,2a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1..2'::cube AS cube;
+LINE 1: SELECT '1..2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ".2"
 --
 -- Testing building cubes from float8 values
 --
-SELECT cube(0::float8);
+SELECT "cube"(0::float8);
  cube 
 ------
  (0)
 (1 row)
 
-SELECT cube(1::float8);
+SELECT "cube"(1::float8);
  cube 
 ------
  (1)
 (1 row)
 
-SELECT cube(1,2);
+SELECT "cube"(1,2);
   cube   
 ---------
  (1),(2)
 (1 row)
 
-SELECT cube(cube(1,2),3);
+SELECT "cube"("cube"(1,2),3);
      cube      
 ---------------
  (1, 3),(2, 3)
 (1 row)
 
-SELECT cube(cube(1,2),3,4);
+SELECT "cube"("cube"(1,2),3,4);
      cube      
 ---------------
  (1, 3),(2, 4)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 5)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 6)
 (1 row)
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
  cube 
 ------
  (0)
 (1 row)
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
         cube         
 ---------------------
  (0, 1, 2),(3, 4, 5)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
 ERROR:  UR and LL arrays must be of same length
-SELECT cube(NULL::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
  cube 
 ------
  
 (1 row)
 
-SELECT cube('{0,1,2}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
         cube_subset        
 ---------------------------
  (5, 3, 1, 1),(8, 7, 6, 6)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
  cube_subset  
 --------------
  (5, 3, 1, 1)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
      cube     
 --------------
  (5, 6, 7, 8)
 (1 row)
 
-SELECT cube(1.37); -- cube_f8
+SELECT "cube"(1.37); -- cube_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(cube(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(1, 24)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 24)
@@ -555,12 +555,12 @@ SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
 DETAIL:  A cube cannot have more than 100 dimensions.
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
@@ -570,37 +570,37 @@ DETAIL:  A cube cannot have more than 100 dimensions.
 --
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -609,97 +609,97 @@ SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1'::cube   < '2'::cube AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1,1'::cube > '1,2'::cube AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,1'::cube < '1,2'::cube AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -707,235 +707,235 @@ SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contains" (the left operand is the cube that entirely encloses the
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
@@ -943,77 +943,77 @@ SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
  cube_distance 
 ---------------
              4
 (1 row)
 
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
  cube_distance 
 ---------------
            0.5
 (1 row)
 
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
  cube_distance 
 ---------------
              0
 (1 row)
 
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
  cube_distance 
 ---------------
            190
 (1 row)
 
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
   cube_distance   
 ------------------
  140.762210837994
 (1 row)
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
+SELECT "cube"('(1,1.2)'::text);
    cube   
 ----------
  (1, 1.2)
 (1 row)
 
-SELECT cube(NULL);
+SELECT "cube"(NULL);
  cube 
 ------
  
 (1 row)
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
+SELECT cube_dim('(0)'::"cube");
  cube_dim 
 ----------
         1
 (1 row)
 
-SELECT cube_dim('(0,0)'::cube);
+SELECT cube_dim('(0,0)'::"cube");
  cube_dim 
 ----------
         2
 (1 row)
 
-SELECT cube_dim('(0,0,0)'::cube);
+SELECT cube_dim('(0,0,0)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
  cube_dim 
 ----------
         5
@@ -1021,55 +1021,55 @@ SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
 
 -- Test of cube_ll_coord function (retrieves LL coodinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ll_coord 
 ---------------
             -1
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ll_coord 
 ---------------
             -2
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
  cube_ll_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
  cube_ll_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
  cube_ll_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
  cube_ll_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
@@ -1077,55 +1077,55 @@ SELECT cube_ll_coord('(42,137)'::cube, 3);
 
 -- Test of cube_ur_coord function (retrieves UR coodinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
  cube_ur_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
  cube_ur_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
@@ -1133,37 +1133,37 @@ SELECT cube_ur_coord('(42,137)'::cube, 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
+SELECT cube_is_point('(0)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
  cube_is_point 
 ---------------
  f
@@ -1171,121 +1171,121 @@ SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 2);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
  cube_enlarge 
 --------------
  (-2),(2)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 2);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-1, -1),(1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
          cube_enlarge          
 -------------------------------
  (-3, -1, -1, -1),(3, 1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 2);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-4, -3),(3, 8)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
    cube_enlarge   
 ------------------
  (-6, -5),(5, 10)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
   cube_enlarge   
 -----------------
  (-2, -1),(1, 6)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
     cube_enlarge     
 ---------------------
  (-0.5, 1),(-0.5, 4)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
@@ -1293,31 +1293,31 @@ SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
       cube_union      
 ----------------------
  (1, 2, 0),(8, 9, 10)
 (1 row)
 
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
         cube_union         
 ---------------------------
  (1, 2, 0, 0),(4, 2, 0, 0)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
   cube_union   
 ---------------
  (1, 2),(4, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
  cube_union 
 ------------
  (1, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
  cube_union 
 ------------
  (1, 2, 0)
@@ -1325,43 +1325,43 @@ SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
    cube_inter    
 -----------------
  (3, 4),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
   cube_inter   
 ---------------
  (3, 4),(6, 5)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
     cube_inter     
 -------------------
  (13, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
     cube_inter    
 ------------------
  (3, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
  cube_inter 
 ------------
  (10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
  cube_inter 
 ------------
  (1, 2, 3)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
      cube_inter      
 ---------------------
  (5, 6, 3),(1, 2, 3)
@@ -1369,13 +1369,13 @@ SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
  cube_size 
 -----------
         88
 (1 row)
 
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(42,137)'::"cube");
  cube_size 
 -----------
          0
@@ -1383,7 +1383,7 @@ SELECT cube_size('(42,137)'::cube);
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 \copy test_cube from 'data/test_cube.data'
 CREATE INDEX test_cube_ix ON test_cube USING gist (c);
 SELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' ORDER BY c;
diff --git a/contrib/cube/expected/cube_1.out b/contrib/cube/expected/cube_1.out
index c07d61d..4f47c54 100644
--- a/contrib/cube/expected/cube_1.out
+++ b/contrib/cube/expected/cube_1.out
@@ -1,552 +1,552 @@
 --
 --  Test cube datatype
 --
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 --
 -- testing the input and output functions
 --
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1'::cube AS cube;
+SELECT '-1'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1.'::cube AS cube;
+SELECT '1.'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.'::cube AS cube;
+SELECT '-1.'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '.1'::cube AS cube;
+SELECT '.1'::"cube" AS "cube";
  cube  
 -------
  (0.1)
 (1 row)
 
-SELECT '-.1'::cube AS cube;
+SELECT '-.1'::"cube" AS "cube";
   cube  
 --------
  (-0.1)
 (1 row)
 
-SELECT '1.0'::cube AS cube;
+SELECT '1.0'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.0'::cube AS cube;
+SELECT '-1.0'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1e27'::cube AS cube;
+SELECT '1e27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1e27'::cube AS cube;
+SELECT '-1e27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1.0e27'::cube AS cube;
+SELECT '1.0e27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1.0e27'::cube AS cube;
+SELECT '-1.0e27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1e+27'::cube AS cube;
+SELECT '1e+27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1e+27'::cube AS cube;
+SELECT '-1e+27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1.0e+27'::cube AS cube;
+SELECT '1.0e+27'::"cube" AS "cube";
   cube   
 ---------
  (1e+27)
 (1 row)
 
-SELECT '-1.0e+27'::cube AS cube;
+SELECT '-1.0e+27'::"cube" AS "cube";
    cube   
 ----------
  (-1e+27)
 (1 row)
 
-SELECT '1e-7'::cube AS cube;
+SELECT '1e-7'::"cube" AS "cube";
   cube   
 ---------
  (1e-07)
 (1 row)
 
-SELECT '-1e-7'::cube AS cube;
+SELECT '-1e-7'::"cube" AS "cube";
    cube   
 ----------
  (-1e-07)
 (1 row)
 
-SELECT '1.0e-7'::cube AS cube;
+SELECT '1.0e-7'::"cube" AS "cube";
   cube   
 ---------
  (1e-07)
 (1 row)
 
-SELECT '-1.0e-7'::cube AS cube;
+SELECT '-1.0e-7'::"cube" AS "cube";
    cube   
 ----------
  (-1e-07)
 (1 row)
 
-SELECT '1e-700'::cube AS cube;
+SELECT '1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '-1e-700'::cube AS cube;
+SELECT '-1e-700'::"cube" AS "cube";
  cube 
 ------
  (-0)
 (1 row)
 
-SELECT '1234567890123456'::cube AS cube;
+SELECT '1234567890123456'::"cube" AS "cube";
           cube          
 ------------------------
  (1.23456789012346e+15)
 (1 row)
 
-SELECT '+1234567890123456'::cube AS cube;
+SELECT '+1234567890123456'::"cube" AS "cube";
           cube          
 ------------------------
  (1.23456789012346e+15)
 (1 row)
 
-SELECT '-1234567890123456'::cube AS cube;
+SELECT '-1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (-1.23456789012346e+15)
 (1 row)
 
-SELECT '.1234567890123456'::cube AS cube;
+SELECT '.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '+.1234567890123456'::cube AS cube;
+SELECT '+.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '-.1234567890123456'::"cube" AS "cube";
          cube         
 ----------------------
  (-0.123456789012346)
 (1 row)
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '(1,2)'::cube AS cube;
+SELECT '(1,2)'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '1,2,3,4,5'::cube AS cube;
+SELECT '1,2,3,4,5'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '(0),(1)'::cube AS cube;
+SELECT '(0),(1)'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '[(0),(0)]'::cube AS cube;
+SELECT '[(0),(0)]'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '[(0),(1)]'::cube AS cube;
+SELECT '[(0),(1)]'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
+SELECT ''::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT ''::cube AS cube;
+LINE 1: SELECT ''::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT 'ABC'::cube AS cube;
+SELECT 'ABC'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT 'ABC'::cube AS cube;
+LINE 1: SELECT 'ABC'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "A"
-SELECT '()'::cube AS cube;
+SELECT '()'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '()'::cube AS cube;
+LINE 1: SELECT '()'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[]'::cube AS cube;
+SELECT '[]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[]'::cube AS cube;
+LINE 1: SELECT '[]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[()]'::cube AS cube;
+SELECT '[()]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[()]'::cube AS cube;
+LINE 1: SELECT '[()]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[(1)]'::cube AS cube;
+SELECT '[(1)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1)]'::cube AS cube;
+LINE 1: SELECT '[(1)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),]'::cube AS cube;
+SELECT '[(1),]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),]'::cube AS cube;
+LINE 1: SELECT '[(1),]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),2]'::cube AS cube;
+SELECT '[(1),2]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),2]'::cube AS cube;
+LINE 1: SELECT '[(1),2]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "2"
-SELECT '[(1),(2),(3)]'::cube AS cube;
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2),(3)]'::cube AS cube;
+LINE 1: SELECT '[(1),(2),(3)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '1,'::cube AS cube;
+SELECT '1,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,'::cube AS cube;
+LINE 1: SELECT '1,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,2,'::cube AS cube;
+SELECT '1,2,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2,'::cube AS cube;
+LINE 1: SELECT '1,2,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,,2'::cube AS cube;
+SELECT '1,,2'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,,2'::cube AS cube;
+LINE 1: SELECT '1,,2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,)'::cube AS cube;
+SELECT '(1,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,)'::cube AS cube;
+LINE 1: SELECT '(1,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,2,)'::cube AS cube;
+SELECT '(1,2,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,)'::cube AS cube;
+LINE 1: SELECT '(1,2,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,,2)'::cube AS cube;
+SELECT '(1,,2)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,,2)'::cube AS cube;
+LINE 1: SELECT '(1,,2)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2)],'::cube AS cube;
+LINE 1: SELECT '[(1),(2)],'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2,3),(2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2),(1,2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1),(2),'::cube AS cube; -- 2
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
 ERROR:  bad cube representation
-LINE 1: SELECT '(1),(2),'::cube AS cube;
+LINE 1: SELECT '(1),(2),'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3),(2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2,3),(2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2),(1,2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2),(1,2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)ab'::cube AS cube;
+LINE 1: SELECT '(1,2,3)ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2,3)a'::cube AS cube; -- 5
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)a'::cube AS cube;
+LINE 1: SELECT '(1,2,3)a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2)('::cube AS cube; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2)('::cube AS cube;
+LINE 1: SELECT '(1,2)('::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "("
-SELECT '1,2ab'::cube AS cube; -- 6
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2ab'::cube AS cube;
+LINE 1: SELECT '1,2ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1 e7'::cube AS cube; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1 e7'::cube AS cube;
+LINE 1: SELECT '1 e7'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "e"
-SELECT '1,2a'::cube AS cube; -- 7
+SELECT '1,2a'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2a'::cube AS cube;
+LINE 1: SELECT '1,2a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1..2'::cube AS cube;
+LINE 1: SELECT '1..2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ".2"
 --
 -- Testing building cubes from float8 values
 --
-SELECT cube(0::float8);
+SELECT "cube"(0::float8);
  cube 
 ------
  (0)
 (1 row)
 
-SELECT cube(1::float8);
+SELECT "cube"(1::float8);
  cube 
 ------
  (1)
 (1 row)
 
-SELECT cube(1,2);
+SELECT "cube"(1,2);
   cube   
 ---------
  (1),(2)
 (1 row)
 
-SELECT cube(cube(1,2),3);
+SELECT "cube"("cube"(1,2),3);
      cube      
 ---------------
  (1, 3),(2, 3)
 (1 row)
 
-SELECT cube(cube(1,2),3,4);
+SELECT "cube"("cube"(1,2),3,4);
      cube      
 ---------------
  (1, 3),(2, 4)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 5)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 6)
 (1 row)
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
  cube 
 ------
  (0)
 (1 row)
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
         cube         
 ---------------------
  (0, 1, 2),(3, 4, 5)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
 ERROR:  UR and LL arrays must be of same length
-SELECT cube(NULL::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
  cube 
 ------
  
 (1 row)
 
-SELECT cube('{0,1,2}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
         cube_subset        
 ---------------------------
  (5, 3, 1, 1),(8, 7, 6, 6)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
  cube_subset  
 --------------
  (5, 3, 1, 1)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
      cube     
 --------------
  (5, 6, 7, 8)
 (1 row)
 
-SELECT cube(1.37); -- cube_f8
+SELECT "cube"(1.37); -- cube_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(cube(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(1, 24)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 24)
@@ -555,12 +555,12 @@ SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
 DETAIL:  A cube cannot have more than 100 dimensions.
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
@@ -570,37 +570,37 @@ DETAIL:  A cube cannot have more than 100 dimensions.
 --
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -609,97 +609,97 @@ SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1'::cube   < '2'::cube AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1,1'::cube > '1,2'::cube AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,1'::cube < '1,2'::cube AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -707,235 +707,235 @@ SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contains" (the left operand is the cube that entirely encloses the
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
@@ -943,77 +943,77 @@ SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
  cube_distance 
 ---------------
              4
 (1 row)
 
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
  cube_distance 
 ---------------
            0.5
 (1 row)
 
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
  cube_distance 
 ---------------
              0
 (1 row)
 
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
  cube_distance 
 ---------------
            190
 (1 row)
 
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
   cube_distance   
 ------------------
  140.762210837994
 (1 row)
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
+SELECT "cube"('(1,1.2)'::text);
    cube   
 ----------
  (1, 1.2)
 (1 row)
 
-SELECT cube(NULL);
+SELECT "cube"(NULL);
  cube 
 ------
  
 (1 row)
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
+SELECT cube_dim('(0)'::"cube");
  cube_dim 
 ----------
         1
 (1 row)
 
-SELECT cube_dim('(0,0)'::cube);
+SELECT cube_dim('(0,0)'::"cube");
  cube_dim 
 ----------
         2
 (1 row)
 
-SELECT cube_dim('(0,0,0)'::cube);
+SELECT cube_dim('(0,0,0)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
  cube_dim 
 ----------
         5
@@ -1021,55 +1021,55 @@ SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
 
 -- Test of cube_ll_coord function (retrieves LL coodinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ll_coord 
 ---------------
             -1
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ll_coord 
 ---------------
             -2
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
  cube_ll_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
  cube_ll_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
  cube_ll_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
  cube_ll_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
@@ -1077,55 +1077,55 @@ SELECT cube_ll_coord('(42,137)'::cube, 3);
 
 -- Test of cube_ur_coord function (retrieves UR coodinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
  cube_ur_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
  cube_ur_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
@@ -1133,37 +1133,37 @@ SELECT cube_ur_coord('(42,137)'::cube, 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
+SELECT cube_is_point('(0)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
  cube_is_point 
 ---------------
  f
@@ -1171,121 +1171,121 @@ SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 2);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
  cube_enlarge 
 --------------
  (-2),(2)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 2);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-1, -1),(1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
          cube_enlarge          
 -------------------------------
  (-3, -1, -1, -1),(3, 1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 2);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-4, -3),(3, 8)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
    cube_enlarge   
 ------------------
  (-6, -5),(5, 10)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
   cube_enlarge   
 -----------------
  (-2, -1),(1, 6)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
     cube_enlarge     
 ---------------------
  (-0.5, 1),(-0.5, 4)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
@@ -1293,31 +1293,31 @@ SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
       cube_union      
 ----------------------
  (1, 2, 0),(8, 9, 10)
 (1 row)
 
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
         cube_union         
 ---------------------------
  (1, 2, 0, 0),(4, 2, 0, 0)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
   cube_union   
 ---------------
  (1, 2),(4, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
  cube_union 
 ------------
  (1, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
  cube_union 
 ------------
  (1, 2, 0)
@@ -1325,43 +1325,43 @@ SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
    cube_inter    
 -----------------
  (3, 4),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
   cube_inter   
 ---------------
  (3, 4),(6, 5)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
     cube_inter     
 -------------------
  (13, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
     cube_inter    
 ------------------
  (3, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
  cube_inter 
 ------------
  (10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
  cube_inter 
 ------------
  (1, 2, 3)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
      cube_inter      
 ---------------------
  (5, 6, 3),(1, 2, 3)
@@ -1369,13 +1369,13 @@ SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
  cube_size 
 -----------
         88
 (1 row)
 
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(42,137)'::"cube");
  cube_size 
 -----------
          0
@@ -1383,7 +1383,7 @@ SELECT cube_size('(42,137)'::cube);
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 \copy test_cube from 'data/test_cube.data'
 CREATE INDEX test_cube_ix ON test_cube USING gist (c);
 SELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' ORDER BY c;
diff --git a/contrib/cube/expected/cube_2.out b/contrib/cube/expected/cube_2.out
index 3767d0e..747e9ba 100644
--- a/contrib/cube/expected/cube_2.out
+++ b/contrib/cube/expected/cube_2.out
@@ -1,552 +1,552 @@
 --
 --  Test cube datatype
 --
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 --
 -- testing the input and output functions
 --
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1'::cube AS cube;
+SELECT '-1'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1.'::cube AS cube;
+SELECT '1.'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.'::cube AS cube;
+SELECT '-1.'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '.1'::cube AS cube;
+SELECT '.1'::"cube" AS "cube";
  cube  
 -------
  (0.1)
 (1 row)
 
-SELECT '-.1'::cube AS cube;
+SELECT '-.1'::"cube" AS "cube";
   cube  
 --------
  (-0.1)
 (1 row)
 
-SELECT '1.0'::cube AS cube;
+SELECT '1.0'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.0'::cube AS cube;
+SELECT '-1.0'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1e27'::cube AS cube;
+SELECT '1e27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1e27'::cube AS cube;
+SELECT '-1e27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1.0e27'::cube AS cube;
+SELECT '1.0e27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1.0e27'::cube AS cube;
+SELECT '-1.0e27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1e+27'::cube AS cube;
+SELECT '1e+27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1e+27'::cube AS cube;
+SELECT '-1e+27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1.0e+27'::cube AS cube;
+SELECT '1.0e+27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1.0e+27'::cube AS cube;
+SELECT '-1.0e+27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1e-7'::cube AS cube;
+SELECT '1e-7'::"cube" AS "cube";
    cube   
 ----------
  (1e-007)
 (1 row)
 
-SELECT '-1e-7'::cube AS cube;
+SELECT '-1e-7'::"cube" AS "cube";
    cube    
 -----------
  (-1e-007)
 (1 row)
 
-SELECT '1.0e-7'::cube AS cube;
+SELECT '1.0e-7'::"cube" AS "cube";
    cube   
 ----------
  (1e-007)
 (1 row)
 
-SELECT '-1.0e-7'::cube AS cube;
+SELECT '-1.0e-7'::"cube" AS "cube";
    cube    
 -----------
  (-1e-007)
 (1 row)
 
-SELECT '1e-700'::cube AS cube;
+SELECT '1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '-1e-700'::cube AS cube;
+SELECT '-1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '1234567890123456'::cube AS cube;
+SELECT '1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (1.23456789012346e+015)
 (1 row)
 
-SELECT '+1234567890123456'::cube AS cube;
+SELECT '+1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (1.23456789012346e+015)
 (1 row)
 
-SELECT '-1234567890123456'::cube AS cube;
+SELECT '-1234567890123456'::"cube" AS "cube";
            cube           
 --------------------------
  (-1.23456789012346e+015)
 (1 row)
 
-SELECT '.1234567890123456'::cube AS cube;
+SELECT '.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '+.1234567890123456'::cube AS cube;
+SELECT '+.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '-.1234567890123456'::"cube" AS "cube";
          cube         
 ----------------------
  (-0.123456789012346)
 (1 row)
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '(1,2)'::cube AS cube;
+SELECT '(1,2)'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '1,2,3,4,5'::cube AS cube;
+SELECT '1,2,3,4,5'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '(0),(1)'::cube AS cube;
+SELECT '(0),(1)'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '[(0),(0)]'::cube AS cube;
+SELECT '[(0),(0)]'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '[(0),(1)]'::cube AS cube;
+SELECT '[(0),(1)]'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
+SELECT ''::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT ''::cube AS cube;
+LINE 1: SELECT ''::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT 'ABC'::cube AS cube;
+SELECT 'ABC'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT 'ABC'::cube AS cube;
+LINE 1: SELECT 'ABC'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "A"
-SELECT '()'::cube AS cube;
+SELECT '()'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '()'::cube AS cube;
+LINE 1: SELECT '()'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[]'::cube AS cube;
+SELECT '[]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[]'::cube AS cube;
+LINE 1: SELECT '[]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[()]'::cube AS cube;
+SELECT '[()]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[()]'::cube AS cube;
+LINE 1: SELECT '[()]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[(1)]'::cube AS cube;
+SELECT '[(1)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1)]'::cube AS cube;
+LINE 1: SELECT '[(1)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),]'::cube AS cube;
+SELECT '[(1),]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),]'::cube AS cube;
+LINE 1: SELECT '[(1),]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),2]'::cube AS cube;
+SELECT '[(1),2]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),2]'::cube AS cube;
+LINE 1: SELECT '[(1),2]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "2"
-SELECT '[(1),(2),(3)]'::cube AS cube;
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2),(3)]'::cube AS cube;
+LINE 1: SELECT '[(1),(2),(3)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '1,'::cube AS cube;
+SELECT '1,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,'::cube AS cube;
+LINE 1: SELECT '1,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,2,'::cube AS cube;
+SELECT '1,2,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2,'::cube AS cube;
+LINE 1: SELECT '1,2,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,,2'::cube AS cube;
+SELECT '1,,2'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,,2'::cube AS cube;
+LINE 1: SELECT '1,,2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,)'::cube AS cube;
+SELECT '(1,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,)'::cube AS cube;
+LINE 1: SELECT '(1,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,2,)'::cube AS cube;
+SELECT '(1,2,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,)'::cube AS cube;
+LINE 1: SELECT '(1,2,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,,2)'::cube AS cube;
+SELECT '(1,,2)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,,2)'::cube AS cube;
+LINE 1: SELECT '(1,,2)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2)],'::cube AS cube;
+LINE 1: SELECT '[(1),(2)],'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2,3),(2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2),(1,2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1),(2),'::cube AS cube; -- 2
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
 ERROR:  bad cube representation
-LINE 1: SELECT '(1),(2),'::cube AS cube;
+LINE 1: SELECT '(1),(2),'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3),(2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2,3),(2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2),(1,2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2),(1,2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)ab'::cube AS cube;
+LINE 1: SELECT '(1,2,3)ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2,3)a'::cube AS cube; -- 5
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)a'::cube AS cube;
+LINE 1: SELECT '(1,2,3)a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2)('::cube AS cube; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2)('::cube AS cube;
+LINE 1: SELECT '(1,2)('::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "("
-SELECT '1,2ab'::cube AS cube; -- 6
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2ab'::cube AS cube;
+LINE 1: SELECT '1,2ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1 e7'::cube AS cube; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1 e7'::cube AS cube;
+LINE 1: SELECT '1 e7'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "e"
-SELECT '1,2a'::cube AS cube; -- 7
+SELECT '1,2a'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2a'::cube AS cube;
+LINE 1: SELECT '1,2a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1..2'::cube AS cube;
+LINE 1: SELECT '1..2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ".2"
 --
 -- Testing building cubes from float8 values
 --
-SELECT cube(0::float8);
+SELECT "cube"(0::float8);
  cube 
 ------
  (0)
 (1 row)
 
-SELECT cube(1::float8);
+SELECT "cube"(1::float8);
  cube 
 ------
  (1)
 (1 row)
 
-SELECT cube(1,2);
+SELECT "cube"(1,2);
   cube   
 ---------
  (1),(2)
 (1 row)
 
-SELECT cube(cube(1,2),3);
+SELECT "cube"("cube"(1,2),3);
      cube      
 ---------------
  (1, 3),(2, 3)
 (1 row)
 
-SELECT cube(cube(1,2),3,4);
+SELECT "cube"("cube"(1,2),3,4);
      cube      
 ---------------
  (1, 3),(2, 4)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 5)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 6)
 (1 row)
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
  cube 
 ------
  (0)
 (1 row)
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
         cube         
 ---------------------
  (0, 1, 2),(3, 4, 5)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
 ERROR:  UR and LL arrays must be of same length
-SELECT cube(NULL::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
  cube 
 ------
  
 (1 row)
 
-SELECT cube('{0,1,2}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
         cube_subset        
 ---------------------------
  (5, 3, 1, 1),(8, 7, 6, 6)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
  cube_subset  
 --------------
  (5, 3, 1, 1)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
      cube     
 --------------
  (5, 6, 7, 8)
 (1 row)
 
-SELECT cube(1.37); -- cube_f8
+SELECT "cube"(1.37); -- cube_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(cube(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(1, 24)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 24)
@@ -555,12 +555,12 @@ SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
 DETAIL:  A cube cannot have more than 100 dimensions.
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
@@ -570,37 +570,37 @@ DETAIL:  A cube cannot have more than 100 dimensions.
 --
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -609,97 +609,97 @@ SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1'::cube   < '2'::cube AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1,1'::cube > '1,2'::cube AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,1'::cube < '1,2'::cube AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -707,235 +707,235 @@ SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contains" (the left operand is the cube that entirely encloses the
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
@@ -943,77 +943,77 @@ SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
  cube_distance 
 ---------------
              4
 (1 row)
 
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
  cube_distance 
 ---------------
            0.5
 (1 row)
 
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
  cube_distance 
 ---------------
              0
 (1 row)
 
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
  cube_distance 
 ---------------
            190
 (1 row)
 
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
   cube_distance   
 ------------------
  140.762210837994
 (1 row)
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
+SELECT "cube"('(1,1.2)'::text);
    cube   
 ----------
  (1, 1.2)
 (1 row)
 
-SELECT cube(NULL);
+SELECT "cube"(NULL);
  cube 
 ------
  
 (1 row)
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
+SELECT cube_dim('(0)'::"cube");
  cube_dim 
 ----------
         1
 (1 row)
 
-SELECT cube_dim('(0,0)'::cube);
+SELECT cube_dim('(0,0)'::"cube");
  cube_dim 
 ----------
         2
 (1 row)
 
-SELECT cube_dim('(0,0,0)'::cube);
+SELECT cube_dim('(0,0,0)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
  cube_dim 
 ----------
         5
@@ -1021,55 +1021,55 @@ SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
 
 -- Test of cube_ll_coord function (retrieves LL coodinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ll_coord 
 ---------------
             -1
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ll_coord 
 ---------------
             -2
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
  cube_ll_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
  cube_ll_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
  cube_ll_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
  cube_ll_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
@@ -1077,55 +1077,55 @@ SELECT cube_ll_coord('(42,137)'::cube, 3);
 
 -- Test of cube_ur_coord function (retrieves UR coodinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
  cube_ur_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
  cube_ur_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
@@ -1133,37 +1133,37 @@ SELECT cube_ur_coord('(42,137)'::cube, 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
+SELECT cube_is_point('(0)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
  cube_is_point 
 ---------------
  f
@@ -1171,121 +1171,121 @@ SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 2);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
  cube_enlarge 
 --------------
  (-2),(2)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 2);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-1, -1),(1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
          cube_enlarge          
 -------------------------------
  (-3, -1, -1, -1),(3, 1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 2);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-4, -3),(3, 8)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
    cube_enlarge   
 ------------------
  (-6, -5),(5, 10)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
   cube_enlarge   
 -----------------
  (-2, -1),(1, 6)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
     cube_enlarge     
 ---------------------
  (-0.5, 1),(-0.5, 4)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
@@ -1293,31 +1293,31 @@ SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
       cube_union      
 ----------------------
  (1, 2, 0),(8, 9, 10)
 (1 row)
 
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
         cube_union         
 ---------------------------
  (1, 2, 0, 0),(4, 2, 0, 0)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
   cube_union   
 ---------------
  (1, 2),(4, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
  cube_union 
 ------------
  (1, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
  cube_union 
 ------------
  (1, 2, 0)
@@ -1325,43 +1325,43 @@ SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
    cube_inter    
 -----------------
  (3, 4),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
   cube_inter   
 ---------------
  (3, 4),(6, 5)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
     cube_inter     
 -------------------
  (13, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
     cube_inter    
 ------------------
  (3, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
  cube_inter 
 ------------
  (10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
  cube_inter 
 ------------
  (1, 2, 3)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
      cube_inter      
 ---------------------
  (5, 6, 3),(1, 2, 3)
@@ -1369,13 +1369,13 @@ SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
  cube_size 
 -----------
         88
 (1 row)
 
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(42,137)'::"cube");
  cube_size 
 -----------
          0
@@ -1383,7 +1383,7 @@ SELECT cube_size('(42,137)'::cube);
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 \copy test_cube from 'data/test_cube.data'
 CREATE INDEX test_cube_ix ON test_cube USING gist (c);
 SELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' ORDER BY c;
diff --git a/contrib/cube/expected/cube_3.out b/contrib/cube/expected/cube_3.out
index 2aa42be..33baec1 100644
--- a/contrib/cube/expected/cube_3.out
+++ b/contrib/cube/expected/cube_3.out
@@ -1,552 +1,552 @@
 --
 --  Test cube datatype
 --
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 --
 -- testing the input and output functions
 --
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1'::cube AS cube;
+SELECT '-1'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1.'::cube AS cube;
+SELECT '1.'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.'::cube AS cube;
+SELECT '-1.'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '.1'::cube AS cube;
+SELECT '.1'::"cube" AS "cube";
  cube  
 -------
  (0.1)
 (1 row)
 
-SELECT '-.1'::cube AS cube;
+SELECT '-.1'::"cube" AS "cube";
   cube  
 --------
  (-0.1)
 (1 row)
 
-SELECT '1.0'::cube AS cube;
+SELECT '1.0'::"cube" AS "cube";
  cube 
 ------
  (1)
 (1 row)
 
-SELECT '-1.0'::cube AS cube;
+SELECT '-1.0'::"cube" AS "cube";
  cube 
 ------
  (-1)
 (1 row)
 
-SELECT '1e27'::cube AS cube;
+SELECT '1e27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1e27'::cube AS cube;
+SELECT '-1e27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1.0e27'::cube AS cube;
+SELECT '1.0e27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1.0e27'::cube AS cube;
+SELECT '-1.0e27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1e+27'::cube AS cube;
+SELECT '1e+27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1e+27'::cube AS cube;
+SELECT '-1e+27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1.0e+27'::cube AS cube;
+SELECT '1.0e+27'::"cube" AS "cube";
    cube   
 ----------
  (1e+027)
 (1 row)
 
-SELECT '-1.0e+27'::cube AS cube;
+SELECT '-1.0e+27'::"cube" AS "cube";
    cube    
 -----------
  (-1e+027)
 (1 row)
 
-SELECT '1e-7'::cube AS cube;
+SELECT '1e-7'::"cube" AS "cube";
    cube   
 ----------
  (1e-007)
 (1 row)
 
-SELECT '-1e-7'::cube AS cube;
+SELECT '-1e-7'::"cube" AS "cube";
    cube    
 -----------
  (-1e-007)
 (1 row)
 
-SELECT '1.0e-7'::cube AS cube;
+SELECT '1.0e-7'::"cube" AS "cube";
    cube   
 ----------
  (1e-007)
 (1 row)
 
-SELECT '-1.0e-7'::cube AS cube;
+SELECT '-1.0e-7'::"cube" AS "cube";
    cube    
 -----------
  (-1e-007)
 (1 row)
 
-SELECT '1e-700'::cube AS cube;
+SELECT '1e-700'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '-1e-700'::cube AS cube;
+SELECT '-1e-700'::"cube" AS "cube";
  cube 
 ------
  (-0)
 (1 row)
 
-SELECT '1234567890123456'::cube AS cube;
+SELECT '1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (1.23456789012346e+015)
 (1 row)
 
-SELECT '+1234567890123456'::cube AS cube;
+SELECT '+1234567890123456'::"cube" AS "cube";
           cube           
 -------------------------
  (1.23456789012346e+015)
 (1 row)
 
-SELECT '-1234567890123456'::cube AS cube;
+SELECT '-1234567890123456'::"cube" AS "cube";
            cube           
 --------------------------
  (-1.23456789012346e+015)
 (1 row)
 
-SELECT '.1234567890123456'::cube AS cube;
+SELECT '.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '+.1234567890123456'::cube AS cube;
+SELECT '+.1234567890123456'::"cube" AS "cube";
         cube         
 ---------------------
  (0.123456789012346)
 (1 row)
 
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '-.1234567890123456'::"cube" AS "cube";
          cube         
 ----------------------
  (-0.123456789012346)
 (1 row)
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '(1,2)'::cube AS cube;
+SELECT '(1,2)'::"cube" AS "cube";
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT '1,2,3,4,5'::cube AS cube;
+SELECT '1,2,3,4,5'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
       cube       
 -----------------
  (1, 2, 3, 4, 5)
 (1 row)
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '(0),(1)'::cube AS cube;
+SELECT '(0),(1)'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '[(0),(0)]'::cube AS cube;
+SELECT '[(0),(0)]'::"cube" AS "cube";
  cube 
 ------
  (0)
 (1 row)
 
-SELECT '[(0),(1)]'::cube AS cube;
+SELECT '[(0),(1)]'::"cube" AS "cube";
   cube   
 ---------
  (0),(1)
 (1 row)
 
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
      cube     
 --------------
  (0, 0, 0, 0)
 (1 row)
 
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
            cube            
 ---------------------------
  (0, 0, 0, 0),(1, 0, 0, 0)
 (1 row)
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
+SELECT ''::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT ''::cube AS cube;
+LINE 1: SELECT ''::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT 'ABC'::cube AS cube;
+SELECT 'ABC'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT 'ABC'::cube AS cube;
+LINE 1: SELECT 'ABC'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "A"
-SELECT '()'::cube AS cube;
+SELECT '()'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '()'::cube AS cube;
+LINE 1: SELECT '()'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[]'::cube AS cube;
+SELECT '[]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[]'::cube AS cube;
+LINE 1: SELECT '[]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[()]'::cube AS cube;
+SELECT '[()]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[()]'::cube AS cube;
+LINE 1: SELECT '[()]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '[(1)]'::cube AS cube;
+SELECT '[(1)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1)]'::cube AS cube;
+LINE 1: SELECT '[(1)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),]'::cube AS cube;
+SELECT '[(1),]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),]'::cube AS cube;
+LINE 1: SELECT '[(1),]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "]"
-SELECT '[(1),2]'::cube AS cube;
+SELECT '[(1),2]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),2]'::cube AS cube;
+LINE 1: SELECT '[(1),2]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "2"
-SELECT '[(1),(2),(3)]'::cube AS cube;
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2),(3)]'::cube AS cube;
+LINE 1: SELECT '[(1),(2),(3)]'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '1,'::cube AS cube;
+SELECT '1,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,'::cube AS cube;
+LINE 1: SELECT '1,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,2,'::cube AS cube;
+SELECT '1,2,'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2,'::cube AS cube;
+LINE 1: SELECT '1,2,'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at end of input
-SELECT '1,,2'::cube AS cube;
+SELECT '1,,2'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '1,,2'::cube AS cube;
+LINE 1: SELECT '1,,2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,)'::cube AS cube;
+SELECT '(1,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,)'::cube AS cube;
+LINE 1: SELECT '(1,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,2,)'::cube AS cube;
+SELECT '(1,2,)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,)'::cube AS cube;
+LINE 1: SELECT '(1,2,)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ")"
-SELECT '(1,,2)'::cube AS cube;
+SELECT '(1,,2)'::"cube" AS "cube";
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,,2)'::cube AS cube;
+LINE 1: SELECT '(1,,2)'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1),(2)],'::cube AS cube;
+LINE 1: SELECT '[(1),(2)],'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2,3),(2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
 ERROR:  bad cube representation
-LINE 1: SELECT '[(1,2),(1,2,3)]'::cube AS cube;
+LINE 1: SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1),(2),'::cube AS cube; -- 2
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
 ERROR:  bad cube representation
-LINE 1: SELECT '(1),(2),'::cube AS cube;
+LINE 1: SELECT '(1),(2),'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ","
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3),(2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2,3),(2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2,3) and (2,3).
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2),(1,2,3)'::cube AS cube;
+LINE 1: SELECT '(1,2),(1,2,3)'::"cube" AS "cube";
                ^
 DETAIL:  Different point dimensions in (1,2) and (1,2,3).
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)ab'::cube AS cube;
+LINE 1: SELECT '(1,2,3)ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2,3)a'::cube AS cube; -- 5
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2,3)a'::cube AS cube;
+LINE 1: SELECT '(1,2,3)a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '(1,2)('::cube AS cube; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
 ERROR:  bad cube representation
-LINE 1: SELECT '(1,2)('::cube AS cube;
+LINE 1: SELECT '(1,2)('::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "("
-SELECT '1,2ab'::cube AS cube; -- 6
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2ab'::cube AS cube;
+LINE 1: SELECT '1,2ab'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1 e7'::cube AS cube; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
 ERROR:  bad cube representation
-LINE 1: SELECT '1 e7'::cube AS cube;
+LINE 1: SELECT '1 e7'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "e"
-SELECT '1,2a'::cube AS cube; -- 7
+SELECT '1,2a'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1,2a'::cube AS cube;
+LINE 1: SELECT '1,2a'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near "a"
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 ERROR:  bad cube representation
-LINE 1: SELECT '1..2'::cube AS cube;
+LINE 1: SELECT '1..2'::"cube" AS "cube";
                ^
 DETAIL:  syntax error at or near ".2"
 --
 -- Testing building cubes from float8 values
 --
-SELECT cube(0::float8);
+SELECT "cube"(0::float8);
  cube 
 ------
  (0)
 (1 row)
 
-SELECT cube(1::float8);
+SELECT "cube"(1::float8);
  cube 
 ------
  (1)
 (1 row)
 
-SELECT cube(1,2);
+SELECT "cube"(1,2);
   cube   
 ---------
  (1),(2)
 (1 row)
 
-SELECT cube(cube(1,2),3);
+SELECT "cube"("cube"(1,2),3);
      cube      
 ---------------
  (1, 3),(2, 3)
 (1 row)
 
-SELECT cube(cube(1,2),3,4);
+SELECT "cube"("cube"(1,2),3,4);
      cube      
 ---------------
  (1, 3),(2, 4)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 5)
 (1 row)
 
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
         cube         
 ---------------------
  (1, 3, 5),(2, 4, 6)
 (1 row)
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
  cube 
 ------
  (0)
 (1 row)
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
         cube         
 ---------------------
  (0, 1, 2),(3, 4, 5)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
 ERROR:  UR and LL arrays must be of same length
-SELECT cube(NULL::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
  cube 
 ------
  
 (1 row)
 
-SELECT cube('{0,1,2}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
         cube_subset        
 ---------------------------
  (5, 3, 1, 1),(8, 7, 6, 6)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
  cube_subset  
 --------------
  (5, 3, 1, 1)
 (1 row)
 
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 ERROR:  Index out of bounds
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
   cube  
 --------
  (1, 2)
 (1 row)
 
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
    cube    
 -----------
  (0, 1, 2)
 (1 row)
 
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
      cube     
 --------------
  (5, 6, 7, 8)
 (1 row)
 
-SELECT cube(1.37); -- cube_f8
+SELECT "cube"(1.37); -- cube_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
   cube  
 --------
  (1.37)
 (1 row)
 
-SELECT cube(cube(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
   cube   
 ---------
  (1, 42)
 (1 row)
 
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(1, 24)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 42)
 (1 row)
 
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
       cube       
 -----------------
  (1, 42),(2, 24)
@@ -555,12 +555,12 @@ SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
 DETAIL:  A cube cannot have more than 100 dimensions.
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 ERROR:  bad cube representation
 LINE 1: select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0...
                ^
@@ -570,37 +570,37 @@ DETAIL:  A cube cannot have more than 100 dimensions.
 --
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -609,97 +609,97 @@ SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1'::cube   < '2'::cube AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1,1'::cube > '1,2'::cube AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,1'::cube < '1,2'::cube AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
  bool 
 ------
  f
@@ -707,235 +707,235 @@ SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
  bool 
 ------
  f
 (1 row)
 
--- "contains" (the left operand is the cube that entirely encloses the
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
  bool 
 ------
  t
 (1 row)
 
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
 (1 row)
 
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
  bool 
 ------
  f
@@ -943,77 +943,77 @@ SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
  cube_distance 
 ---------------
              4
 (1 row)
 
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
  cube_distance 
 ---------------
            0.5
 (1 row)
 
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
  cube_distance 
 ---------------
              0
 (1 row)
 
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
  cube_distance 
 ---------------
            190
 (1 row)
 
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
   cube_distance   
 ------------------
  140.762210837994
 (1 row)
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
+SELECT "cube"('(1,1.2)'::text);
    cube   
 ----------
  (1, 1.2)
 (1 row)
 
-SELECT cube(NULL);
+SELECT "cube"(NULL);
  cube 
 ------
  
 (1 row)
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
+SELECT cube_dim('(0)'::"cube");
  cube_dim 
 ----------
         1
 (1 row)
 
-SELECT cube_dim('(0,0)'::cube);
+SELECT cube_dim('(0,0)'::"cube");
  cube_dim 
 ----------
         2
 (1 row)
 
-SELECT cube_dim('(0,0,0)'::cube);
+SELECT cube_dim('(0,0,0)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
  cube_dim 
 ----------
         3
 (1 row)
 
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
  cube_dim 
 ----------
         5
@@ -1021,55 +1021,55 @@ SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
 
 -- Test of cube_ll_coord function (retrieves LL coodinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ll_coord 
 ---------------
             -1
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ll_coord 
 ---------------
             -2
 (1 row)
 
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
  cube_ll_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
  cube_ll_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
  cube_ll_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
  cube_ll_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
  cube_ll_coord 
 ---------------
              0
@@ -1077,55 +1077,55 @@ SELECT cube_ll_coord('(42,137)'::cube, 3);
 
 -- Test of cube_ur_coord function (retrieves UR coodinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
  cube_ur_coord 
 ---------------
              1
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
  cube_ur_coord 
 ---------------
              2
 (1 row)
 
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
  cube_ur_coord 
 ---------------
             42
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
  cube_ur_coord 
 ---------------
            137
 (1 row)
 
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
  cube_ur_coord 
 ---------------
              0
@@ -1133,37 +1133,37 @@ SELECT cube_ur_coord('(42,137)'::cube, 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
+SELECT cube_is_point('(0)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
  cube_is_point 
 ---------------
  t
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
  cube_is_point 
 ---------------
  f
 (1 row)
 
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
  cube_is_point 
 ---------------
  f
@@ -1171,121 +1171,121 @@ SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 0, 2);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
  cube_enlarge 
 --------------
  (-2),(2)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, 1, 2);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-1, -1),(1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
          cube_enlarge          
 -------------------------------
  (-3, -1, -1, -1),(3, 1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(0)'::cube, -1, 2);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
  cube_enlarge 
 --------------
  (0)
 (1 row)
 
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
  cube_enlarge 
 --------------
  (-1),(1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
       cube_enlarge      
 ------------------------
  (-1, -1, -1),(1, 1, 1)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
   cube_enlarge   
 -----------------
  (-4, -3),(3, 8)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
    cube_enlarge   
 ------------------
  (-6, -5),(5, 10)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
   cube_enlarge   
 -----------------
  (-2, -1),(1, 6)
 (1 row)
 
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
     cube_enlarge     
 ---------------------
  (-0.5, 1),(-0.5, 4)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
 (1 row)
 
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
  cube_enlarge 
 --------------
  (42, 0, 0)
@@ -1293,31 +1293,31 @@ SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
       cube_union      
 ----------------------
  (1, 2, 0),(8, 9, 10)
 (1 row)
 
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
         cube_union         
 ---------------------------
  (1, 2, 0, 0),(4, 2, 0, 0)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
   cube_union   
 ---------------
  (1, 2),(4, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
  cube_union 
 ------------
  (1, 2)
 (1 row)
 
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
  cube_union 
 ------------
  (1, 2, 0)
@@ -1325,43 +1325,43 @@ SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
    cube_inter    
 -----------------
  (3, 4),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
   cube_inter   
 ---------------
  (3, 4),(6, 5)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
     cube_inter     
 -------------------
  (13, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
     cube_inter    
 ------------------
  (3, 14),(10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
  cube_inter 
 ------------
  (10, 11)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
  cube_inter 
 ------------
  (1, 2, 3)
 (1 row)
 
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
      cube_inter      
 ---------------------
  (5, 6, 3),(1, 2, 3)
@@ -1369,13 +1369,13 @@ SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
  cube_size 
 -----------
         88
 (1 row)
 
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(42,137)'::"cube");
  cube_size 
 -----------
          0
@@ -1383,7 +1383,7 @@ SELECT cube_size('(42,137)'::cube);
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 \copy test_cube from 'data/test_cube.data'
 CREATE INDEX test_cube_ix ON test_cube USING gist (c);
 SELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' ORDER BY c;
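
The mechanical quoting above follows from the keyword change described earlier: with CUBE promoted to col_name_keyword, an unquoted `cube` is no longer accepted as a type or function name, though it remains usable as a column alias. A minimal sketch of the distinction, assuming the patched grammar (table and column names here are hypothetical):

```sql
-- Under the patched grammar the type name must be double-quoted,
-- since col_name_keywords cannot appear as type or function names:
SELECT '(1,2),(3,4)'::"cube";

-- while the unquoted keyword is reserved for the grouping construct:
SELECT a, b, count(*) FROM t GROUP BY CUBE(a, b);
```
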
diff --git a/contrib/cube/sql/cube.sql b/contrib/cube/sql/cube.sql
index d58974c..da80472 100644
--- a/contrib/cube/sql/cube.sql
+++ b/contrib/cube/sql/cube.sql
@@ -2,141 +2,141 @@
 --  Test cube datatype
 --
 
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 
 --
 -- testing the input and output functions
 --
 
 -- Any number (a one-dimensional point)
-SELECT '1'::cube AS cube;
-SELECT '-1'::cube AS cube;
-SELECT '1.'::cube AS cube;
-SELECT '-1.'::cube AS cube;
-SELECT '.1'::cube AS cube;
-SELECT '-.1'::cube AS cube;
-SELECT '1.0'::cube AS cube;
-SELECT '-1.0'::cube AS cube;
-SELECT '1e27'::cube AS cube;
-SELECT '-1e27'::cube AS cube;
-SELECT '1.0e27'::cube AS cube;
-SELECT '-1.0e27'::cube AS cube;
-SELECT '1e+27'::cube AS cube;
-SELECT '-1e+27'::cube AS cube;
-SELECT '1.0e+27'::cube AS cube;
-SELECT '-1.0e+27'::cube AS cube;
-SELECT '1e-7'::cube AS cube;
-SELECT '-1e-7'::cube AS cube;
-SELECT '1.0e-7'::cube AS cube;
-SELECT '-1.0e-7'::cube AS cube;
-SELECT '1e-700'::cube AS cube;
-SELECT '-1e-700'::cube AS cube;
-SELECT '1234567890123456'::cube AS cube;
-SELECT '+1234567890123456'::cube AS cube;
-SELECT '-1234567890123456'::cube AS cube;
-SELECT '.1234567890123456'::cube AS cube;
-SELECT '+.1234567890123456'::cube AS cube;
-SELECT '-.1234567890123456'::cube AS cube;
+SELECT '1'::"cube" AS "cube";
+SELECT '-1'::"cube" AS "cube";
+SELECT '1.'::"cube" AS "cube";
+SELECT '-1.'::"cube" AS "cube";
+SELECT '.1'::"cube" AS "cube";
+SELECT '-.1'::"cube" AS "cube";
+SELECT '1.0'::"cube" AS "cube";
+SELECT '-1.0'::"cube" AS "cube";
+SELECT '1e27'::"cube" AS "cube";
+SELECT '-1e27'::"cube" AS "cube";
+SELECT '1.0e27'::"cube" AS "cube";
+SELECT '-1.0e27'::"cube" AS "cube";
+SELECT '1e+27'::"cube" AS "cube";
+SELECT '-1e+27'::"cube" AS "cube";
+SELECT '1.0e+27'::"cube" AS "cube";
+SELECT '-1.0e+27'::"cube" AS "cube";
+SELECT '1e-7'::"cube" AS "cube";
+SELECT '-1e-7'::"cube" AS "cube";
+SELECT '1.0e-7'::"cube" AS "cube";
+SELECT '-1.0e-7'::"cube" AS "cube";
+SELECT '1e-700'::"cube" AS "cube";
+SELECT '-1e-700'::"cube" AS "cube";
+SELECT '1234567890123456'::"cube" AS "cube";
+SELECT '+1234567890123456'::"cube" AS "cube";
+SELECT '-1234567890123456'::"cube" AS "cube";
+SELECT '.1234567890123456'::"cube" AS "cube";
+SELECT '+.1234567890123456'::"cube" AS "cube";
+SELECT '-.1234567890123456'::"cube" AS "cube";
 
 -- simple lists (points)
-SELECT '1,2'::cube AS cube;
-SELECT '(1,2)'::cube AS cube;
-SELECT '1,2,3,4,5'::cube AS cube;
-SELECT '(1,2,3,4,5)'::cube AS cube;
+SELECT '1,2'::"cube" AS "cube";
+SELECT '(1,2)'::"cube" AS "cube";
+SELECT '1,2,3,4,5'::"cube" AS "cube";
+SELECT '(1,2,3,4,5)'::"cube" AS "cube";
 
 -- double lists (cubes)
-SELECT '(0),(0)'::cube AS cube;
-SELECT '(0),(1)'::cube AS cube;
-SELECT '[(0),(0)]'::cube AS cube;
-SELECT '[(0),(1)]'::cube AS cube;
-SELECT '(0,0,0,0),(0,0,0,0)'::cube AS cube;
-SELECT '(0,0,0,0),(1,0,0,0)'::cube AS cube;
-SELECT '[(0,0,0,0),(0,0,0,0)]'::cube AS cube;
-SELECT '[(0,0,0,0),(1,0,0,0)]'::cube AS cube;
+SELECT '(0),(0)'::"cube" AS "cube";
+SELECT '(0),(1)'::"cube" AS "cube";
+SELECT '[(0),(0)]'::"cube" AS "cube";
+SELECT '[(0),(1)]'::"cube" AS "cube";
+SELECT '(0,0,0,0),(0,0,0,0)'::"cube" AS "cube";
+SELECT '(0,0,0,0),(1,0,0,0)'::"cube" AS "cube";
+SELECT '[(0,0,0,0),(0,0,0,0)]'::"cube" AS "cube";
+SELECT '[(0,0,0,0),(1,0,0,0)]'::"cube" AS "cube";
 
 -- invalid input: parse errors
-SELECT ''::cube AS cube;
-SELECT 'ABC'::cube AS cube;
-SELECT '()'::cube AS cube;
-SELECT '[]'::cube AS cube;
-SELECT '[()]'::cube AS cube;
-SELECT '[(1)]'::cube AS cube;
-SELECT '[(1),]'::cube AS cube;
-SELECT '[(1),2]'::cube AS cube;
-SELECT '[(1),(2),(3)]'::cube AS cube;
-SELECT '1,'::cube AS cube;
-SELECT '1,2,'::cube AS cube;
-SELECT '1,,2'::cube AS cube;
-SELECT '(1,)'::cube AS cube;
-SELECT '(1,2,)'::cube AS cube;
-SELECT '(1,,2)'::cube AS cube;
+SELECT ''::"cube" AS "cube";
+SELECT 'ABC'::"cube" AS "cube";
+SELECT '()'::"cube" AS "cube";
+SELECT '[]'::"cube" AS "cube";
+SELECT '[()]'::"cube" AS "cube";
+SELECT '[(1)]'::"cube" AS "cube";
+SELECT '[(1),]'::"cube" AS "cube";
+SELECT '[(1),2]'::"cube" AS "cube";
+SELECT '[(1),(2),(3)]'::"cube" AS "cube";
+SELECT '1,'::"cube" AS "cube";
+SELECT '1,2,'::"cube" AS "cube";
+SELECT '1,,2'::"cube" AS "cube";
+SELECT '(1,)'::"cube" AS "cube";
+SELECT '(1,2,)'::"cube" AS "cube";
+SELECT '(1,,2)'::"cube" AS "cube";
 
 -- invalid input: semantic errors and trailing garbage
-SELECT '[(1),(2)],'::cube AS cube; -- 0
-SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
-SELECT '[(1,2),(1,2,3)]'::cube AS cube; -- 1
-SELECT '(1),(2),'::cube AS cube; -- 2
-SELECT '(1,2,3),(2,3)'::cube AS cube; -- 3
-SELECT '(1,2),(1,2,3)'::cube AS cube; -- 3
-SELECT '(1,2,3)ab'::cube AS cube; -- 4
-SELECT '(1,2,3)a'::cube AS cube; -- 5
-SELECT '(1,2)('::cube AS cube; -- 5
-SELECT '1,2ab'::cube AS cube; -- 6
-SELECT '1 e7'::cube AS cube; -- 6
-SELECT '1,2a'::cube AS cube; -- 7
-SELECT '1..2'::cube AS cube; -- 7
+SELECT '[(1),(2)],'::"cube" AS "cube"; -- 0
+SELECT '[(1,2,3),(2,3)]'::"cube" AS "cube"; -- 1
+SELECT '[(1,2),(1,2,3)]'::"cube" AS "cube"; -- 1
+SELECT '(1),(2),'::"cube" AS "cube"; -- 2
+SELECT '(1,2,3),(2,3)'::"cube" AS "cube"; -- 3
+SELECT '(1,2),(1,2,3)'::"cube" AS "cube"; -- 3
+SELECT '(1,2,3)ab'::"cube" AS "cube"; -- 4
+SELECT '(1,2,3)a'::"cube" AS "cube"; -- 5
+SELECT '(1,2)('::"cube" AS "cube"; -- 5
+SELECT '1,2ab'::"cube" AS "cube"; -- 6
+SELECT '1 e7'::"cube" AS "cube"; -- 6
+SELECT '1,2a'::"cube" AS "cube"; -- 7
+SELECT '1..2'::"cube" AS "cube"; -- 7
 
 --
 -- Testing building cubes from float8 values
 --
 
-SELECT cube(0::float8);
-SELECT cube(1::float8);
-SELECT cube(1,2);
-SELECT cube(cube(1,2),3);
-SELECT cube(cube(1,2),3,4);
-SELECT cube(cube(cube(1,2),3,4),5);
-SELECT cube(cube(cube(1,2),3,4),5,6);
+SELECT "cube"(0::float8);
+SELECT "cube"(1::float8);
+SELECT "cube"(1,2);
+SELECT "cube"("cube"(1,2),3);
+SELECT "cube"("cube"(1,2),3,4);
+SELECT "cube"("cube"("cube"(1,2),3,4),5);
+SELECT "cube"("cube"("cube"(1,2),3,4),5,6);
 
 --
--- Test that the text -> cube cast was installed.
+-- Test that the text -> "cube" cast was installed.
 --
 
-SELECT '(0)'::text::cube;
+SELECT '(0)'::text::"cube";
 
 --
--- Test the float[] -> cube cast
+-- Test the float[] -> "cube" cast
 --
-SELECT cube('{0,1,2}'::float[], '{3,4,5}'::float[]);
-SELECT cube('{0,1,2}'::float[], '{3}'::float[]);
-SELECT cube(NULL::float[], '{3}'::float[]);
-SELECT cube('{0,1,2}'::float[]);
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
-SELECT cube_subset(cube('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
-SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[4,0]);
-SELECT cube_subset(cube('(6,7,8),(6,7,8)'), ARRAY[4,0]);
+SELECT "cube"('{0,1,2}'::float[], '{3,4,5}'::float[]);
+SELECT "cube"('{0,1,2}'::float[], '{3}'::float[]);
+SELECT "cube"(NULL::float[], '{3}'::float[]);
+SELECT "cube"('{0,1,2}'::float[]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(1,3,5)'), ARRAY[3,2,1,1]);
+SELECT cube_subset("cube"('(1,3,5),(6,7,8)'), ARRAY[4,0]);
+SELECT cube_subset("cube"('(6,7,8),(6,7,8)'), ARRAY[4,0]);
 
 --
 -- Test point processing
 --
-SELECT cube('(1,2),(1,2)'); -- cube_in
-SELECT cube('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
-SELECT cube('{5,6,7,8}'::float[]); -- cube_a_f8
-SELECT cube(1.37); -- cube_f8
-SELECT cube(1.37, 1.37); -- cube_f8_f8
-SELECT cube(cube(1,1), 42); -- cube_c_f8
-SELECT cube(cube(1,2), 42); -- cube_c_f8
-SELECT cube(cube(1,1), 42, 42); -- cube_c_f8_f8
-SELECT cube(cube(1,1), 42, 24); -- cube_c_f8_f8
-SELECT cube(cube(1,2), 42, 42); -- cube_c_f8_f8
-SELECT cube(cube(1,2), 42, 24); -- cube_c_f8_f8
+SELECT "cube"('(1,2),(1,2)'); -- cube_in
+SELECT "cube"('{0,1,2}'::float[], '{0,1,2}'::float[]); -- cube_a_f8_f8
+SELECT "cube"('{5,6,7,8}'::float[]); -- cube_a_f8
+SELECT "cube"(1.37); -- cube_f8
+SELECT "cube"(1.37, 1.37); -- cube_f8_f8
+SELECT "cube"("cube"(1,1), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,2), 42); -- cube_c_f8
+SELECT "cube"("cube"(1,1), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,1), 42, 24); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 42); -- cube_c_f8_f8
+SELECT "cube"("cube"(1,2), 42, 24); -- cube_c_f8_f8
 
 --
 -- Testing limit of CUBE_MAX_DIM dimensions check in cube_in.
 --
 
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
-select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::cube;
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
+select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)'::"cube";
 
 --
 -- testing the operators
@@ -144,190 +144,190 @@ select '(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
 
 -- equality/inequality:
 --
-SELECT '24, 33.20'::cube    =  '24, 33.20'::cube AS bool;
-SELECT '24, 33.20'::cube    != '24, 33.20'::cube AS bool;
-SELECT '24, 33.20'::cube    =  '24, 33.21'::cube AS bool;
-SELECT '24, 33.20'::cube    != '24, 33.21'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube  =  '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.20'::"cube" AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.20'::"cube" AS bool;
+SELECT '24, 33.20'::"cube"    =  '24, 33.21'::"cube" AS bool;
+SELECT '24, 33.20'::"cube"    != '24, 33.21'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"  =  '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
 
 -- "lower than" / "greater than"
 -- (these operators are not useful for anything but ordering)
 --
-SELECT '1'::cube   > '2'::cube AS bool;
-SELECT '1'::cube   < '2'::cube AS bool;
-SELECT '1,1'::cube > '1,2'::cube AS bool;
-SELECT '1,1'::cube < '1,2'::cube AS bool;
-
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,1)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,1),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             > '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0),(3,1)'::cube             < '(2,0,0,0,0),(3,1,0,0,0)'::cube AS bool;
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube > '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,0),(3,1,0,0,1)'::cube < '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,1),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube > '(2,0),(3,1)'::cube AS bool;
-SELECT '(2,0,0,0,0),(3,1,0,0,0)'::cube < '(2,0),(3,1)'::cube AS bool;
+SELECT '1'::"cube"   > '2'::"cube" AS bool;
+SELECT '1'::"cube"   < '2'::"cube" AS bool;
+SELECT '1,1'::"cube" > '1,2'::"cube" AS bool;
+SELECT '1,1'::"cube" < '1,2'::"cube" AS bool;
+
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,1)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,1),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             > '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0),(3,1)'::"cube"             < '(2,0,0,0,0),(3,1,0,0,0)'::"cube" AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,1)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,1),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" > '(2,0),(3,1)'::"cube" AS bool;
+SELECT '(2,0,0,0,0),(3,1,0,0,0)'::"cube" < '(2,0),(3,1)'::"cube" AS bool;
 
 
 -- "overlap"
 --
-SELECT '1'::cube && '1'::cube AS bool;
-SELECT '1'::cube && '2'::cube AS bool;
+SELECT '1'::"cube" && '1'::"cube" AS bool;
+SELECT '1'::"cube" && '2'::"cube" AS bool;
 
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '0'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '1,1,1'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1,1),(2,2,2)]'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(1,1),(2,2)]'::cube AS bool;
-SELECT '[(-1,-1,-1),(1,1,1)]'::cube && '[(2,1,1),(2,2,2)]'::cube AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '0'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '1,1,1'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1,1),(2,2,2)]'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(1,1),(2,2)]'::"cube" AS bool;
+SELECT '[(-1,-1,-1),(1,1,1)]'::"cube" && '[(2,1,1),(2,2,2)]'::"cube" AS bool;
 
 
--- "contained in" (the left operand is the cube entirely enclosed by
+-- "contained in" (the left operand is the "cube" entirely enclosed by
 -- the right operand):
 --
-SELECT '0'::cube                 <@ '0'::cube                        AS bool;
-SELECT '0,0,0'::cube             <@ '0,0,0'::cube                    AS bool;
-SELECT '0,0'::cube               <@ '0,0,1'::cube                    AS bool;
-SELECT '0,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
-SELECT '1,0,0'::cube             <@ '0,0,1'::cube                    AS bool;
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(1,0,0),(0,0,1)'::cube          AS bool;
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1),(1,1,1)'::cube       AS bool;
-SELECT '(1,0,0),(0,0,1)'::cube   <@ '(-1,-1,-1,-1),(1,1,1,1)'::cube  AS bool;
-SELECT '0'::cube                 <@ '(-1),(1)'::cube                 AS bool;
-SELECT '1'::cube                 <@ '(-1),(1)'::cube                 AS bool;
-SELECT '-1'::cube                <@ '(-1),(1)'::cube                 AS bool;
-SELECT '(-1),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
-SELECT '(-1),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
-SELECT '(-2),(1)'::cube          <@ '(-1),(1)'::cube                 AS bool;
-SELECT '(-2),(1)'::cube          <@ '(-1,-1),(1,1)'::cube            AS bool;
-
-
--- "contains" (the left operand is the cube that entirely encloses the
+SELECT '0'::"cube"                 <@ '0'::"cube"                        AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,0'::"cube"                    AS bool;
+SELECT '0,0'::"cube"               <@ '0,0,1'::"cube"                    AS bool;
+SELECT '0,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
+SELECT '1,0,0'::"cube"             <@ '0,0,1'::"cube"                    AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(1,0,0),(0,0,1)'::"cube"          AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1),(1,1,1)'::"cube"       AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"   <@ '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  AS bool;
+SELECT '0'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '1'::"cube"                 <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '-1'::"cube"                <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '(-1),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1),(1)'::"cube"                 AS bool;
+SELECT '(-2),(1)'::"cube"          <@ '(-1,-1),(1,1)'::"cube"            AS bool;
+
+
+-- "contains" (the left operand is the "cube" that entirely encloses the
 -- right operand)
 --
-SELECT '0'::cube                        @> '0'::cube                 AS bool;
-SELECT '0,0,0'::cube                    @> '0,0,0'::cube             AS bool;
-SELECT '0,0,1'::cube                    @> '0,0'::cube               AS bool;
-SELECT '0,0,1'::cube                    @> '0,0,0'::cube             AS bool;
-SELECT '0,0,1'::cube                    @> '1,0,0'::cube             AS bool;
-SELECT '(1,0,0),(0,0,1)'::cube          @> '(1,0,0),(0,0,1)'::cube   AS bool;
-SELECT '(-1,-1,-1),(1,1,1)'::cube       @> '(1,0,0),(0,0,1)'::cube   AS bool;
-SELECT '(-1,-1,-1,-1),(1,1,1,1)'::cube  @> '(1,0,0),(0,0,1)'::cube   AS bool;
-SELECT '(-1),(1)'::cube                 @> '0'::cube                 AS bool;
-SELECT '(-1),(1)'::cube                 @> '1'::cube                 AS bool;
-SELECT '(-1),(1)'::cube                 @> '-1'::cube                AS bool;
-SELECT '(-1),(1)'::cube                 @> '(-1),(1)'::cube          AS bool;
-SELECT '(-1,-1),(1,1)'::cube            @> '(-1),(1)'::cube          AS bool;
-SELECT '(-1),(1)'::cube                 @> '(-2),(1)'::cube          AS bool;
-SELECT '(-1,-1),(1,1)'::cube            @> '(-2),(1)'::cube          AS bool;
+SELECT '0'::"cube"                        @> '0'::"cube"                 AS bool;
+SELECT '0,0,0'::"cube"                    @> '0,0,0'::"cube"             AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0'::"cube"               AS bool;
+SELECT '0,0,1'::"cube"                    @> '0,0,0'::"cube"             AS bool;
+SELECT '0,0,1'::"cube"                    @> '1,0,0'::"cube"             AS bool;
+SELECT '(1,0,0),(0,0,1)'::"cube"          @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
+SELECT '(-1,-1,-1),(1,1,1)'::"cube"       @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
+SELECT '(-1,-1,-1,-1),(1,1,1,1)'::"cube"  @> '(1,0,0),(0,0,1)'::"cube"   AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '0'::"cube"                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '1'::"cube"                 AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '-1'::"cube"                AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-1),(1)'::"cube"          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-1),(1)'::"cube"          AS bool;
+SELECT '(-1),(1)'::"cube"                 @> '(-2),(1)'::"cube"          AS bool;
+SELECT '(-1,-1),(1,1)'::"cube"            @> '(-2),(1)'::"cube"          AS bool;
 
 -- Test of distance function
 --
-SELECT cube_distance('(0)'::cube,'(2,2,2,2)'::cube);
-SELECT cube_distance('(0)'::cube,'(.3,.4)'::cube);
-SELECT cube_distance('(2,3,4)'::cube,'(2,3,4)'::cube);
-SELECT cube_distance('(42,42,42,42)'::cube,'(137,137,137,137)'::cube);
-SELECT cube_distance('(42,42,42)'::cube,'(137,137)'::cube);
+SELECT cube_distance('(0)'::"cube",'(2,2,2,2)'::"cube");
+SELECT cube_distance('(0)'::"cube",'(.3,.4)'::"cube");
+SELECT cube_distance('(2,3,4)'::"cube",'(2,3,4)'::"cube");
+SELECT cube_distance('(42,42,42,42)'::"cube",'(137,137,137,137)'::"cube");
+SELECT cube_distance('(42,42,42)'::"cube",'(137,137)'::"cube");
 
--- Test of cube function (text to cube)
+-- Test of "cube" function (text to "cube")
 --
-SELECT cube('(1,1.2)'::text);
-SELECT cube(NULL);
+SELECT "cube"('(1,1.2)'::text);
+SELECT "cube"(NULL);
 
--- Test of cube_dim function (dimensions stored in cube)
+-- Test of cube_dim function (dimensions stored in "cube")
 --
-SELECT cube_dim('(0)'::cube);
-SELECT cube_dim('(0,0)'::cube);
-SELECT cube_dim('(0,0,0)'::cube);
-SELECT cube_dim('(42,42,42),(42,42,42)'::cube);
-SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::cube);
+SELECT cube_dim('(0)'::"cube");
+SELECT cube_dim('(0,0)'::"cube");
+SELECT cube_dim('(0,0,0)'::"cube");
+SELECT cube_dim('(42,42,42),(42,42,42)'::"cube");
+SELECT cube_dim('(4,8,15,16,23),(4,8,15,16,23)'::"cube");
 
 -- Test of cube_ll_coord function (retrieves LL coordinate values)
 --
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 1);
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 2);
-SELECT cube_ll_coord('(-1,1),(2,-2)'::cube, 3);
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 1);
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 2);
-SELECT cube_ll_coord('(1,2),(1,2)'::cube, 3);
-SELECT cube_ll_coord('(42,137)'::cube, 1);
-SELECT cube_ll_coord('(42,137)'::cube, 2);
-SELECT cube_ll_coord('(42,137)'::cube, 3);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 1);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 2);
+SELECT cube_ll_coord('(-1,1),(2,-2)'::"cube", 3);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 1);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 2);
+SELECT cube_ll_coord('(1,2),(1,2)'::"cube", 3);
+SELECT cube_ll_coord('(42,137)'::"cube", 1);
+SELECT cube_ll_coord('(42,137)'::"cube", 2);
+SELECT cube_ll_coord('(42,137)'::"cube", 3);
 
 -- Test of cube_ur_coord function (retrieves UR coordinate values)
 --
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 1);
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 2);
-SELECT cube_ur_coord('(-1,1),(2,-2)'::cube, 3);
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 1);
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 2);
-SELECT cube_ur_coord('(1,2),(1,2)'::cube, 3);
-SELECT cube_ur_coord('(42,137)'::cube, 1);
-SELECT cube_ur_coord('(42,137)'::cube, 2);
-SELECT cube_ur_coord('(42,137)'::cube, 3);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 1);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 2);
+SELECT cube_ur_coord('(-1,1),(2,-2)'::"cube", 3);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 1);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 2);
+SELECT cube_ur_coord('(1,2),(1,2)'::"cube", 3);
+SELECT cube_ur_coord('(42,137)'::"cube", 1);
+SELECT cube_ur_coord('(42,137)'::"cube", 2);
+SELECT cube_ur_coord('(42,137)'::"cube", 3);
 
 -- Test of cube_is_point
 --
-SELECT cube_is_point('(0)'::cube);
-SELECT cube_is_point('(0,1,2)'::cube);
-SELECT cube_is_point('(0,1,2),(0,1,2)'::cube);
-SELECT cube_is_point('(0,1,2),(-1,1,2)'::cube);
-SELECT cube_is_point('(0,1,2),(0,-1,2)'::cube);
-SELECT cube_is_point('(0,1,2),(0,1,-2)'::cube);
+SELECT cube_is_point('(0)'::"cube");
+SELECT cube_is_point('(0,1,2)'::"cube");
+SELECT cube_is_point('(0,1,2),(0,1,2)'::"cube");
+SELECT cube_is_point('(0,1,2),(-1,1,2)'::"cube");
+SELECT cube_is_point('(0,1,2),(0,-1,2)'::"cube");
+SELECT cube_is_point('(0,1,2),(0,1,-2)'::"cube");
 
 -- Test of cube_enlarge (enlarging and shrinking cubes)
 --
-SELECT cube_enlarge('(0)'::cube, 0, 0);
-SELECT cube_enlarge('(0)'::cube, 0, 1);
-SELECT cube_enlarge('(0)'::cube, 0, 2);
-SELECT cube_enlarge('(2),(-2)'::cube, 0, 4);
-SELECT cube_enlarge('(0)'::cube, 1, 0);
-SELECT cube_enlarge('(0)'::cube, 1, 1);
-SELECT cube_enlarge('(0)'::cube, 1, 2);
-SELECT cube_enlarge('(2),(-2)'::cube, 1, 4);
-SELECT cube_enlarge('(0)'::cube, -1, 0);
-SELECT cube_enlarge('(0)'::cube, -1, 1);
-SELECT cube_enlarge('(0)'::cube, -1, 2);
-SELECT cube_enlarge('(2),(-2)'::cube, -1, 4);
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 0);
-SELECT cube_enlarge('(0,0,0)'::cube, 1, 2);
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 1, 2);
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, 3, 2);
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -1, 2);
-SELECT cube_enlarge('(2,-2),(-3,7)'::cube, -3, 2);
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -23, 5);
-SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::cube, -24, 5);
+SELECT cube_enlarge('(0)'::"cube", 0, 0);
+SELECT cube_enlarge('(0)'::"cube", 0, 1);
+SELECT cube_enlarge('(0)'::"cube", 0, 2);
+SELECT cube_enlarge('(2),(-2)'::"cube", 0, 4);
+SELECT cube_enlarge('(0)'::"cube", 1, 0);
+SELECT cube_enlarge('(0)'::"cube", 1, 1);
+SELECT cube_enlarge('(0)'::"cube", 1, 2);
+SELECT cube_enlarge('(2),(-2)'::"cube", 1, 4);
+SELECT cube_enlarge('(0)'::"cube", -1, 0);
+SELECT cube_enlarge('(0)'::"cube", -1, 1);
+SELECT cube_enlarge('(0)'::"cube", -1, 2);
+SELECT cube_enlarge('(2),(-2)'::"cube", -1, 4);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 0);
+SELECT cube_enlarge('(0,0,0)'::"cube", 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", 3, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -1, 2);
+SELECT cube_enlarge('(2,-2),(-3,7)'::"cube", -3, 2);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -23, 5);
+SELECT cube_enlarge('(42,-23,-23),(42,23,23)'::"cube", -24, 5);
 
 -- Test of cube_union (MBR for two cubes)
 --
-SELECT cube_union('(1,2),(3,4)'::cube, '(5,6,7),(8,9,10)'::cube);
-SELECT cube_union('(1,2)'::cube, '(4,2,0,0)'::cube);
-SELECT cube_union('(1,2),(1,2)'::cube, '(4,2),(4,2)'::cube);
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2),(1,2)'::cube);
-SELECT cube_union('(1,2),(1,2)'::cube, '(1,2,0),(1,2,0)'::cube);
+SELECT cube_union('(1,2),(3,4)'::"cube", '(5,6,7),(8,9,10)'::"cube");
+SELECT cube_union('(1,2)'::"cube", '(4,2,0,0)'::"cube");
+SELECT cube_union('(1,2),(1,2)'::"cube", '(4,2),(4,2)'::"cube");
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2),(1,2)'::"cube");
+SELECT cube_union('(1,2),(1,2)'::"cube", '(1,2,0),(1,2,0)'::"cube");
 
 -- Test of cube_inter
 --
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (16,15)'::cube); -- intersects
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,4), (6,5)'::cube); -- includes
-SELECT cube_inter('(1,2),(10,11)'::cube, '(13,14), (16,15)'::cube); -- no intersection
-SELECT cube_inter('(1,2),(10,11)'::cube, '(3,14), (16,15)'::cube); -- no intersection, but one dimension intersects
-SELECT cube_inter('(1,2),(10,11)'::cube, '(10,11), (16,15)'::cube); -- point intersection
-SELECT cube_inter('(1,2,3)'::cube, '(1,2,3)'::cube); -- point args
-SELECT cube_inter('(1,2,3)'::cube, '(5,6,3)'::cube); -- point args
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (16,15)'::"cube"); -- intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,4), (6,5)'::"cube"); -- includes
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(13,14), (16,15)'::"cube"); -- no intersection
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(3,14), (16,15)'::"cube"); -- no intersection, but one dimension intersects
+SELECT cube_inter('(1,2),(10,11)'::"cube", '(10,11), (16,15)'::"cube"); -- point intersection
+SELECT cube_inter('(1,2,3)'::"cube", '(1,2,3)'::"cube"); -- point args
+SELECT cube_inter('(1,2,3)'::"cube", '(5,6,3)'::"cube"); -- point args
 
 -- Test of cube_size
 --
-SELECT cube_size('(4,8),(15,16)'::cube);
-SELECT cube_size('(42,137)'::cube);
+SELECT cube_size('(4,8),(15,16)'::"cube");
+SELECT cube_size('(42,137)'::"cube");
 
 -- Load some example data and build the index
 --
-CREATE TABLE test_cube (c cube);
+CREATE TABLE test_cube (c "cube");
 
 \copy test_cube from 'data/test_cube.data'
 
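The rename has to cascade into contrib/earthdistance, shown next, because its `earth` domain and helper functions are defined over the cube type. A sketch of the dependency under the patched grammar (install order as in the stock extensions):

```sql
-- earthdistance requires cube to be installed first; with CUBE now a
-- col_name_keyword, the extension's type references must be quoted:
CREATE EXTENSION "cube";
CREATE EXTENSION earthdistance;

-- the domain underlying ll_to_earth() and friends is a constrained cube:
SELECT ll_to_earth(0, 0)::"cube";
```
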
diff --git a/contrib/earthdistance/earthdistance--1.0.sql b/contrib/earthdistance/earthdistance--1.0.sql
index 4af9062..ad22f65 100644
--- a/contrib/earthdistance/earthdistance--1.0.sql
+++ b/contrib/earthdistance/earthdistance--1.0.sql
@@ -27,10 +27,10 @@ AS 'SELECT ''6378168''::float8';
 -- and that the point must be very near the surface of the sphere
 -- centered about the origin with the radius of the earth.
 
-CREATE DOMAIN earth AS cube
+CREATE DOMAIN earth AS "cube"
   CONSTRAINT not_point check(cube_is_point(value))
   CONSTRAINT not_3d check(cube_dim(value) <= 3)
-  CONSTRAINT on_surface check(abs(cube_distance(value, '(0)'::cube) /
+  CONSTRAINT on_surface check(abs(cube_distance(value, '(0)'::"cube") /
   earth() - 1) < '10e-7'::float8);
 
 CREATE FUNCTION sec_to_gc(float8)
@@ -49,7 +49,7 @@ CREATE FUNCTION ll_to_earth(float8, float8)
 RETURNS earth
 LANGUAGE SQL
 IMMUTABLE STRICT
-AS 'SELECT cube(cube(cube(earth()*cos(radians($1))*cos(radians($2))),earth()*cos(radians($1))*sin(radians($2))),earth()*sin(radians($1)))::earth';
+AS 'SELECT "cube"("cube"("cube"(earth()*cos(radians($1))*cos(radians($2))),earth()*cos(radians($1))*sin(radians($2))),earth()*sin(radians($1)))::earth';
 
 CREATE FUNCTION latitude(earth)
 RETURNS float8
@@ -70,7 +70,7 @@ IMMUTABLE STRICT
 AS 'SELECT sec_to_gc(cube_distance($1, $2))';
 
 CREATE FUNCTION earth_box(earth, float8)
-RETURNS cube
+RETURNS "cube"
 LANGUAGE SQL
 IMMUTABLE STRICT
 AS 'SELECT cube_enlarge($1, gc_to_sec($2), 3)';
diff --git a/contrib/earthdistance/expected/earthdistance.out b/contrib/earthdistance/expected/earthdistance.out
index 9bd556f..f99276f 100644
--- a/contrib/earthdistance/expected/earthdistance.out
+++ b/contrib/earthdistance/expected/earthdistance.out
@@ -9,7 +9,7 @@
 --
 CREATE EXTENSION earthdistance;  -- fail, must install cube first
 ERROR:  required extension "cube" is not installed
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 CREATE EXTENSION earthdistance;
 --
 -- The radius of the Earth we are using.
@@ -892,7 +892,7 @@ SELECT cube_dim(ll_to_earth(0,0)) <= 3;
  t
 (1 row)
 
-SELECT abs(cube_distance(ll_to_earth(0,0), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(0,0), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
  ?column? 
 ----------
@@ -910,7 +910,7 @@ SELECT cube_dim(ll_to_earth(30,60)) <= 3;
  t
 (1 row)
 
-SELECT abs(cube_distance(ll_to_earth(30,60), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(30,60), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
  ?column? 
 ----------
@@ -928,7 +928,7 @@ SELECT cube_dim(ll_to_earth(60,90)) <= 3;
  t
 (1 row)
 
-SELECT abs(cube_distance(ll_to_earth(60,90), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(60,90), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
  ?column? 
 ----------
@@ -946,7 +946,7 @@ SELECT cube_dim(ll_to_earth(-30,-90)) <= 3;
  t
 (1 row)
 
-SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
  ?column? 
 ----------
@@ -959,35 +959,35 @@ SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::cube) / earth() - 1) <
 -- list what's installed
 \dT
                                               List of data types
- Schema | Name  |                                         Description                                         
---------+-------+---------------------------------------------------------------------------------------------
- public | cube  | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
- public | earth | 
+ Schema |  Name  |                                         Description                                         
+--------+--------+---------------------------------------------------------------------------------------------
+ public | "cube" | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
+ public | earth  | 
 (2 rows)
 
-drop extension cube;  -- fail, earthdistance requires it
+drop extension "cube";  -- fail, earthdistance requires it
 ERROR:  cannot drop extension cube because other objects depend on it
 DETAIL:  extension earthdistance depends on extension cube
 HINT:  Use DROP ... CASCADE to drop the dependent objects too.
 drop extension earthdistance;
-drop type cube;  -- fail, extension cube requires it
-ERROR:  cannot drop type cube because extension cube requires it
+drop type "cube";  -- fail, extension cube requires it
+ERROR:  cannot drop type "cube" because extension cube requires it
 HINT:  You can drop extension cube instead.
 -- list what's installed
 \dT
-                                             List of data types
- Schema | Name |                                         Description                                         
---------+------+---------------------------------------------------------------------------------------------
- public | cube | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
+                                              List of data types
+ Schema |  Name  |                                         Description                                         
+--------+--------+---------------------------------------------------------------------------------------------
+ public | "cube" | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
 (1 row)
 
-create table foo (f1 cube, f2 int);
-drop extension cube;  -- fail, foo.f1 requires it
+create table foo (f1 "cube", f2 int);
+drop extension "cube";  -- fail, foo.f1 requires it
 ERROR:  cannot drop extension cube because other objects depend on it
-DETAIL:  table foo column f1 depends on type cube
+DETAIL:  table foo column f1 depends on type "cube"
 HINT:  Use DROP ... CASCADE to drop the dependent objects too.
 drop table foo;
-drop extension cube;
+drop extension "cube";
 -- list what's installed
 \dT
      List of data types
@@ -1008,7 +1008,7 @@ drop extension cube;
 (0 rows)
 
 create schema c;
-create extension cube with schema c;
+create extension "cube" with schema c;
 -- list what's installed
 \dT public.*
      List of data types
@@ -1029,23 +1029,23 @@ create extension cube with schema c;
 (0 rows)
 
 \dT c.*
-                                              List of data types
- Schema |  Name  |                                         Description                                         
---------+--------+---------------------------------------------------------------------------------------------
- c      | c.cube | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
+                                               List of data types
+ Schema |   Name   |                                         Description                                         
+--------+----------+---------------------------------------------------------------------------------------------
+ c      | c."cube" | multi-dimensional cube '(FLOAT-1, FLOAT-2, ..., FLOAT-N), (FLOAT-1, FLOAT-2, ..., FLOAT-N)'
 (1 row)
 
-create table foo (f1 c.cube, f2 int);
-drop extension cube;  -- fail, foo.f1 requires it
+create table foo (f1 c."cube", f2 int);
+drop extension "cube";  -- fail, foo.f1 requires it
 ERROR:  cannot drop extension cube because other objects depend on it
-DETAIL:  table foo column f1 depends on type c.cube
+DETAIL:  table foo column f1 depends on type c."cube"
 HINT:  Use DROP ... CASCADE to drop the dependent objects too.
 drop schema c;  -- fail, cube requires it
 ERROR:  cannot drop schema c because other objects depend on it
 DETAIL:  extension cube depends on schema c
-table foo column f1 depends on type c.cube
+table foo column f1 depends on type c."cube"
 HINT:  Use DROP ... CASCADE to drop the dependent objects too.
-drop extension cube cascade;
+drop extension "cube" cascade;
 NOTICE:  drop cascades to table foo column f1
 \d foo
       Table "public.foo"
diff --git a/contrib/earthdistance/sql/earthdistance.sql b/contrib/earthdistance/sql/earthdistance.sql
index 8604502..35dd9b8 100644
--- a/contrib/earthdistance/sql/earthdistance.sql
+++ b/contrib/earthdistance/sql/earthdistance.sql
@@ -9,7 +9,7 @@
 --
 
 CREATE EXTENSION earthdistance;  -- fail, must install cube first
-CREATE EXTENSION cube;
+CREATE EXTENSION "cube";
 CREATE EXTENSION earthdistance;
 
 --
@@ -284,19 +284,19 @@ SELECT earth_box(ll_to_earth(90,180),
 
 SELECT is_point(ll_to_earth(0,0));
 SELECT cube_dim(ll_to_earth(0,0)) <= 3;
-SELECT abs(cube_distance(ll_to_earth(0,0), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(0,0), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
 SELECT is_point(ll_to_earth(30,60));
 SELECT cube_dim(ll_to_earth(30,60)) <= 3;
-SELECT abs(cube_distance(ll_to_earth(30,60), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(30,60), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
 SELECT is_point(ll_to_earth(60,90));
 SELECT cube_dim(ll_to_earth(60,90)) <= 3;
-SELECT abs(cube_distance(ll_to_earth(60,90), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(60,90), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
 SELECT is_point(ll_to_earth(-30,-90));
 SELECT cube_dim(ll_to_earth(-30,-90)) <= 3;
-SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::cube) / earth() - 1) <
+SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::"cube") / earth() - 1) <
        '10e-12'::float8;
 
 --
@@ -306,22 +306,22 @@ SELECT abs(cube_distance(ll_to_earth(-30,-90), '(0)'::cube) / earth() - 1) <
 -- list what's installed
 \dT
 
-drop extension cube;  -- fail, earthdistance requires it
+drop extension "cube";  -- fail, earthdistance requires it
 
 drop extension earthdistance;
 
-drop type cube;  -- fail, extension cube requires it
+drop type "cube";  -- fail, extension cube requires it
 
 -- list what's installed
 \dT
 
-create table foo (f1 cube, f2 int);
+create table foo (f1 "cube", f2 int);
 
-drop extension cube;  -- fail, foo.f1 requires it
+drop extension "cube";  -- fail, foo.f1 requires it
 
 drop table foo;
 
-drop extension cube;
+drop extension "cube";
 
 -- list what's installed
 \dT
@@ -330,7 +330,7 @@ drop extension cube;
 
 create schema c;
 
-create extension cube with schema c;
+create extension "cube" with schema c;
 
 -- list what's installed
 \dT public.*
@@ -338,13 +338,13 @@ create extension cube with schema c;
 \do public.*
 \dT c.*
 
-create table foo (f1 c.cube, f2 int);
+create table foo (f1 c."cube", f2 int);
 
-drop extension cube;  -- fail, foo.f1 requires it
+drop extension "cube";  -- fail, foo.f1 requires it
 
 drop schema c;  -- fail, cube requires it
 
-drop extension cube cascade;
+drop extension "cube" cascade;
 
 \d foo
 
Attachment: gsp-u.patch (text/x-patch)
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 2aafa16..5511273 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -667,6 +667,11 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
  * and for NULL so that it can follow b_expr in ColQualList without creating
  * postfix-operator problems.
  *
+ * To support CUBE and ROLLUP in GROUP BY without reserving them, we give them
+ * an explicit priority lower than '(', so that a rule with CUBE '(' will shift
+ * rather than reducing a conflicting rule that takes CUBE as a function name.
+ * Using the same precedence as IDENT seems right for the reasons given above.
+ *
  * The frame_bound productions UNBOUNDED PRECEDING and UNBOUNDED FOLLOWING
  * are even messier: since UNBOUNDED is an unreserved keyword (per spec!),
  * there is no principled way to distinguish these from the productions
@@ -677,7 +682,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
  * blame any funny behavior of UNBOUNDED on the SQL standard, though.
  */
 %nonassoc	UNBOUNDED		/* ideally should have same precedence as IDENT */
-%nonassoc	IDENT NULL_P PARTITION RANGE ROWS PRECEDING FOLLOWING
+%nonassoc	IDENT NULL_P PARTITION RANGE ROWS PRECEDING FOLLOWING CUBE ROLLUP
 %left		Op OPERATOR		/* multi-character ops and user-defined operators */
 %nonassoc	NOTNULL
 %nonassoc	ISNULL
@@ -10035,6 +10040,12 @@ empty_grouping_set:
 				}
 		;
 
+/*
+ * These hacks rely on setting precedence of CUBE and ROLLUP below that of '(',
+ * so that they shift in these rules rather than reducing the conflicting
+ * unreserved_keyword rule.
+ */
+
 rollup_clause:
 			ROLLUP '(' expr_list ')'
 				{
@@ -13156,6 +13167,7 @@ unreserved_keyword:
 			| COPY
 			| COST
 			| CSV
+			| CUBE
 			| CURRENT_P
 			| CURSOR
 			| CYCLE
@@ -13303,6 +13315,7 @@ unreserved_keyword:
 			| REVOKE
 			| ROLE
 			| ROLLBACK
+			| ROLLUP
 			| ROWS
 			| RULE
 			| SAVEPOINT
@@ -13394,7 +13407,6 @@ col_name_keyword:
 			| CHAR_P
 			| CHARACTER
 			| COALESCE
-			| CUBE
 			| DEC
 			| DECIMAL_P
 			| EXISTS
@@ -13417,7 +13429,6 @@ col_name_keyword:
 			| POSITION
 			| PRECISION
 			| REAL
-			| ROLLUP
 			| ROW
 			| SETOF
 			| SMALLINT
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 5344736..e170964 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -4888,12 +4888,13 @@ get_rule_sortgroupclause(Index ref, List *tlist, bool force_colno,
 	expr = (Node *) tle->expr;
 
 	/*
-	 * Use column-number form if requested by caller.  Otherwise, if
-	 * expression is a constant, force it to be dumped with an explicit cast
-	 * as decoration --- this is because a simple integer constant is
-	 * ambiguous (and will be misinterpreted by findTargetlistEntry()) if we
-	 * dump it without any decoration.  Otherwise, just dump the expression
-	 * normally.
+	 * Use column-number form if requested by caller.  Otherwise, if expression
+	 * is a constant, force it to be dumped with an explicit cast as decoration
+	 * --- this is because a simple integer constant is ambiguous (and will be
+	 * misinterpreted by findTargetlistEntry()) if we dump it without any
+	 * decoration.  If it's anything more complex than a simple Var, then force
+	 * extra parens around it, to ensure it can't be misinterpreted as a cube()
+	 * or rollup() construct.
 	 */
 	if (force_colno)
 	{
@@ -4902,8 +4903,27 @@ get_rule_sortgroupclause(Index ref, List *tlist, bool force_colno,
 	}
 	else if (expr && IsA(expr, Const))
 		get_const_expr((Const *) expr, context, 1);
+	else if (!expr || IsA(expr, Var))
+		get_rule_expr(expr, context, true);
 	else
+	{
+		/*
+		 * We must force parens for function-like expressions even if
+		 * PRETTY_PAREN is off, since those are the ones in danger of
+		 * misparsing. For other expressions we need to force them
+		 * only if PRETTY_PAREN is on, since otherwise the expression
+		 * will output them itself. (We can't skip the parens.)
+		 */
+		bool	need_paren = (PRETTY_PAREN(context)
+							  || IsA(expr, FuncExpr)
+							  || IsA(expr, Aggref)
+							  || IsA(expr, WindowFunc));
+		if (need_paren)
+			appendStringInfoString(context->buf, "(");
 		get_rule_expr(expr, context, true);
+		if (need_paren)
+			appendStringInfoString(context->buf, ")");
+	}
 
 	return expr;
 }
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index fe42789..b2900a9 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -98,7 +98,7 @@ PG_KEYWORD("cost", COST, UNRESERVED_KEYWORD)
 PG_KEYWORD("create", CREATE, RESERVED_KEYWORD)
 PG_KEYWORD("cross", CROSS, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("csv", CSV, UNRESERVED_KEYWORD)
-PG_KEYWORD("cube", CUBE, COL_NAME_KEYWORD)
+PG_KEYWORD("cube", CUBE, UNRESERVED_KEYWORD)
 PG_KEYWORD("current", CURRENT_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("current_catalog", CURRENT_CATALOG, RESERVED_KEYWORD)
 PG_KEYWORD("current_date", CURRENT_DATE, RESERVED_KEYWORD)
@@ -325,7 +325,7 @@ PG_KEYWORD("revoke", REVOKE, UNRESERVED_KEYWORD)
 PG_KEYWORD("right", RIGHT, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("role", ROLE, UNRESERVED_KEYWORD)
 PG_KEYWORD("rollback", ROLLBACK, UNRESERVED_KEYWORD)
-PG_KEYWORD("rollup", ROLLUP, COL_NAME_KEYWORD)
+PG_KEYWORD("rollup", ROLLUP, UNRESERVED_KEYWORD)
 PG_KEYWORD("row", ROW, COL_NAME_KEYWORD)
 PG_KEYWORD("rows", ROWS, UNRESERVED_KEYWORD)
 PG_KEYWORD("rule", RULE, UNRESERVED_KEYWORD)
#80 David Fetter
david@fetter.org
In reply to: Andrew Gierth (#79)
Re: Final Patch for GROUPING SETS

On Sat, Sep 27, 2014 at 06:37:38AM +0100, Andrew Gierth wrote:

"Andrew" == Andrew Gierth <andrew@tao11.riddles.org.uk> writes:

Andrew> I was holding off on posting a recut patch with the latest
Andrew> EXPLAIN formatting changes (which are basically cosmetic)
Andrew> until it became clear whether RLS was likely to be reverted
Andrew> or kept (we have a tiny but irritating conflict with it, in
Andrew> the regression test schedule file where we both add to the
Andrew> same list of tests).

And here is that recut patch set.

Changes since last posting (other than conflict removal):

- gsp1.patch: clearer EXPLAIN output as per discussion

Recut patches:

gsp1.patch - phase 1 code patch (full syntax, limited functionality)
gsp2.patch - phase 2 code patch (adds full functionality using the
new chained aggregate mechanism)
gsp-doc.patch - docs
gsp-contrib.patch - quote "cube" in contrib/cube and contrib/earthdistance,
intended primarily for testing pending a decision on
renaming contrib/cube or unreserving keywords
gsp-u.patch - proposed method to unreserve CUBE and ROLLUP

(the contrib patch is not necessary if the -u patch is used; the
contrib/pg_stat_statements fixes are in the phase1 patch)

--
Andrew (irc:RhodiumToad)

Tom, any word on this?

Cheers,
David.
--
David Fetter <david@fetter.org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david.fetter@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#81 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andrew Gierth (#79)
Re: Final Patch for GROUPING SETS

Andrew Gierth <andrew@tao11.riddles.org.uk> writes:

And here is that recut patch set.

I started looking over this patch, but eventually decided that it needs
more work to be committable than I'm prepared to put in right now.

My single biggest complaint is about the introduction of struct
GroupedVar. If we stick with that, we're going to have to teach an
extremely large number of places that know about Vars to also know
about GroupedVars. This will result in code bloat and errors of
omission. If you think the latter concern is hypothetical, note that
you can't get 40 lines into gsp1.patch without finding such an omission,
namely the patch fails to teach pg_stat_statements.c about GroupedVars.
(That also points up that some of the errors of omission will be in
third-party code that we can't fix easily.)

I think you should get rid of that concept and instead implement the
behavior by having nodeAgg.c set the relevant fields of the representative
tuple slot to NULL, so that a regular Var does the right thing.

I'm also not happy about the quality of the internal documentation.
The big problem here is the seriously lacking documentation of the new
parse node types, eg

+/*
+ * Node representing substructure in GROUPING SETS
+ *
+ * This is not actually executable, but it's used in the raw parsetree
+ * representation of GROUP BY, and in the groupingSets field of Query, to
+ * preserve the original structure of rollup/cube clauses for readability
+ * rather than reducing everything to grouping sets.
+ */
+
+typedef enum
+{
+	GROUPING_SET_EMPTY,
+	GROUPING_SET_SIMPLE,
+	GROUPING_SET_ROLLUP,
+	GROUPING_SET_CUBE,
+	GROUPING_SET_SETS
+} GroupingSetKind;
+
+typedef struct GroupingSet
+{
+	Expr		xpr;
+	GroupingSetKind kind;
+	List	   *content;
+	int			location;
+} GroupingSet;

The only actual documentation there is a long-winded excuse for having
put the struct declaration in the wrong place. (Since it's not an
executable expression, it should be in parsenodes.h not primnodes.h.)
Good luck figuring out what "content" is a list of, or indeed anything
at all except that this has got something to do with grouping sets.
If one digs around in the patch long enough, some useful information can
be found in the header comments for various functions --- but there should
be a spec for what this struct means, what its fields are, what the
relevant invariants are *in the .h file*. Poking around in parsenodes.h,
eg the description of SortGroupClause, should give you an idea of the
standard here.

I'm not too happy about struct Grouping either. If one had to guess, one
would probably guess that this was part of the representation of a GROUP
BY clause; a guess led on by the practice of the patch of dealing with
this and struct GroupingSet together, as in eg pg_stat_statements.c and
nodes.h. Reading enough of the patch will eventually clue you that this
is the representation of a call of the GROUPING() pseudo-function, but
that's not exactly clear from either the name of the struct or its random
placement between Var and Const in primnodes.h. And the comment is oh so
helpful:

+/*
+ * Grouping
+ */

I'd be inclined to call it GroupingFunc and put it after
Aggref/WindowFunc. Also please note that there is an attempt throughout
the system to order code stanzas that deal with assorted node types in an
order matching the order in which they're declared in the *nodes.h files.
You should never be flipping a coin to decide where to add such code, and
"put it at the end of the existing list" is usually not the best answer
either.

Some other random examples of inadequate attention to commenting:

@@ -243,7 +243,7 @@ typedef struct AggStatePerAggData
* rest.
*/

-	Tuplesortstate *sortstate;	/* sort object, if DISTINCT or ORDER BY */
+	Tuplesortstate **sortstate;	/* sort object, if DISTINCT or ORDER BY */

This change didn't even bother to pluralize the comment, let alone explain
the length of the array or what it's indexed according to, let alone
explain why we now need multiple tuplesort objects in what is still
apparently a "per aggregate" state struct. (BTW, as a matter of good
engineering I think it's useful to change a field's name when you change
its meaning and representation so fundamentally. In this case, renaming
to "sortstates" would have been clearer and would have helped ensure that
you didn't miss fixing any referencing code.)

@@ -338,81 +339,101 @@ static Datum GetAggInitVal(Datum textInitVal, Oid transtype);
static void
initialize_aggregates(AggState *aggstate,
AggStatePerAgg peragg,
- AggStatePerGroup pergroup)
+ AggStatePerGroup pergroup,
+ int numReinitialize)
{
int aggno;

I wonder what numReinitialize is, or why it's needed, or (having read
more code than I should have had to in order to guess at what it is)
why it is that only the first N sortstates need to be reset. The comments
at the call sites are no more enlightening.

I don't really have any comments on the algorithms yet, having spent too
much time trying to figure out underdocumented data structures to get to
the algorithms. However, noting the addition of list_intersection_int()
made me wonder whether you'd not be better off reducing the integer lists
to bitmapsets a lot sooner, perhaps even at parse analysis.
list_intersection_int() is going to be O(N^2) by nature. Maybe N can't
get large enough to matter in this context, but I do see places that
seem to be concerned about performance.
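Tom's point about reducing the integer lists to bitmapsets can be illustrated with a small sketch (Python here, purely illustrative; PostgreSQL's actual Bitmapset is a C struct and these helper names are made up): packing column indices into a machine word turns intersection into a single AND, versus the pairwise scan that list_intersection_int implies.

```python
def list_intersection_int(a, b):
    # Mimics the nested-loop behaviour of the C list function:
    # O(len(a) * len(b)) comparisons.
    return [x for x in a if x in b]

def to_bitmapset(cols):
    # Pack a list of small non-negative integers into one bitmask.
    mask = 0
    for c in cols:
        mask |= 1 << c
    return mask

def from_bitmapset(mask):
    # Unpack a bitmask back into a sorted list of member integers.
    out, i = [], 0
    while mask:
        if mask & 1:
            out.append(i)
        mask >>= 1
        i += 1
    return out

a, b = [1, 3, 5, 7], [3, 4, 5]
assert list_intersection_int(a, b) == [3, 5]
# Same result, but the intersection itself is one bitwise AND.
assert from_bitmapset(to_bitmapset(a) & to_bitmapset(b)) == [3, 5]
```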

I've not spent any real effort looking at gsp2.patch yet, but it seems
even worse off comment-wise: if there's any explanation in there at all
of what a "chained aggregate" is, I didn't find it. I'd also counsel you
to find some other way to do it than putting bool chain_head fields in
Aggref nodes; that looks like a mess, eg, it will break equal() tests
for expression nodes that probably should still be seen as equal.

I took a quick look at gsp-u.patch. It seems like that approach should
work, with of course the caveat that using CUBE/ROLLUP as function names
in a GROUP BY list would be problematic. I'm not convinced by the
commentary in ruleutils.c suggesting that extra parentheses would help
disambiguate: aren't extra parentheses still going to contain grouping
specs according to the standard? Forcibly schema-qualifying such function
names seems like a less fragile answer on that end.

regards, tom lane


#82 Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Tom Lane (#81)
Re: Final Patch for GROUPING SETS

"Tom" == Tom Lane <tgl@sss.pgh.pa.us> writes:

More comment on this later, but I want to highlight these specific
points since we need clear answers here to avoid wasted time and
effort:

Tom> I've not spent any real effort looking at gsp2.patch yet, but it
Tom> seems even worse off comment-wise: if there's any explanation in
Tom> there at all of what a "chained aggregate" is, I didn't find it.

(Maybe "stacked" would have been a better term.)

What that code does is produce plans that look like this:

GroupAggregate
-> Sort
-> ChainAggregate
-> Sort
-> ChainAggregate

in much the same way that WindowAgg nodes are generated.

Where would you consider the best place to comment this? The WindowAgg
equivalent seems to be discussed primarily in the header comment of
nodeWindowAgg.c.

Tom> I'd also counsel you to find some other way to do it than
Tom> putting bool chain_head fields in Aggref nodes;

There are no chain_head fields in Aggref nodes.

Agg.chain_head is true for the Agg node at the top of the chain (the
GroupAggregate node in the above example), while AggState.chain_head
is set on the ChainAggregate nodes to point to the AggState of the
GroupAggregate node.

What we need to know before doing any further work on this is whether
this idea of stacking up aggregate and sort nodes is a viable one.

(The feedback I've had so far suggests that the performance is
acceptable, even if there are still optimization opportunities that
can be tackled later, like adding HashAggregate support.)

Tom> I took a quick look at gsp-u.patch. It seems like that approach
Tom> should work, with of course the caveat that using CUBE/ROLLUP as
Tom> function names in a GROUP BY list would be problematic. I'm not
Tom> convinced by the commentary in ruleutils.c suggesting that extra
Tom> parentheses would help disambiguate: aren't extra parentheses
Tom> still going to contain grouping specs according to the standard?

The spec is of minimal help here since it does not allow expressions in
GROUP BY at all, last I looked; only column references.

The extra parens do actually disambiguate because CUBE(x) and
(CUBE(x)) are not equivalent anywhere; while CUBE(x) can appear inside
GROUPING SETS (...), it cannot appear inside a (...) list nested inside
a GROUPING SETS list (or anywhere else).

As the comments in gram.y explain, the productions used are intended
to follow the spec with the exception of using a_expr where the spec
requires <ordinary grouping set>. So CUBE and ROLLUP are recognized as
special only as part of a group_by_item (<grouping element> in the
spec), and as soon as we see a paren that isn't part of the "GROUPING
SETS (" opener, we're forced into parsing an a_expr, in which CUBE()
would become a function call.
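The disambiguation Andrew describes can be mimicked in a toy recursive-descent sketch (Python; the grammar, token shapes, and node tuples here are all hypothetical — the real grammar is LALR in gram.y): at the top level of a group_by_item, CUBE followed by '(' shifts into the grouping construct, while inside any parenthesized expression the same tokens fall through to an ordinary function call.

```python
import re

def tokens(s):
    return re.findall(r"[A-Za-z_]\w*|[(),]", s)

def parse_group_by_item(toks):
    # Only here is CUBE/ROLLUP special: CUBE '(' is taken as the
    # grouping construct rather than reducing CUBE to a function name.
    if toks and toks[0].upper() in ("CUBE", "ROLLUP") and toks[1:2] == ["("]:
        kind = toks[0].upper()
        args, rest = parse_list(toks[2:])
        return (kind, args), rest
    return parse_expr(toks)

def parse_list(toks):
    # Comma-separated expressions up to the closing paren.
    args = []
    while True:
        node, toks = parse_expr(toks)
        args.append(node)
        if toks[0] == ",":
            toks = toks[1:]
            continue
        assert toks[0] == ")"
        return args, toks[1:]

def parse_expr(toks):
    if toks[0] == "(":
        # Once inside parens we are in expression territory for good.
        node, toks = parse_expr(toks[1:])
        assert toks[0] == ")"
        return node, toks[1:]
    name, toks = toks[0], toks[1:]
    if toks[:1] == ["("]:
        # In expression context, cube(...) is just a function call.
        args, toks = parse_list(toks[1:])
        return ("call", name, args), toks
    return ("var", name), toks

# CUBE(x) at the top level is a grouping construct...
assert parse_group_by_item(tokens("cube(a, b)"))[0] == \
    ("CUBE", [("var", "a"), ("var", "b")])
# ...but (CUBE(x)) is an ordinary function call, as in the text above.
assert parse_group_by_item(tokens("(cube(a))"))[0] == \
    ("call", "cube", [("var", "a")])
```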

(The case of upgrading from an old pg version seems to require the use
of --quote-all-identifiers in pg_dump.)

Tom> Forcibly schema-qualifying such function names seems like a less
Tom> fragile answer on that end.

That I guess would require keeping more state, unless you applied it
everywhere to any function with a keyword for a name? I dunno.

The question that needs deciding here is less whether the approach
_could_ work but whether we _want_ it. The objection has been made
that we are in effect introducing a new category of "unreserved almost
everywhere" keyword, which I think has a point; on the other hand,
reserving CUBE is a seriously painful prospect.

--
Andrew (irc:RhodiumToad)


#83 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andrew Gierth (#82)
Re: Final Patch for GROUPING SETS

Andrew Gierth <andrew@tao11.riddles.org.uk> writes:

"Tom" == Tom Lane <tgl@sss.pgh.pa.us> writes:
Tom> I've not spent any real effort looking at gsp2.patch yet, but it
Tom> seems even worse off comment-wise: if there's any explanation in
Tom> there at all of what a "chained aggregate" is, I didn't find it.

(Maybe "stacked" would have been a better term.)

What that code does is produce plans that look like this:

GroupAggregate
-> Sort
-> ChainAggregate
-> Sort
-> ChainAggregate

in much the same way that WindowAgg nodes are generated.

That seems pretty messy, especially given your further comments that these
plan nodes are interconnected and know about each other (though you failed
to say exactly how). The claimed analogy to WindowAgg therefore seems
bogus since stacked WindowAggs are independent, AFAIR anyway. I'm also
wondering about performance: doesn't this imply more rows passing through
some of the plan steps than really necessary?

Also, how would this extend to preferring hashed aggregation in some of
the grouping steps?

ISTM that maybe what we should do is take a cue from the SQL spec, which
defines these things in terms of UNION ALL of plain-GROUP-BY operations
reading from a common CTE. Abstractly, that is, we'd have

Append
-> GroupAggregate
-> Sort
-> source data
-> GroupAggregate
-> Sort
-> source data
-> GroupAggregate
-> Sort
-> source data
...

(or some of the arms could be HashAgg without a sort). Then the question
is what exactly the aggregates are reading from. We could do worse than
make it a straight CTE, I suppose.
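The spec's definitional rewrite that this plan shape mirrors can be sketched in a few lines (Python stand-in for the SQL; the column names a, b, v and the data are invented for illustration): each grouping set is one plain GROUP BY arm over the same source, with ungrouped columns returned as NULL, and the arms appended.

```python
from collections import defaultdict

def grouped_agg(rows, group_cols, all_cols, agg_col):
    # One plain GROUP BY arm: group on group_cols, NULL out the rest.
    sums = defaultdict(int)
    for r in rows:
        sums[tuple(r[c] for c in group_cols)] += r[agg_col]
    out = []
    for key, total in sums.items():
        vals = dict(zip(group_cols, key))
        out.append(tuple(vals.get(c) for c in all_cols) + (total,))
    return out

def grouping_sets(rows, sets, all_cols, agg_col):
    # Spec semantics: UNION ALL of one GROUP BY per grouping set,
    # all arms reading the same source (the "common CTE").
    result = []
    for s in sets:
        result.extend(grouped_agg(rows, s, all_cols, agg_col))
    return result

rows = [
    {"a": 1, "b": "x", "v": 10},
    {"a": 1, "b": "y", "v": 20},
    {"a": 2, "b": "x", "v": 5},
]
# CUBE(a, b) expands to these four grouping sets.
res = grouping_sets(rows, [("a", "b"), ("a",), ("b",), ()], ("a", "b"), "v")
assert (1, None, 30) in res      # from the GROUP BY a arm
assert (None, "x", 15) in res    # from the GROUP BY b arm
assert (None, None, 35) in res   # grand total
assert len(res) == 3 + 2 + 2 + 1
```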

Tom> I'd also counsel you to find some other way to do it than
Tom> putting bool chain_head fields in Aggref nodes;

There are no chain_head fields in Aggref nodes.

Oh, I mistook "struct Agg" for "struct Aggref". (That's another pretty
poorly chosen struct name, though I suppose it's far too late to change
that choice.) Still, interconnecting plan nodes that aren't adjacent in
the plan tree doesn't sound like a great idea to me.

Tom> I took a quick look at gsp-u.patch. It seems like that approach
Tom> should work, with of course the caveat that using CUBE/ROLLUP as
Tom> function names in a GROUP BY list would be problematic. I'm not
Tom> convinced by the commentary in ruleutils.c suggesting that extra
Tom> parentheses would help disambiguate: aren't extra parentheses
Tom> still going to contain grouping specs according to the standard?

The extra parens do actually disambiguate because CUBE(x) and
(CUBE(x)) are not equivalent anywhere; while CUBE(x) can appear inside
GROUPING SETS (...), it cannot appear inside a (...) list nested inside
a GROUPING SETS list (or anywhere else).

Maybe, but this seems very fragile and non-future-proof. I think
double-quoting or schema-qualifying such function names would be safer
when you think about the use-case of dumping views that may get loaded
into future Postgres versions.

The question that needs deciding here is less whether the approach
_could_ work but whether we _want_ it. The objection has been made
that we are in effect introducing a new category of "unreserved almost
everywhere" keyword, which I think has a point;

True, but I think that ship has already sailed. We already have similar
behavior for PARTITION, RANGE, and ROWS (see the opt_existing_window_name
production), and I think PRECEDING, FOLLOWING, and UNBOUNDED are
effectively reserved-in-certain-very-specific-contexts as well. And there
are similar behaviors in plpgsql's parser.

on the other hand,
reserving CUBE is a seriously painful prospect.

Precisely. I think renaming or getting rid of contrib/cube would have
to be something done in a staged fashion over multiple release cycles.
Waiting several years to get GROUPING SETS doesn't seem appealing at all
compared to this alternative.

regards, tom lane


#84 Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Tom Lane (#83)
Re: Final Patch for GROUPING SETS

"Tom" == Tom Lane <tgl@sss.pgh.pa.us> writes:

What that code does is produce plans that look like this:

GroupAggregate
-> Sort
-> ChainAggregate
-> Sort
-> ChainAggregate

in much the same way that WindowAgg nodes are generated.

Tom> That seems pretty messy, especially given your further comments
Tom> that these plan nodes are interconnected and know about each
Tom> other (though you failed to say exactly how).

I'd already explained in more detail way back when we posted the
patch. But to reiterate: the ChainAggregate nodes pass through their
input data unchanged, but on group boundaries they write aggregated
result rows to a tuplestore shared by the whole chain. The top node
returns the data from the tuplestore after its own output is
completed.

The chain_head pointer in the ChainAggregate nodes is used for:

- obtaining the head node's targetlist and qual, to use to project
rows into the tuplestore (the ChainAggregate nodes don't do
ordinary projection so they have dummy targetlists like the Sort
nodes do)

- obtaining the pointer to the tuplestore itself

- on rescan without parameter change, to inform the parent node
whether or not the child nodes are also being rescanned (since
the Sort nodes may or may not block this)

Tom> The claimed analogy to WindowAgg therefore seems bogus since
Tom> stacked WindowAggs are independent, AFAIR anyway.

The analogy is only in that they need to see the same input rows but
in different sort orders.

Tom> I'm also wondering about performance: doesn't this imply more
Tom> rows passing through some of the plan steps than really
Tom> necessary?

There's no way to cut down the number of rows seen by intermediate
nodes unless you implement (and require) associative aggregates, which
we do not do in this patch (that's left for possible future
optimization efforts). Our approach makes no new demands on the
implementation of aggregate functions.

Tom> Also, how would this extend to preferring hashed aggregation in
Tom> some of the grouping steps?

My suggestion for extending it to hashed aggs is: by having a (single)
HashAggregate node keep multiple hash tables, per grouping set, then
any arbitrary collection of grouping sets can be handled in one node
provided that memory permits and no non-hashable features are used.
So the normal plan for CUBE(a,b) under this scheme would be just:

HashAggregate
Grouping Sets: (), (a), (b), (a,b)
-> (input path in unsorted order)
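
As a rough sketch of that idea (my own toy Python, not the patch; count(*) is the only aggregate, and all names are invented for illustration), a single unsorted pass can feed one hash table per grouping set:

```python
from collections import defaultdict

def hash_grouping_sets(rows, grouping_sets):
    # One hash table per grouping set; a single unsorted pass over the
    # input updates all of them.  Only count(*) is accumulated here.
    tables = [defaultdict(int) for _ in grouping_sets]
    for row in rows:
        for table, cols in zip(tables, grouping_sets):
            key = tuple(row[c] for c in cols)
            table[key] += 1
    return tables

# CUBE(a,b) expands to the grouping sets (), (a), (b), (a,b)
rows = [{'a': 1, 'b': 'x'}, {'a': 1, 'b': 'y'}, {'a': 2, 'b': 'x'}]
empty, by_a, by_b, by_ab = hash_grouping_sets(
    rows, [(), ('a',), ('b',), ('a', 'b')])
print(empty[()], by_a[(1,)], by_b[('x',)])  # -> 3 2 2
```

Memory is the obvious constraint: all the hash tables are live at once, which is why a plan would fall back to sorting for unhashable or high-cardinality sets.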

If a mixture of hashable and non-hashable data types are used, for
example CUBE(hashable,unhashable), then a plan of this form could be
constructed:

HashAggregate
Grouping Sets: (), (hashable)
-> ChainAggregate
Grouping Sets: (unhashable), (unhashable,hashable)
-> (input path sorted by (unhashable,hashable))

Likewise, plans of this form could be considered for cases like
CUBE(low_card, high_card) where hashed grouping on high_card would
require excessive memory:

HashAggregate
Grouping Sets: (), (low_card)
-> ChainAggregate
Grouping Sets: (high_card), (high_card, low_card)
-> (input path sorted by (high_card, low_card))

Tom> ISTM that maybe what we should do is take a cue from the SQL
Tom> spec, which defines these things in terms of UNION ALL of
Tom> plain-GROUP-BY operations reading from a common CTE.

I looked at that, in fact that was our original plan, but it became
clear quite quickly that it was not going to be easy.

I tried two different approaches. First was to actually re-plan the
input (i.e. running query_planner more than once) for different sort
orders; that crashed and burned quickly thanks to the extent to which
the planner assumes that it'll be run once only on any given input.

Second was to generate a CTE for the input data. This didn't get very
far because everything that already exists to handle CTE nodes assumes
that they are explicit in the planner's input (that they have their
own Query node, etc.) and I was not able to determine a good solution.
If you have any suggestions for how to approach the problem, I'm happy
to have another go at it.

--
Andrew (irc:RhodiumToad)

#85Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andrew Gierth (#84)
Re: Final Patch for GROUPING SETS

Andrew Gierth <andrew@tao11.riddles.org.uk> writes:

"Tom" == Tom Lane <tgl@sss.pgh.pa.us> writes:
Tom> That seems pretty messy, especially given your further comments
Tom> that these plan nodes are interconnected and know about each
Tom> other (though you failed to say exactly how).

I'd already explained in more detail way back when we posted the
patch. But to reiterate: the ChainAggregate nodes pass through their
input data unchanged, but on group boundaries they write aggregated
result rows to a tuplestore shared by the whole chain. The top node
returns the data from the tuplestore after its own output is
completed.

That seems pretty grotty from a performance+memory consumption standpoint.
At peak memory usage, each one of the Sort nodes will contain every input
row, and the shared tuplestore will contain every output row. That will
lead to either a lot of memory eaten, or a lot of temp-file I/O, depending
on how big work_mem is.

In principle, with the CTE+UNION approach I was suggesting, the peak
memory consumption would be one copy of the input rows in the CTE's
tuplestore plus one copy in the active branch's Sort node. I think a
bit of effort would be needed to get there (ie, shut down one branch's
Sort node before starting the next, something I'm pretty sure doesn't
happen today). But it's doable whereas I don't see how we dodge the
multiple-active-sorts problem with the chained implementation.

Tom> ISTM that maybe what we should do is take a cue from the SQL
Tom> spec, which defines these things in terms of UNION ALL of
Tom> plain-GROUP-BY operations reading from a common CTE.

I looked at that, in fact that was our original plan, but it became
clear quite quickly that it was not going to be easy.

I tried two different approaches. First was to actually re-plan the
input (i.e. running query_planner more than once) for different sort
orders; that crashed and burned quickly thanks to the extent to which
the planner assumes that it'll be run once only on any given input.

Well, we'd not want to rescan the input multiple times, so I don't think
that generating independent plan trees for each sort order would be the
thing to do anyway. I suppose ideally it would be nice to check the costs
of getting the different sort orders, so that the one Sort we elide is the
one that gets the best cost savings. But the WindowAgg code isn't that
smart either and no one's really complained, so I think this can wait.
(Eventually I'd like to make such cost comparisons possible as part of the
upper-planner Pathification that I keep nattering about. But it doesn't
seem like a prerequisite for getting GROUPING SETS in.)

Second was to generate a CTE for the input data. This didn't get very
far because everything that already exists to handle CTE nodes assumes
that they are explicit in the planner's input (that they have their
own Query node, etc.) and I was not able to determine a good solution.

Seems like restructuring that wouldn't be *that* hard. We probably don't
want it to be completely like a CTE for planning purposes anyway --- that
would foreclose passing down any knowledge of desired sort order, which
we don't want. But it seems like we could stick a variant of CtePath
atop the chosen result path of the scan/join planning phase. If you like
I can poke into this a bit.

regards, tom lane

#86Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Tom Lane (#85)
Re: Final Patch for GROUPING SETS

"Tom" == Tom Lane <tgl@sss.pgh.pa.us> writes:

I'd already explained in more detail way back when we posted the
patch. But to reiterate: the ChainAggregate nodes pass through
their input data unchanged, but on group boundaries they write
aggregated result rows to a tuplestore shared by the whole
chain. The top node returns the data from the tuplestore after its
own output is completed.

Tom> That seems pretty grotty from a performance+memory consumption
Tom> standpoint. At peak memory usage, each one of the Sort nodes
Tom> will contain every input row,

Has this objection ever been raised for WindowAgg, which has the same
issue?

Tom> and the shared tuplestore will contain every output row.

Every output row except those produced by the top node, and since this
is after grouping, that's expected to be smaller than the input.

Tom> That will lead to either a lot of memory eaten, or a lot of
Tom> temp-file I/O, depending on how big work_mem is.

Yes. Though note that this code only kicks in when dealing with
grouping sets more complex than a simple rollup. A CUBE of two
dimensions uses only one Sort node above whatever is needed to produce
sorted input, and a CUBE of three dimensions uses only two. (It does
increase quite a lot for large cubes though.)

Tom> In principle, with the CTE+UNION approach I was suggesting, the
Tom> peak memory consumption would be one copy of the input rows in
Tom> the CTE's tuplestore plus one copy in the active branch's Sort
Tom> node. I think a bit of effort would be needed to get there (ie,
Tom> shut down one branch's Sort node before starting the next,
Tom> something I'm pretty sure doesn't happen today).

Correct, it doesn't.

However, I notice that having ChainAggregate shut down its input would
also have the effect of bounding the memory usage (to two copies,
which is as good as the append+sorts+CTE case).

Is shutting down and reinitializing parts of the plan really feasible
here? Or would it be a case of forcing a rescan?

Second was to generate a CTE for the input data. This didn't get
very far because everything that already exists to handle CTE
nodes assumes that they are explicit in the planner's input (that
they have their own Query node, etc.) and I was not able to
determine a good solution.

Tom> Seems like restructuring that wouldn't be *that* hard. We
Tom> probably don't want it to be completely like a CTE for planning
Tom> purposes anyway --- that would foreclose passing down any
Tom> knowledge of desired sort order, which we don't want. But it
Tom> seems like we could stick a variant of CtePath atop the chosen
Tom> result path of the scan/join planning phase. If you like I can
Tom> poke into this a bit.

Please do.

That seems to cover the high-priority issues from our point of view.

We will continue working on the other issues, on the assumption that
when we have some idea how to do it your way, we will rip out the
ChainAggregate stuff in favour of an Append-based solution.

--
Andrew (irc:RhodiumToad)

#87Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Tom Lane (#81)
Re: Final Patch for GROUPING SETS

"Tom" == Tom Lane <tgl@sss.pgh.pa.us> writes:

With the high-priority questions out of the way, time to tackle the
rest:

Tom> My single biggest complaint is about the introduction of struct
Tom> GroupedVar. If we stick with that, we're going to have to teach
Tom> an extremely large number of places that know about Vars to also
Tom> know about GroupedVars. This will result in code bloat and
Tom> errors of omission. If you think the latter concern is
Tom> hypothetical, note that you can't get 40 lines into gsp1.patch
Tom> without finding such an omission, namely the patch fails to
Tom> teach pg_stat_statements.c about GroupedVars. (That also points
Tom> up that some of the errors of omission will be in third-party
Tom> code that we can't fix easily.)

Except that GroupedVar is created only late in planning, and so only a
small proportion of places need to know about it (and certainly
pg_stat_statements does not). It also can't end up attached to any
foreign scan or otherwise potentially third-party plan node.

Tom> I think you should get rid of that concept and instead implement
Tom> the behavior by having nodeAgg.c set the relevant fields of the
Tom> representative tuple slot to NULL, so that a regular Var does
Tom> the right thing.

We did consider that. Messing with the null flags of the slot didn't
seem like an especially clean approach. But if that's how you want
it...

Tom> I don't really have any comments on the algorithms yet, having
Tom> spent too much time trying to figure out underdocumented data
Tom> structures to get to the algorithms. However, noting the
Tom> addition of list_intersection_int() made me wonder whether you'd
Tom> not be better off reducing the integer lists to bitmapsets a lot
Tom> sooner, perhaps even at parse analysis.

list_intersection_int should not be time-critical; common queries do
not call it at all (simple cube or rollup clauses always have an empty
grouping set, causing the intersection test to bail immediately), and
in pathological worst-case constructions like putting a dozen
individually grouped columns in front of a 12-d cube (thus calling it
4096 times on lists at least 12 nodes long) it doesn't account for
more than a small percentage even with optimization off and debugging
and asserts on.

The code uses the list representation almost everywhere in parsing and
planning because in some places the order of elements matters, and I
didn't want to keep swapping between a bitmap and a list
representation.

(We _do_ use bitmapsets where we're potentially going to be doing an
O(N^2) number of subset comparisons to build the graph adjacency
list for computing the minimal set of sort operations, and at
execution time.)

I didn't even consider using bitmaps for the output of parse analysis
because at that stage we want to preserve most of the original query
substructure (otherwise view deparse won't look anything like the
original query did).

Tom> list_intersection_int() is going to be O(N^2) by nature. Maybe
Tom> N can't get large enough to matter in this context, but I do see
Tom> places that seem to be concerned about performance.

My main feeling on performance is that simple cube and rollup clauses
or short lists of grouping sets should parse and plan very quickly;
more complex cases should parse and plan fast enough that execution
time on any nontrivial input will swamp the parse/plan time; and the
most complex cases that aren't outright rejected should plan in no
more than a few seconds extra. (We're limiting to 4096 grouping sets
in any query level, which is comparable to other databases and seems
quite excessively high compared to what people are actually likely to
need.)

(don't be fooled by the excessive EXPLAIN time on some queries. There
are performance issues in EXPLAIN output generation that have nothing
to do with this patch, and which I've not pinned down.)

--
Andrew (irc:RhodiumToad)

#88Michael Paquier
michael.paquier@gmail.com
In reply to: Tom Lane (#81)
Re: Final Patch for GROUPING SETS

On Thu, Dec 11, 2014 at 3:36 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

I don't really have any comments on the algorithms yet, having spent too
much time trying to figure out underdocumented data structures to get to
the algorithms. However, noting the addition of list_intersection_int()
made me wonder whether you'd not be better off reducing the integer lists
to bitmapsets a lot sooner, perhaps even at parse analysis.
list_intersection_int() is going to be O(N^2) by nature. Maybe N can't
get large enough to matter in this context, but I do see places that
seem to be concerned about performance.

I've not spent any real effort looking at gsp2.patch yet, but it seems
even worse off comment-wise: if there's any explanation in there at all
of what a "chained aggregate" is, I didn't find it. I'd also counsel you
to find some other way to do it than putting bool chain_head fields in
Aggref nodes; that looks like a mess, eg, it will break equal() tests
for expression nodes that probably should still be seen as equal.

I took a quick look at gsp-u.patch. It seems like that approach should
work, with of course the caveat that using CUBE/ROLLUP as function names
in a GROUP BY list would be problematic. I'm not convinced by the
commentary in ruleutils.c suggesting that extra parentheses would help
disambiguate: aren't extra parentheses still going to contain grouping
specs according to the standard? Forcibly schema-qualifying such function
names seems like a less fragile answer on that end.

Based on those comments, I am marking this patch as "Returned with
Feedback" on the CF app for 2014-10. Andrew, feel free to move this
entry to CF 2014-12 if you are planning to continue working on it, so
that it gets additional review. (Note that this patch's status was
"Waiting on Author" at the time of writing.)
Regards,
--
Michael

#89Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Michael Paquier (#88)
Re: Final Patch for GROUPING SETS

"Michael" == Michael Paquier <michael.paquier@gmail.com> writes:

Michael> Based on those comments, I am marking this patch as
Michael> "Returned with Feedback" on the CF app for 2014-10. Andrew,
Michael> feel free to move this entry to CF 2014-12 if you are
Michael> planning to continue working on it, so that it gets
Michael> additional review. (Note that this patch's status was
Michael> "Waiting on Author" at the time of writing.)

Moved it to 2014-12 and set it back to "waiting on author". We expect to
submit a revised version, though I have no timescale yet.

--
Andrew (irc:RhodiumToad)

#90Michael Paquier
michael.paquier@gmail.com
In reply to: Andrew Gierth (#89)
Re: Final Patch for GROUPING SETS

On Mon, Dec 15, 2014 at 12:28 PM, Andrew Gierth
<andrew@tao11.riddles.org.uk> wrote:

"Michael" == Michael Paquier <michael.paquier@gmail.com> writes:

Michael> Based on those comments, I am marking this patch as
Michael> "Returned with Feedback" on the CF app for 2014-10. Andrew,
Michael> feel free to move this entry to CF 2014-12 if you are
Michael> planning to continue working on it, so that it gets
Michael> additional review. (Note that this patch's status was
Michael> "Waiting on Author" at the time of writing.)

Moved it to 2014-12 and set it back to "waiting on author". We expect to
submit a revised version, though I have no timescale yet.

OK thanks for the update.
--
Michael

#91Noah Misch
noah@leadboat.com
In reply to: Andrew Gierth (#86)
Re: Final Patch for GROUPING SETS

On Sat, Dec 13, 2014 at 04:37:48AM +0000, Andrew Gierth wrote:

"Tom" == Tom Lane <tgl@sss.pgh.pa.us> writes:

I'd already explained in more detail way back when we posted the
patch. But to reiterate: the ChainAggregate nodes pass through
their input data unchanged, but on group boundaries they write
aggregated result rows to a tuplestore shared by the whole
chain. The top node returns the data from the tuplestore after its
own output is completed.

Tom> That seems pretty grotty from a performance+memory consumption
Tom> standpoint. At peak memory usage, each one of the Sort nodes
Tom> will contain every input row,

Has this objection ever been raised for WindowAgg, which has the same
issue?

I caution against using window function performance as the template for
GROUPING SETS performance goals. The benefit of GROUPING SETS compared to its
UNION ALL functional equivalent is 15% syntactic pleasantness, 85% performance
opportunities. By contrast, having window functions is great even with
naive performance, because they enable tasks that are otherwise too hard
in SQL.

Tom> In principle, with the CTE+UNION approach I was suggesting, the
Tom> peak memory consumption would be one copy of the input rows in
Tom> the CTE's tuplestore plus one copy in the active branch's Sort
Tom> node. I think a bit of effort would be needed to get there (ie,
Tom> shut down one branch's Sort node before starting the next,
Tom> something I'm pretty sure doesn't happen today).

Correct, it doesn't.

However, I notice that having ChainAggregate shut down its input would
also have the effect of bounding the memory usage (to two copies,
which is as good as the append+sorts+CTE case).

Agreed, and I find that more promising than the CTE approach. Both strategies
require temporary space covering two copies of the input data. (That, or you
accept rescanning the original input.) The chained approach performs less
I/O. Consider "SELECT count(*) FROM t GROUP BY GROUPING SETS (a, b)", where
pg_relation_size(t) >> RAM. I/O consumed with the chained approach:

read table
write tuplesort 1
read tuplesort 1
write tuplesort 2
read tuplesort 2

I/O consumed with the CTE approach:

read table
write CTE
read CTE
write tuplesort 1
read tuplesort 1
read CTE
write tuplesort 2
read tuplesort 2
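
Restating that accounting as arithmetic (a trivial sketch of the step counts above, with invented names): for N sorts the chained plan makes 2N+1 full-data passes and the CTE plan 3N+2:

```python
def scan_counts(n_sorts):
    # Full-data I/O passes for a GROUPING SETS plan needing n_sorts sorts:
    # chained: read table once, then write + read each tuplesort;
    # CTE: read table, write CTE, then per sort read CTE + write/read tuplesort.
    chained = 1 + 2 * n_sorts
    cte = 2 + 3 * n_sorts
    return chained, cte

print(scan_counts(2))  # -> (5, 8), matching the five and eight steps listed
```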

Tom rightly brought up the space requirements for result rows. The CTE
approach naturally avoids reserving space for that. However, I find it a safe
bet to optimize GROUPING SETS for input >> result. Reserving temporary space
for result rows to save input data I/O is a good trade. We don't actually
need to compromise; one can imagine a GroupAggregateChain plan node with a
sortChain list that exhibits the efficiencies of both. I'm fine moving
forward with the cross-node tuplestore, though.

The elephant in the performance room is the absence of hash aggregation. I
agree with your decision to make that a follow-on patch, but the project would
be in an awkward PR situation if 9.5 has GroupAggregate-only GROUPING SETS. I
may argue to #ifdef-out the feature rather than release that way. We don't
need to debate that prematurely, but keep it in mind while planning.

Thanks,
nm

#92Tom Lane
tgl@sss.pgh.pa.us
In reply to: Noah Misch (#91)
Re: Final Patch for GROUPING SETS

Noah Misch <noah@leadboat.com> writes:

On Sat, Dec 13, 2014 at 04:37:48AM +0000, Andrew Gierth wrote:
"Tom" == Tom Lane <tgl@sss.pgh.pa.us> writes:

Tom> That seems pretty grotty from a performance+memory consumption
Tom> standpoint. At peak memory usage, each one of the Sort nodes
Tom> will contain every input row,

Has this objection ever been raised for WindowAgg, which has the same
issue?

I caution against using window function performance as the template for
GROUPING SETS performance goals. The benefit of GROUPING SETS compared to its
UNION ALL functional equivalent is 15% syntactic pleasantness, 85% performance
opportunities. By contrast, having window functions is great even with
naive performance, because they enable tasks that are otherwise too hard
in SQL.

The other reason that's a bad comparison is that I've not seen many
queries that use more than a couple of window frames, whereas we have
to expect that the number of grouping sets in typical queries will be
significantly more than "a couple". So we do have to think about what
the performance will be like with a lot of sort steps. I'm also worried
that this use-case may finally force us to do something about the "one
work_mem per sort node" behavior, unless we can hack things so that only
one or two sorts reach max memory consumption concurrently.

I still find the ChainAggregate approach too ugly at a system structural
level to accept, regardless of Noah's argument about number of I/O cycles
consumed. We'll be paying for that in complexity and bugs into the
indefinite future, and I wonder if it isn't going to foreclose some other
"performance opportunities" as well.

regards, tom lane

#93Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Tom Lane (#92)
Re: Final Patch for GROUPING SETS

"Tom" == Tom Lane <tgl@sss.pgh.pa.us> writes:

[Noah]

I caution against using window function performance as the
template for GROUPING SETS performance goals. The benefit of
GROUPING SETS compared to its UNION ALL functional equivalent is
15% syntactic pleasantness, 85% performance opportunities.
Contrast that having window functions is great even with naive
performance, because they enable tasks that are otherwise too hard
in SQL.

Yes, this is a reasonable point.

Tom> The other reason that's a bad comparison is that I've not seen
Tom> many queries that use more than a couple of window frames,
Tom> whereas we have to expect that the number of grouping sets in
Tom> typical queries will be significantly more than "a couple".

I would be interested in seeing more good examples of the size and
type of grouping sets used in typical queries.

Tom> So we do have to think about what the performance will be like
Tom> with a lot of sort steps. I'm also worried that this use-case
Tom> may finally force us to do something about the "one work_mem per
Tom> sort node" behavior, unless we can hack things so that only one
Tom> or two sorts reach max memory consumption concurrently.

Modifying ChainAggregate so that only two sorts reach max memory
consumption concurrently seems to have been quite simple to implement,
though I'm still testing some aspects of it.

--
Andrew (irc:RhodiumToad)

#94Robert Haas
robertmhaas@gmail.com
In reply to: Andrew Gierth (#93)
Re: Final Patch for GROUPING SETS

On Mon, Dec 22, 2014 at 11:19 AM, Andrew Gierth
<andrew@tao11.riddles.org.uk> wrote:

Tom> The other reason that's a bad comparison is that I've not seen
Tom> many queries that use more than a couple of window frames,
Tom> whereas we have to expect that the number of grouping sets in
Tom> typical queries will be significantly more than "a couple".

I would be interested in seeing more good examples of the size and
type of grouping sets used in typical queries.

From what I have seen, there is interest in being able to do things
like GROUP BY CUBE(a, b, c, d) and have that be efficient. That will
require 16 different groupings, and we really want to minimize the
number of times we have to re-sort to get all of those done. For
example, if we start by sorting on (a, b, c, d), we want to then make
a single pass over the data computing the aggregates with (a, b, c,
d), (a, b, c), (a, b), (a), and () as the grouping columns.
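
A toy illustration of that single pass (my own Python sketch, count(*) only, not the executor code): once the input is sorted, one open group per prefix length suffices, flushing a result row whenever that prefix changes:

```python
def rollup_count(rows):
    # rows must be sorted on all columns; computes count(*) for every
    # prefix grouping set -- e.g. (a,b), (a), () for two columns -- in
    # one pass, keeping one open group per prefix length.
    width = len(rows[0])
    cur = [None] * (width + 1)   # open group key at each prefix length
    cnt = [0] * (width + 1)
    out = []
    for row in rows:
        for k in range(width + 1):
            key = row[:k]
            if cur[k] is not None and key != cur[k]:
                out.append((cur[k], cnt[k]))   # group boundary: emit a row
                cnt[k] = 0
            cur[k] = key
            cnt[k] += 1
    for k in range(width + 1):
        out.append((cur[k], cnt[k]))           # flush the final open groups
    return out

rows = sorted([(1, 'x'), (1, 'y'), (2, 'x')])
print(rollup_count(rows))
```

Any grouping set that is not a prefix of the chosen sort order is what forces another sort.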

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#95Atri Sharma
atri.jiit@gmail.com
In reply to: Robert Haas (#94)
Re: Final Patch for GROUPING SETS

On Tuesday, December 23, 2014, Robert Haas <robertmhaas@gmail.com> wrote:

On Mon, Dec 22, 2014 at 11:19 AM, Andrew Gierth
<andrew@tao11.riddles.org.uk> wrote:

Tom> The other reason that's a bad comparison is that I've not seen
Tom> many queries that use more than a couple of window frames,
Tom> whereas we have to expect that the number of grouping sets in
Tom> typical queries will be significantly more than "a couple".

I would be interested in seeing more good examples of the size and
type of grouping sets used in typical queries.

From what I have seen, there is interest in being able to do things
like GROUP BY CUBE(a, b, c, d) and have that be efficient. That will
require 16 different groupings, and we really want to minimize the
number of times we have to re-sort to get all of those done. For
example, if we start by sorting on (a, b, c, d), we want to then make
a single pass over the data computing the aggregates with (a, b, c,
d), (a, b, c), (a, b), (a), and () as the grouping columns.

That is exactly what the ChainAggregate node does. A set of orderings that
fits in a single ROLLUP list (like your example) is processed in a single pass.

--
Regards,

Atri
*l'apprenant*

#96Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Robert Haas (#94)
Re: Final Patch for GROUPING SETS

"Robert" == Robert Haas <robertmhaas@gmail.com> writes:

I would be interested in seeing more good examples of the size and
type of grouping sets used in typical queries.

Robert> From what I have seen, there is interest in being able to do
Robert> things like GROUP BY CUBE(a, b, c, d) and have that be
Robert> efficient.

Yes, but that's not telling me anything I didn't already know.

What I'm curious about is things like:

- what's the largest cube(...) people actually make use of in practice

- do people make much use of the ability to mix cube and rollup, or
take the cross product of multiple grouping sets

- what's the most complex GROUPING SETS clause anyone has seen in
common use

Robert> That will require 16 different groupings, and we really want
Robert> to minimize the number of times we have to re-sort to get all
Robert> of those done. For example, if we start by sorting on (a, b,
Robert> c, d), we want to then make a single pass over the data
Robert> computing the aggregates with (a, b, c, d), (a, b, c), (a,
Robert> b), (a), and () as the grouping columns.

In the case of cube(a,b,c,d), our code currently gives:

b,d,a,c: (b,d,a,c),(b,d)
a,b,d: (a,b,d),(a,b)
d,a,c: (d,a,c),(d,a),(d)
c,d: (c,d),(c)
b,c,d: (b,c,d),(b,c),(b)
a,c,b: (a,c,b),(a,c),(a),()

There is no solution in less than 6 sorts. (There are many possible
solutions in 6 sorts, but we don't attempt to prefer one over
another. The minimum number of sorts for a cube of N dimensions is
obviously N! / (r! * (N-r)!) where r = floor(N/2).)

If you want the theory: the set of grouping sets is a poset ordered by
set inclusion; what we want is a minimal partition of this poset into
chains (since any chain can be processed in one pass), which happens
to be equivalent to the problem of maximum cardinality matching in a
bipartite graph, which we solve in polynomial time with the
Hopcroft-Karp algorithm. This guarantees us a minimal solution for
any combination of grouping sets however specified, not just for
cubes.
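
For the curious, the reduction can be demonstrated in a few lines (my own toy Python, not the patch's C code, and using simple augmenting-path matching rather than Hopcroft-Karp): a minimum chain partition has size n minus a maximum matching between each grouping set and its possible successors under set inclusion:

```python
from itertools import combinations

def min_chain_partition(sets):
    # Minimum number of chains covering a poset ordered by set inclusion:
    # chains = n - |maximum bipartite matching| on the inclusion relation.
    n = len(sets)
    succ = [[v for v in range(n) if sets[u] < sets[v]] for u in range(n)]
    match = [None] * n          # match[v] = predecessor chained before v

    def augment(u, seen):
        for v in succ[u]:
            if v not in seen:
                seen.add(v)
                if match[v] is None or augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    matched = sum(augment(u, set()) for u in range(n))
    return n - matched

# The 16 grouping sets of CUBE(a,b,c,d): every subset of {a,b,c,d}
cube4 = [frozenset(c) for r in range(5) for c in combinations('abcd', r)]
print(min_chain_partition(cube4))  # -> 6, i.e. 4!/(2!*2!) sorts
```

Since inclusion is transitive, a path cover of this comparability graph is the same thing as a chain partition, which is why the matching bound is exact.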

--
Andrew (irc:RhodiumToad)

#97Noah Misch
noah@leadboat.com
In reply to: Tom Lane (#92)
Re: Final Patch for GROUPING SETS

On Mon, Dec 22, 2014 at 10:46:16AM -0500, Tom Lane wrote:

I still find the ChainAggregate approach too ugly at a system structural
level to accept, regardless of Noah's argument about number of I/O cycles
consumed. We'll be paying for that in complexity and bugs into the
indefinite future, and I wonder if it isn't going to foreclose some other
"performance opportunities" as well.

Among GROUPING SETS GroupAggregate implementations, I bet there's a nonempty
intersection between those having maintainable design and those having optimal
I/O usage, optimal memory usage, and optimal number of sorts. Let's put more
effort into finding it. I'm hearing that the shared tuplestore is
ChainAggregate's principal threat to system structure; is that right?

#98Robert Haas
robertmhaas@gmail.com
In reply to: Andrew Gierth (#96)
Re: Final Patch for GROUPING SETS

On Mon, Dec 22, 2014 at 6:57 PM, Andrew Gierth
<andrew@tao11.riddles.org.uk> wrote:

In the case of cube(a,b,c,d), our code currently gives:

b,d,a,c: (b,d,a,c),(b,d)
a,b,d: (a,b,d),(a,b)
d,a,c: (d,a,c),(d,a),(d)
c,d: (c,d),(c)
b,c,d: (b,c,d),(b,c),(b)
a,c,b: (a,c,b),(a,c),(a),()

That's pretty cool.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#99Noah Misch
noah@leadboat.com
In reply to: Noah Misch (#97)
Re: Final Patch for GROUPING SETS

On Tue, Dec 23, 2014 at 02:29:58AM -0500, Noah Misch wrote:

On Mon, Dec 22, 2014 at 10:46:16AM -0500, Tom Lane wrote:

I still find the ChainAggregate approach too ugly at a system structural
level to accept, regardless of Noah's argument about number of I/O cycles
consumed. We'll be paying for that in complexity and bugs into the
indefinite future, and I wonder if it isn't going to foreclose some other
"performance opportunities" as well.

Among GROUPING SETS GroupAggregate implementations, I bet there's a nonempty
intersection between those having maintainable design and those having optimal
I/O usage, optimal memory usage, and optimal number of sorts. Let's put more
effort into finding it. I'm hearing that the shared tuplestore is
ChainAggregate's principal threat to system structure; is that right?

The underlying algorithm, if naively expressed in terms of our executor
concepts, would call ExecProcNode() on a SortState, then feed the resulting
slot to both a GroupAggregate and to another Sort. That implies a non-tree
graph of executor nodes, which isn't going to fly anytime soon. The CTE
approach bypasses that problem by eliminating cooperation between sorts,
instead reading 2N+1 copies of the source data for N sorts. ChainAggregate is
a bit like a node having two parents, a Sort and a GroupAggregate. However,
the graph edge between ChainAggregate and its GroupAggregate is a tuplestore
instead of the usual, synchronous ExecProcNode().

Suppose one node orchestrated all sorting and aggregation. Call it a
MultiGroupAggregate for now. It wouldn't harness Sort nodes, because it
performs aggregation between tuplesort_puttupleslot() calls. Instead, it
would directly manage two Tuplesortstate, CUR and NEXT. The node would have
an initial phase similar to ExecSort(), in which it drains the outer node to
populate the first CUR. After that, it looks more like agg_retrieve_direct(),
except that CUR is the input source, and each tuple drawn is also put into
NEXT. When done with one CUR, swap CUR with NEXT and reinitialize NEXT. This
design does not add I/O consumption or require a nonstandard communication
channel between executor nodes. Tom, Andrew, does that look satisfactory?

Thanks,
nm


#100Atri Sharma
atri.jiit@gmail.com
In reply to: Noah Misch (#99)
Re: Final Patch for GROUPING SETS

ChainAggregate is a bit like a node having two parents, a Sort and a
GroupAggregate. However, the graph edge between ChainAggregate and its
GroupAggregate is a tuplestore instead of the usual, synchronous
ExecProcNode().

Well, I don't buy the two-parents theory. The Sort nodes are interleaved
among the ChainAggregate nodes, so there is still a single edge. However,
as you rightly said, there is a shared tuplestore; note, though, that only
the head-of-chain ChainAggregate has the top GroupAggregate as its parent.

Suppose one node orchestrated all sorting and aggregation. Call it a
MultiGroupAggregate for now. It wouldn't harness Sort nodes, because it
performs aggregation between tuplesort_puttupleslot() calls. Instead, it
would directly manage two Tuplesortstate, CUR and NEXT. The node would have
an initial phase similar to ExecSort(), in which it drains the outer node to
populate the first CUR. After that, it looks more like agg_retrieve_direct(),
except that CUR is the input source, and each tuple drawn is also put into
NEXT. When done with one CUR, swap CUR with NEXT and reinitialize NEXT. This
design does not add I/O consumption or require a nonstandard communication
channel between executor nodes. Tom, Andrew, does that look satisfactory?

So you are essentially proposing merging ChainAggregate and its
corresponding Sort node?

So the structure would be something like:

GroupAggregate
--> MultiGroupAgg (a,b)
----> MultiGroupAgg (c,d) ...

I am not sure I understand you correctly. Only the top-level GroupAggregate
node projects the result of the entire operation. The key to the
ChainAggregate nodes is that each one handles the grouping sets that fit a
single ROLLUP list, i.e. those that can be computed from a single sort
order. There can be multiple such lists in a single grouping-sets
operation; our current design has a single top GroupAggregate node, plus
one ChainAggregate node + Sort node per sort order. If you are proposing to
replace the GroupAggregate node and the entire stack of ChainAggregate +
Sort nodes with a single MultiGroupAggregate node, I do not see how it
would handle all the different sort orders. If you are proposing to replace
only each ChainAggregate + Sort pair with a single MultiGroupAgg node, that
still shares the tuplestore with the top-level GroupAggregate node.

I am pretty sure I have messed up my understanding of your proposal. Please
correct me if I am wrong.

--
Regards,

Atri
*l'apprenant*

#101Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Noah Misch (#99)
Re: Final Patch for GROUPING SETS

"Noah" == Noah Misch <noah@leadboat.com> writes:

Noah> Suppose one node orchestrated all sorting and aggregation.

Well, that has the downside of making it into an opaque blob, without
actually gaining much.

Noah> Call it a MultiGroupAggregate for now. It wouldn't harness
Noah> Sort nodes, because it performs aggregation between
Noah> tuplesort_puttupleslot() calls. Instead, it would directly
Noah> manage two Tuplesortstate, CUR and NEXT. The node would have
Noah> an initial phase similar to ExecSort(), in which it drains the
Noah> outer node to populate the first CUR. After that, it looks
Noah> more like agg_retrieve_direct(),

agg_retrieve_direct is already complex enough, and this would be
substantially more so, as compared to agg_retrieve_chained which is
substantially simpler.

A more serious objection is that this forecloses (or at least makes
much more complex) the future possibility of doing some grouping sets
by sorting and others by hashing. The chained approach specifically
allows for the future possibility of using a HashAggregate as the
chain head, so that for example cube(a,b) can be implemented as a
sorted agg for (a,b) and (a) and a hashed agg for (b) and (), allowing
it to be done with one sort even if the result size for (a,b) is too
big to hash.

Noah> Tom, Andrew, does that look satisfactory?

Not to me.

--
Andrew (irc:RhodiumToad)


#102Noah Misch
noah@leadboat.com
In reply to: Atri Sharma (#100)
Re: Final Patch for GROUPING SETS

On Wed, Dec 31, 2014 at 02:45:29PM +0530, Atri Sharma wrote:

Suppose one node orchestrated all sorting and aggregation. Call it a
MultiGroupAggregate for now. It wouldn't harness Sort nodes, because it
performs aggregation between tuplesort_puttupleslot() calls. Instead, it
would directly manage two Tuplesortstate, CUR and NEXT. The node would have
an initial phase similar to ExecSort(), in which it drains the outer node to
populate the first CUR. After that, it looks more like agg_retrieve_direct(),
except that CUR is the input source, and each tuple drawn is also put into
NEXT. When done with one CUR, swap CUR with NEXT and reinitialize NEXT. This
design does not add I/O consumption or require a nonstandard communication
channel between executor nodes. Tom, Andrew, does that look satisfactory?

So you are essentially proposing merging ChainAggregate and its
corresponding Sort node?

So the structure would be something like:

GroupAggregate
--> MultiGroupAgg (a,b)
----> MultiGroupAgg (c,d) ...

No.

If you are proposing
replacing GroupAggregate node + entire ChainAggregate + Sort nodes stack
with a single MultiGroupAggregate node, I am not able to understand how it
will handle all the multiple sort orders.

Yes, I was proposing that. My paragraph that you quoted above was the attempt
to explain how the node would manage multiple sort orders. If you have
specific questions about it, feel free to ask.


#103Noah Misch
noah@leadboat.com
In reply to: Andrew Gierth (#101)
Re: Final Patch for GROUPING SETS

On Wed, Dec 31, 2014 at 05:33:43PM +0000, Andrew Gierth wrote:

"Noah" == Noah Misch <noah@leadboat.com> writes:

Noah> Suppose one node orchestrated all sorting and aggregation.

Well, that has the downside of making it into an opaque blob, without
actually gaining much.

The opaque-blob criticism is valid. As for not gaining much, well, the gain I
sought was to break this stalemate. You and Tom have expressed willingness to
accept the read I/O multiplier of the CTE approach. You and I are willing to
swallow an architecture disruption, namely a tuplestore acting as a side
channel between executor nodes. Given your NACK, I agree that it fails to
move us toward consensus and therefore does not gain much. Alas.

A more serious objection is that this forecloses (or at least makes
much more complex) the future possibility of doing some grouping sets
by sorting and others by hashing. The chained approach specifically
allows for the future possibility of using a HashAggregate as the
chain head, so that for example cube(a,b) can be implemented as a
sorted agg for (a,b) and (a) and a hashed agg for (b) and (), allowing
it to be done with one sort even if the result size for (a,b) is too
big to hash.

That's a fair criticism, too. Ingesting nodeSort.c into nodeAgg.c wouldn't be
too bad, because nodeSort.c is a thin wrapper around tuplesort.c. Ingesting
nodeHash.c is not so tidy; that could entail extracting a module similar in
level to tuplesort.c, to be consumed by both executor nodes. This does raise
the good point that the GROUPING SETS _design_ ought to consider group and
hash aggregation together. Designing one in isolation carries too high of a
risk of painting the other into a corner.

Thanks,
nm


#104Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Noah Misch (#103)
Re: Final Patch for GROUPING SETS

On 12/31/14, 3:05 PM, Noah Misch wrote:

On Wed, Dec 31, 2014 at 05:33:43PM +0000, Andrew Gierth wrote:

"Noah" == Noah Misch <noah@leadboat.com> writes:

Noah> Suppose one node orchestrated all sorting and aggregation.

Well, that has the downside of making it into an opaque blob, without
actually gaining much.

The opaque-blob criticism is valid. As for not gaining much, well, the gain I
sought was to break this stalemate. You and Tom have expressed willingness to
accept the read I/O multiplier of the CTE approach. You and I are willing to
swallow an architecture disruption, namely a tuplestore acting as a side
channel between executor nodes. Given your NACK, I agree that it fails to
move us toward consensus and therefore does not gain much. Alas.

I haven't read the full discussion in depth, but isn't what we'd want here the ability to feed tuples to more than one node simultaneously? That would allow things like:

GroupAggregate
--> Sort(a) \
------------+--> Sort(a,b) -\
--> Hash(b) ----------------+
\--> SeqScan

That would allow the planner to trade off things like total memory consumption vs IO.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


#105Noah Misch
noah@leadboat.com
In reply to: Jim Nasby (#104)
Re: Final Patch for GROUPING SETS

On Fri, Jan 02, 2015 at 03:55:23PM -0600, Jim Nasby wrote:

On 12/31/14, 3:05 PM, Noah Misch wrote:

On Wed, Dec 31, 2014 at 05:33:43PM +0000, Andrew Gierth wrote:

"Noah" == Noah Misch <noah@leadboat.com> writes:

Noah> Suppose one node orchestrated all sorting and aggregation.

Well, that has the downside of making it into an opaque blob, without
actually gaining much.

The opaque-blob criticism is valid. As for not gaining much, well, the gain I
sought was to break this stalemate. You and Tom have expressed willingness to
accept the read I/O multiplier of the CTE approach. You and I are willing to
swallow an architecture disruption, namely a tuplestore acting as a side
channel between executor nodes. Given your NACK, I agree that it fails to
move us toward consensus and therefore does not gain much. Alas.

I haven't read the full discussion in depth, but isn't what we'd want here the ability to feed tuples to more than one node simultaneously?

A similar comment appeared shortly upthread. Given a planner and executor
capable of that, we would do so here. Changing the planner and executor
architecture to support it is its own large, open-ended project.


#106Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Noah Misch (#105)
1 attachment(s)
Re: Final Patch for GROUPING SETS

Herewith the latest version of the patch.

Stuff previously discussed but NOT changed in this version:

1. Still uses the ChainAggregate mechanism. As mentioned before, I'm
happy to abandon this given a better option, but I've not been given
anything to work with yet.

2. Still has GroupedVar. I have no objection in principle to taking
this out, but the details of doing so depend on the chain-agg vs.
possible Append/CTE approach (or other approach), so I don't want to
make unnecessary work by doing this prematurely and having to re-do
it.

Stuff changed:

1. Lotsa comments.

2. Node functions and declarations are re-ordered for consistency.
Renamed "Grouping" expression node to "GroupingFunc".

3. Memory usage is now constrained so that no more than 2, or in some
cases 3, sort nodes in a chain are active at one time. (I've tested
this by monitoring the memory usage for large cubes). (The case of 3
active sorts is if REWIND was requested and we added a sort node to
the input plan; while we have to re-sort the intermediate sorts on a
rewind if there are more than two of them, we keep the originally
sorted input to avoid rewinding the input plan.)

4. A problem of incorrect size estimation was corrected (thinko).

5. Tested, but provisionally rejected, the approach of preferring to
use Bitmapsets rather than integer lists. While there is a slight code
simplification (offset by greater confusion over whether we're dealing
with lists or bitmaps at any given point), and very minor performance
gain on contrived cases, the big drawback is that the order of clauses
in the query is destroyed even when doing so would surprise the user.

6. CUBE and ROLLUP are unreserved, and ruleutils is modified to
schema-qualify uses of them as plain functions in group by clauses, in
addition to adding extra parens as the previous patch did. Accordingly
there are no changes to contrib/cube or contrib/earthdistance now.
(Upgrading from older versions may require --quote-all-identifiers if
for some bizarre reason cube() appears in a group by clause of a view.)

7. Fixed a bug in handling direct args of ordered-set aggs.

8. Some variable name cleanup and general tidying.

This is now one big patch (per Tom's gripe about the previous one
being split up, even though there were reasons for that).

One possible new issue is that the memory usage constraint now means
that the explain analyze output shows no memory stats for most of the
sort nodes. This is arguably more accurate, since if each node
displayed its actual memory usage it would look like the plan uses
more memory than it actually does; but it's still a bit odd. (It
happens because the preemptive rescan call discards the actual
statistics.)

--
Andrew (irc:RhodiumToad)

Attachments:

gsp-all.patch (text/x-patch)
diff --git a/contrib/pg_stat_statements/pg_stat_statements.c b/contrib/pg_stat_statements/pg_stat_statements.c
index 2629bfc..543f3af 100644
--- a/contrib/pg_stat_statements/pg_stat_statements.c
+++ b/contrib/pg_stat_statements/pg_stat_statements.c
@@ -2200,6 +2200,7 @@ JumbleQuery(pgssJumbleState *jstate, Query *query)
 	JumbleExpr(jstate, (Node *) query->targetList);
 	JumbleExpr(jstate, (Node *) query->returningList);
 	JumbleExpr(jstate, (Node *) query->groupClause);
+	JumbleExpr(jstate, (Node *) query->groupingSets);
 	JumbleExpr(jstate, query->havingQual);
 	JumbleExpr(jstate, (Node *) query->windowClause);
 	JumbleExpr(jstate, (Node *) query->distinctClause);
@@ -2330,6 +2331,13 @@ JumbleExpr(pgssJumbleState *jstate, Node *node)
 				JumbleExpr(jstate, (Node *) expr->aggfilter);
 			}
 			break;
+		case T_GroupingFunc:
+			{
+				GroupingFunc *grpnode = (GroupingFunc *) node;
+
+				JumbleExpr(jstate, (Node *) grpnode->refs);
+			}
+			break;
 		case T_WindowFunc:
 			{
 				WindowFunc *expr = (WindowFunc *) node;
@@ -2607,6 +2615,12 @@ JumbleExpr(pgssJumbleState *jstate, Node *node)
 				JumbleExpr(jstate, (Node *) lfirst(temp));
 			}
 			break;
+		case T_IntList:
+			foreach(temp, (List *) node)
+			{
+				APP_JUMB(lfirst_int(temp));
+			}
+			break;
 		case T_SortGroupClause:
 			{
 				SortGroupClause *sgc = (SortGroupClause *) node;
@@ -2617,6 +2631,13 @@ JumbleExpr(pgssJumbleState *jstate, Node *node)
 				APP_JUMB(sgc->nulls_first);
 			}
 			break;
+		case T_GroupingSet:
+			{
+				GroupingSet *gsnode = (GroupingSet *) node;
+
+				JumbleExpr(jstate, (Node *) gsnode->content);
+			}
+			break;
 		case T_WindowClause:
 			{
 				WindowClause *wc = (WindowClause *) node;
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 5e7b000..b0c67f9 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -12063,7 +12063,9 @@ NULL baz</literallayout>(3 rows)</entry>
    <xref linkend="functions-aggregate-statistics-table">.
    The built-in ordered-set aggregate functions
    are listed in <xref linkend="functions-orderedset-table"> and
-   <xref linkend="functions-hypothetical-table">.
+   <xref linkend="functions-hypothetical-table">.  Grouping operations,
+   which are closely related to aggregate functions, are listed in
+   <xref linkend="functions-grouping-table">.
    The special syntax considerations for aggregate
    functions are explained in <xref linkend="syntax-aggregates">.
    Consult <xref linkend="tutorial-agg"> for additional introductory
@@ -13161,6 +13163,72 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab;
    to the rule specified in the <literal>ORDER BY</> clause.
   </para>
 
+  <table id="functions-grouping-table">
+   <title>Grouping Operations</title>
+
+   <tgroup cols="3">
+    <thead>
+     <row>
+      <entry>Function</entry>
+      <entry>Return Type</entry>
+      <entry>Description</entry>
+     </row>
+    </thead>
+
+    <tbody>
+
+     <row>
+      <entry>
+       <indexterm>
+        <primary>GROUPING</primary>
+       </indexterm>
+       <function>GROUPING(<replaceable class="parameter">args...</replaceable>)</function>
+      </entry>
+      <entry>
+       <type>integer</type>
+      </entry>
+      <entry>
+       Integer bitmask indicating which arguments are not being included in the current
+       grouping set
+      </entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+
+   <para>
+    Grouping operations are used in conjunction with grouping sets (see
+    <xref linkend="queries-grouping-sets">) to distinguish result rows.  The
+    arguments to the <literal>GROUPING</> operation are not actually evaluated,
+    but they must exactly match expressions given in the <literal>GROUP BY</>
+    clause of the current query level.  Bits are assigned with the rightmost
+    argument being the least-significant bit; each bit is 0 if the corresponding
+    expression is included in the grouping criteria of the grouping set generating
+    the result row, and 1 if it is not.  For example:
+<screen>
+<prompt>=&gt;</> <userinput>SELECT * FROM items_sold;</>
+ make  | model | sales
+-------+-------+-------
+ Foo   | GT    |  10
+ Foo   | Tour  |  20
+ Bar   | City  |  15
+ Bar   | Sport |  5
+(4 rows)
+
+<prompt>=&gt;</> <userinput>SELECT make, model, GROUPING(make,model), sum(sales) FROM items_sold GROUP BY ROLLUP(make,model);</>
+ make  | model | grouping | sum
+-------+-------+----------+-----
+ Foo   | GT    |        0 | 10
+ Foo   | Tour  |        0 | 20
+ Bar   | City  |        0 | 15
+ Bar   | Sport |        0 | 5
+ Foo   |       |        1 | 30
+ Bar   |       |        1 | 20
+       |       |        3 | 50
+(7 rows)
+</screen>
+   </para>
+
  </sect1>
 
  <sect1 id="functions-window">
diff --git a/doc/src/sgml/queries.sgml b/doc/src/sgml/queries.sgml
index 7dbad46..56419c7 100644
--- a/doc/src/sgml/queries.sgml
+++ b/doc/src/sgml/queries.sgml
@@ -1183,6 +1183,184 @@ SELECT product_id, p.name, (sum(s.units) * (p.price - p.cost)) AS profit
    </para>
   </sect2>
 
+  <sect2 id="queries-grouping-sets">
+   <title><literal>GROUPING SETS</>, <literal>CUBE</>, and <literal>ROLLUP</></title>
+
+   <indexterm zone="queries-grouping-sets">
+    <primary>GROUPING SETS</primary>
+   </indexterm>
+   <indexterm zone="queries-grouping-sets">
+    <primary>CUBE</primary>
+   </indexterm>
+   <indexterm zone="queries-grouping-sets">
+    <primary>ROLLUP</primary>
+   </indexterm>
+   <indexterm zone="queries-grouping-sets">
+    <primary>grouping sets</primary>
+   </indexterm>
+
+   <para>
+    More complex grouping operations than those described above are possible
+    using the concept of <firstterm>grouping sets</>.  The data selected by
+    the <literal>FROM</> and <literal>WHERE</> clauses is grouped separately
+    by each specified grouping set, aggregates computed for each group just as
+    for simple <literal>GROUP BY</> clauses, and then the results returned.
+    For example:
+<screen>
+<prompt>=&gt;</> <userinput>SELECT * FROM items_sold;</>
+ brand | size | sales
+-------+------+-------
+ Foo   | L    |  10
+ Foo   | M    |  20
+ Bar   | M    |  15
+ Bar   | L    |  5
+(4 rows)
+
+<prompt>=&gt;</> <userinput>SELECT brand, size, sum(sales) FROM items_sold GROUP BY GROUPING SETS ((brand), (size), ());</>
+ brand | size | sum
+-------+------+-----
+ Foo   |      |  30
+ Bar   |      |  20
+       | L    |  15
+       | M    |  35
+       |      |  50
+(5 rows)
+</screen>
+   </para>
+
+   <para>
+    Each sublist of <literal>GROUPING SETS</> may specify zero or more columns
+    or expressions and is interpreted the same way as though it were directly
+    in the <literal>GROUP BY</> clause.  An empty grouping set means that all
+    rows are aggregated down to a single group (which is output even if no
+    input rows were present), as described above for the case of aggregate
+    functions with no <literal>GROUP BY</> clause.
+   </para>
+
+   <para>
+    References to the grouping columns or expressions are replaced
+    by <literal>NULL</> values in result rows for grouping sets in which those
+    columns do not appear.  To distinguish which grouping a particular output
+    row resulted from, see <xref linkend="functions-grouping-table">.
+   </para>
+
+   <para>
+    A shorthand notation is provided for specifying two common types of grouping set.
+    A clause of the form
+<programlisting>
+ROLLUP ( <replaceable>e1</>, <replaceable>e2</>, <replaceable>e3</>, ... )
+</programlisting>
+    represents the given list of expressions and all prefixes of the list including
+    the empty list; thus it is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( <replaceable>e1</>, <replaceable>e2</>, <replaceable>e3</>, ... ),
+    ...
+    ( <replaceable>e1</>, <replaceable>e2</> ),
+    ( <replaceable>e1</> ),
+    ( )
+)
+</programlisting>
+    This is commonly used for analysis over hierarchical data; e.g. total
+    salary by department, division, and company-wide total.
+   </para>
+
+   <para>
+    A clause of the form
+<programlisting>
+CUBE ( <replaceable>e1</>, <replaceable>e2</>, ... )
+</programlisting>
+    represents the given list and all of its possible subsets (i.e. the power
+    set).  Thus
+<programlisting>
+CUBE ( a, b, c )
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( a, b, c ),
+    ( a, b    ),
+    ( a,    c ),
+    ( a       ),
+    (    b, c ),
+    (    b    ),
+    (       c ),
+    (         )
+)
+</programlisting>
+   </para>
+
+   <para>
+    The individual elements of a <literal>CUBE</> or <literal>ROLLUP</>
+    clause may be either individual expressions, or sub-lists of elements in
+    parentheses.  In the latter case, the sub-lists are treated as single
+    units for the purposes of generating the individual grouping sets.
+    For example:
+<programlisting>
+CUBE ( (a,b), (c,d) )
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( a, b, c, d ),
+    ( a, b       ),
+    (       c, d ),
+    (            )
+)
+</programlisting>
+    and
+<programlisting>
+ROLLUP ( a, (b,c), d )
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( a, b, c, d ),
+    ( a, b, c    ),
+    ( a          ),
+    (            )
+)
+</programlisting>
+   </para>
+
+   <para>
+    The <literal>CUBE</> and <literal>ROLLUP</> constructs can be used either
+    directly in the <literal>GROUP BY</> clause, or nested inside a
+    <literal>GROUPING SETS</> clause.  If one <literal>GROUPING SETS</> clause
+    is nested inside another, the effect is the same as if all the elements of
+    the inner clause had been written directly in the outer clause.
+   </para>
+
+   <para>
+    If multiple grouping items are specified in a single <literal>GROUP BY</>
+    clause, then the final list of grouping sets is the cross product of the
+    individual items.  For example:
+<programlisting>
+GROUP BY a, CUBE(b,c), GROUPING SETS ((d), (e))
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUP BY GROUPING SETS (
+  (a,b,c,d), (a,b,c,e),
+  (a,b,d),   (a,b,e),
+  (a,c,d),   (a,c,e),
+  (a,d),     (a,e)
+)
+</programlisting>
+   </para>
+
+  <note>
+   <para>
+    The construct <literal>(a,b)</> is normally recognized in expressions as
+    a <link linkend="sql-syntax-row-constructors">row constructor</link>.
+    Within the <literal>GROUP BY</> clause, this does not apply at the top
+    levels of expressions, and <literal>(a,b)</> is parsed as a list of
+    expressions as described above.  If for some reason you <emphasis>need</>
+    a row constructor in a grouping expression, use <literal>ROW(a,b)</>.
+   </para>
+  </note>
+  </sect2>
+
   <sect2 id="queries-window">
    <title>Window Function Processing</title>
 
diff --git a/doc/src/sgml/ref/select.sgml b/doc/src/sgml/ref/select.sgml
index 01d24a5..d2df959 100644
--- a/doc/src/sgml/ref/select.sgml
+++ b/doc/src/sgml/ref/select.sgml
@@ -37,7 +37,7 @@ SELECT [ ALL | DISTINCT [ ON ( <replaceable class="parameter">expression</replac
     [ * | <replaceable class="parameter">expression</replaceable> [ [ AS ] <replaceable class="parameter">output_name</replaceable> ] [, ...] ]
     [ FROM <replaceable class="parameter">from_item</replaceable> [, ...] ]
     [ WHERE <replaceable class="parameter">condition</replaceable> ]
-    [ GROUP BY <replaceable class="parameter">expression</replaceable> [, ...] ]
+    [ GROUP BY <replaceable class="parameter">grouping_element</replaceable> [, ...] ]
     [ HAVING <replaceable class="parameter">condition</replaceable> [, ...] ]
     [ WINDOW <replaceable class="parameter">window_name</replaceable> AS ( <replaceable class="parameter">window_definition</replaceable> ) [, ...] ]
     [ { UNION | INTERSECT | EXCEPT } [ ALL | DISTINCT ] <replaceable class="parameter">select</replaceable> ]
@@ -60,6 +60,15 @@ SELECT [ ALL | DISTINCT [ ON ( <replaceable class="parameter">expression</replac
                 [ WITH ORDINALITY ] [ [ AS ] <replaceable class="parameter">alias</replaceable> [ ( <replaceable class="parameter">column_alias</replaceable> [, ...] ) ] ]
     <replaceable class="parameter">from_item</replaceable> [ NATURAL ] <replaceable class="parameter">join_type</replaceable> <replaceable class="parameter">from_item</replaceable> [ ON <replaceable class="parameter">join_condition</replaceable> | USING ( <replaceable class="parameter">join_column</replaceable> [, ...] ) ]
 
+<phrase>and <replaceable class="parameter">grouping_element</replaceable> can be one of:</phrase>
+
+    ( )
+    <replaceable class="parameter">expression</replaceable>
+    ( <replaceable class="parameter">expression</replaceable> [, ...] )
+    ROLLUP ( { <replaceable class="parameter">expression</replaceable> | ( <replaceable class="parameter">expression</replaceable> [, ...] ) } [, ...] )
+    CUBE ( { <replaceable class="parameter">expression</replaceable> | ( <replaceable class="parameter">expression</replaceable> [, ...] ) } [, ...] )
+    GROUPING SETS ( <replaceable class="parameter">grouping_element</replaceable> [, ...] )
+
 <phrase>and <replaceable class="parameter">with_query</replaceable> is:</phrase>
 
     <replaceable class="parameter">with_query_name</replaceable> [ ( <replaceable class="parameter">column_name</replaceable> [, ...] ) ] AS ( <replaceable class="parameter">select</replaceable> | <replaceable class="parameter">values</replaceable> | <replaceable class="parameter">insert</replaceable> | <replaceable class="parameter">update</replaceable> | <replaceable class="parameter">delete</replaceable> )
@@ -621,23 +630,35 @@ WHERE <replaceable class="parameter">condition</replaceable>
    <para>
     The optional <literal>GROUP BY</literal> clause has the general form
 <synopsis>
-GROUP BY <replaceable class="parameter">expression</replaceable> [, ...]
+GROUP BY <replaceable class="parameter">grouping_element</replaceable> [, ...]
 </synopsis>
    </para>
 
    <para>
     <literal>GROUP BY</literal> will condense into a single row all
     selected rows that share the same values for the grouped
-    expressions.  <replaceable
-    class="parameter">expression</replaceable> can be an input column
-    name, or the name or ordinal number of an output column
-    (<command>SELECT</command> list item), or an arbitrary
+    expressions.  An <replaceable
+    class="parameter">expression</replaceable> used inside a
+    <replaceable class="parameter">grouping_element</replaceable>
+    can be an input column name, or the name or ordinal number of an
+    output column (<command>SELECT</command> list item), or an arbitrary
     expression formed from input-column values.  In case of ambiguity,
     a <literal>GROUP BY</literal> name will be interpreted as an
     input-column name rather than an output column name.
    </para>
 
    <para>
+    If any of <literal>GROUPING SETS</>, <literal>ROLLUP</> or
+    <literal>CUBE</> are present as grouping elements, then the
+    <literal>GROUP BY</> clause as a whole defines some number of
+    independent <replaceable>grouping sets</>.  The effect of this is
+    equivalent to constructing a <literal>UNION ALL</> between
+    subqueries with the individual grouping sets as their
+    <literal>GROUP BY</> clauses.  For further details on the handling
+    of grouping sets see <xref linkend="queries-grouping-sets">.
+   </para>
+
+   <para>
     Aggregate functions, if any are used, are computed across all rows
     making up each group, producing a separate value for each group.
     (If there are aggregate functions but no <literal>GROUP BY</literal>
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 8a0be5d..1a62292 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -79,6 +79,9 @@ static void show_merge_append_keys(MergeAppendState *mstate, List *ancestors,
 					   ExplainState *es);
 static void show_agg_keys(AggState *astate, List *ancestors,
 			  ExplainState *es);
+static void show_grouping_set_keys(PlanState *planstate, const char *qlabel,
+				int nkeys, AttrNumber *keycols, List *gsets,
+				List *ancestors, ExplainState *es);
 static void show_group_keys(GroupState *gstate, List *ancestors,
 				ExplainState *es);
 static void show_sort_group_keys(PlanState *planstate, const char *qlabel,
@@ -971,6 +974,10 @@ ExplainNode(PlanState *planstate, List *ancestors,
 					pname = "GroupAggregate";
 					strategy = "Sorted";
 					break;
+				case AGG_CHAINED:
+					pname = "ChainAggregate";
+					strategy = "Chained";
+					break;
 				case AGG_HASHED:
 					pname = "HashAggregate";
 					strategy = "Hashed";
@@ -1807,17 +1814,76 @@ show_agg_keys(AggState *astate, List *ancestors,
 {
 	Agg		   *plan = (Agg *) astate->ss.ps.plan;
 
-	if (plan->numCols > 0)
+	if (plan->numCols > 0 || plan->groupingSets)
 	{
 		/* The key columns refer to the tlist of the child plan */
 		ancestors = lcons(astate, ancestors);
-		show_sort_group_keys(outerPlanState(astate), "Group Key",
-							 plan->numCols, plan->grpColIdx,
-							 ancestors, es);
+		if (plan->groupingSets)
+			show_grouping_set_keys(outerPlanState(astate), "Grouping Sets",
+								   plan->numCols, plan->grpColIdx,
+								   plan->groupingSets,
+								   ancestors, es);
+		else
+			show_sort_group_keys(outerPlanState(astate), "Group Key",
+								 plan->numCols, plan->grpColIdx,
+								 ancestors, es);
 		ancestors = list_delete_first(ancestors);
 	}
 }
 
+static void
+show_grouping_set_keys(PlanState *planstate, const char *qlabel,
+					   int nkeys, AttrNumber *keycols, List *gsets,
+					   List *ancestors, ExplainState *es)
+{
+	Plan	   *plan = planstate->plan;
+	List	   *context;
+	bool		useprefix;
+	char	   *exprstr;
+	ListCell   *lc;
+
+	if (gsets == NIL)
+		return;
+
+	/* Set up deparsing context */
+	context = deparse_context_for_planstate((Node *) planstate,
+											ancestors,
+											es->rtable,
+											es->rtable_names);
+	useprefix = (list_length(es->rtable) > 1 || es->verbose);
+
+	ExplainOpenGroup("Grouping Sets", "Grouping Sets", false, es);
+
+	foreach(lc, gsets)
+	{
+		List	   *result = NIL;
+		ListCell   *lc2;
+
+		foreach(lc2, (List *) lfirst(lc))
+		{
+			Index		i = lfirst_int(lc2);
+			AttrNumber	keyresno = keycols[i];
+			TargetEntry *target = get_tle_by_resno(plan->targetlist,
+												   keyresno);
+
+			if (!target)
+				elog(ERROR, "no tlist entry for key %d", keyresno);
+			/* Deparse the expression, showing any top-level cast */
+			exprstr = deparse_expression((Node *) target->expr, context,
+										 useprefix, true);
+
+			result = lappend(result, exprstr);
+		}
+
+		if (result == NIL && es->format == EXPLAIN_FORMAT_TEXT)
+			ExplainPropertyText("Group Key", "()", es);
+		else
+			ExplainPropertyListNested("Group Key", result, es);
+	}
+
+	ExplainCloseGroup("Grouping Sets", "Grouping Sets", false, es);
+}
+
 /*
  * Show the grouping keys for a Group node.
  */
@@ -2371,6 +2437,52 @@ ExplainPropertyList(const char *qlabel, List *data, ExplainState *es)
 }
 
 /*
+ * Explain a property that takes the form of a list of unlabeled items within
+ * another list.  "data" is a list of C strings.
+ */
+void
+ExplainPropertyListNested(const char *qlabel, List *data, ExplainState *es)
+{
+	ListCell   *lc;
+	bool		first = true;
+
+	switch (es->format)
+	{
+		case EXPLAIN_FORMAT_TEXT:
+		case EXPLAIN_FORMAT_XML:
+			ExplainPropertyList(qlabel, data, es);
+			return;
+
+		case EXPLAIN_FORMAT_JSON:
+			ExplainJSONLineEnding(es);
+			appendStringInfoSpaces(es->str, es->indent * 2);
+			appendStringInfoChar(es->str, '[');
+			foreach(lc, data)
+			{
+				if (!first)
+					appendStringInfoString(es->str, ", ");
+				escape_json(es->str, (const char *) lfirst(lc));
+				first = false;
+			}
+			appendStringInfoChar(es->str, ']');
+			break;
+
+		case EXPLAIN_FORMAT_YAML:
+			ExplainYAMLLineStarting(es);
+			appendStringInfoString(es->str, "- [");
+			foreach(lc, data)
+			{
+				if (!first)
+					appendStringInfoString(es->str, ", ");
+				escape_yaml(es->str, (const char *) lfirst(lc));
+				first = false;
+			}
+			appendStringInfoChar(es->str, ']');
+			break;
+	}
+}
+
+/*
  * Explain a simple property.
  *
  * If "numeric" is true, the value is a number (or other value that
diff --git a/src/backend/executor/execQual.c b/src/backend/executor/execQual.c
index 0e7400f..d4fb054 100644
--- a/src/backend/executor/execQual.c
+++ b/src/backend/executor/execQual.c
@@ -75,6 +75,8 @@ static Datum ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
 				  bool *isNull, ExprDoneCond *isDone);
 static Datum ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
 					  bool *isNull, ExprDoneCond *isDone);
+static Datum ExecEvalScalarGroupedVarFast(ExprState *exprstate, ExprContext *econtext,
+					  bool *isNull, ExprDoneCond *isDone);
 static Datum ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate,
 					ExprContext *econtext,
 					bool *isNull, ExprDoneCond *isDone);
@@ -182,6 +184,9 @@ static Datum ExecEvalArrayCoerceExpr(ArrayCoerceExprState *astate,
 						bool *isNull, ExprDoneCond *isDone);
 static Datum ExecEvalCurrentOfExpr(ExprState *exprstate, ExprContext *econtext,
 					  bool *isNull, ExprDoneCond *isDone);
+static Datum ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
+						ExprContext *econtext,
+						bool *isNull, ExprDoneCond *isDone);
 
 
 /* ----------------------------------------------------------------
@@ -569,6 +574,8 @@ ExecEvalWindowFunc(WindowFuncExprState *wfunc, ExprContext *econtext,
  * Note: ExecEvalScalarVar is executed only the first time through in a given
  * plan; it changes the ExprState's function pointer to pass control directly
  * to ExecEvalScalarVarFast after making one-time checks.
+ *
+ * We share this code with GroupedVar for simplicity.
  * ----------------------------------------------------------------
  */
 static Datum
@@ -646,8 +653,24 @@ ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
 		}
 	}
 
-	/* Skip the checking on future executions of node */
-	exprstate->evalfunc = ExecEvalScalarVarFast;
+	if (IsA(variable, GroupedVar))
+	{
+		Assert(variable->varno == OUTER_VAR);
+
+		/* Skip the checking on future executions of node */
+		exprstate->evalfunc = ExecEvalScalarGroupedVarFast;
+
+		if (!bms_is_member(attnum, econtext->grouped_cols))
+		{
+			*isNull = true;
+			return (Datum) 0;
+		}
+	}
+	else
+	{
+		/* Skip the checking on future executions of node */
+		exprstate->evalfunc = ExecEvalScalarVarFast;
+	}
 
 	/* Fetch the value from the slot */
 	return slot_getattr(slot, attnum, isNull);
@@ -695,6 +718,31 @@ ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
 	return slot_getattr(slot, attnum, isNull);
 }
 
+static Datum
+ExecEvalScalarGroupedVarFast(ExprState *exprstate, ExprContext *econtext,
+							 bool *isNull, ExprDoneCond *isDone)
+{
+	GroupedVar *variable = (GroupedVar *) exprstate->expr;
+	TupleTableSlot *slot;
+	AttrNumber	attnum;
+
+	if (isDone)
+		*isDone = ExprSingleResult;
+
+	slot = econtext->ecxt_outertuple;
+
+	attnum = variable->varattno;
+
+	if (!bms_is_member(attnum, econtext->grouped_cols))
+	{
+		*isNull = true;
+		return (Datum) 0;
+	}
+
+	/* Fetch the value from the slot */
+	return slot_getattr(slot, attnum, isNull);
+}
+
 /* ----------------------------------------------------------------
  *		ExecEvalWholeRowVar
  *
@@ -3024,6 +3072,44 @@ ExecEvalCaseTestExpr(ExprState *exprstate,
 	return econtext->caseValue_datum;
 }
 
+/*
+ * ExecEvalGroupingFuncExpr
+ *
+ * Return a bitmask with a bit for each (unevaluated) argument expression
+ * (rightmost arg is least significant bit).
+ *
+ * A bit is set if the corresponding expression is NOT part of the set of
+ * grouping expressions in the current grouping set.
+ */
+
+static Datum
+ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
+						 ExprContext *econtext,
+						 bool *isNull,
+						 ExprDoneCond *isDone)
+{
+	int result = 0;
+	int attnum = 0;
+	ListCell *lc;
+
+	if (isDone)
+		*isDone = ExprSingleResult;
+
+	*isNull = false;
+
+	foreach(lc, (gstate->clauses))
+	{
+		attnum = lfirst_int(lc);
+
+		result = result << 1;
+
+		if (!bms_is_member(attnum, econtext->grouped_cols))
+			result = result | 1;
+	}
+
+	return Int32GetDatum(result);
+}
+
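The bitmask computed by ExecEvalGroupingFuncExpr above can be modelled directly; this sketch mirrors the shift-and-or loop (illustrative names, not patch code):

```python
def grouping(arg_attnums, grouped_cols):
    """Model of ExecEvalGroupingFuncExpr: one bit per argument, leftmost
    argument in the most significant position; a bit is set when that
    expression is absent from the current grouping set."""
    result = 0
    for attnum in arg_attnums:
        result <<= 1
        if attnum not in grouped_cols:
            result |= 1
    return result

# GROUPING(a, b) with attnums a=1, b=2, evaluated in the (a) grouping set:
print(grouping([1, 2], {1}))  # -> 1 (only b is ungrouped)
```

Note that the arguments themselves are never evaluated; only their membership in the current grouping set matters.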
 /* ----------------------------------------------------------------
  *		ExecEvalArray - ARRAY[] expressions
  * ----------------------------------------------------------------
@@ -4423,6 +4509,11 @@ ExecInitExpr(Expr *node, PlanState *parent)
 				state->evalfunc = ExecEvalScalarVar;
 			}
 			break;
+		case T_GroupedVar:
+			Assert(((Var *) node)->varattno != InvalidAttrNumber);
+			state = (ExprState *) makeNode(ExprState);
+			state->evalfunc = ExecEvalScalarVar;
+			break;
 		case T_Const:
 			state = (ExprState *) makeNode(ExprState);
 			state->evalfunc = ExecEvalConst;
@@ -4491,6 +4582,27 @@ ExecInitExpr(Expr *node, PlanState *parent)
 				state = (ExprState *) astate;
 			}
 			break;
+		case T_GroupingFunc:
+			{
+				GroupingFunc *grp_node = (GroupingFunc *) node;
+				GroupingFuncExprState *grp_state = makeNode(GroupingFuncExprState);
+				Agg		   *agg = NULL;
+
+				if (!parent || !IsA(parent->plan, Agg))
+					elog(ERROR, "parent of GROUPING is not Agg node");
+
+				agg = (Agg *) (parent->plan);
+
+				if (agg->groupingSets)
+					grp_state->clauses = grp_node->cols;
+				else
+					grp_state->clauses = NIL;
+
+				state = (ExprState *) grp_state;
+				state->evalfunc = (ExprStateEvalFunc) ExecEvalGroupingFuncExpr;
+			}
+			break;
 		case T_WindowFunc:
 			{
 				WindowFunc *wfunc = (WindowFunc *) node;
diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c
index 32697dd..91fa568 100644
--- a/src/backend/executor/execUtils.c
+++ b/src/backend/executor/execUtils.c
@@ -151,6 +151,7 @@ CreateExecutorState(void)
 	estate->es_epqTupleSet = NULL;
 	estate->es_epqScanDone = NULL;
 
+	estate->agg_chain_head = NULL;
 	/*
 	 * Return the executor state structure
 	 */
@@ -651,9 +652,10 @@ get_last_attnums(Node *node, ProjectionInfo *projInfo)
 	/*
 	 * Don't examine the arguments or filters of Aggrefs or WindowFuncs,
 	 * because those do not represent expressions to be evaluated within the
-	 * overall targetlist's econtext.
+	 * overall targetlist's econtext.  GroupingFunc arguments are never
+	 * evaluated at all.
 	 */
-	if (IsA(node, Aggref))
+	if (IsA(node, Aggref) || IsA(node, GroupingFunc))
 		return false;
 	if (IsA(node, WindowFunc))
 		return false;
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index 08088ea..63cefaf 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -45,15 +45,16 @@
  *	  needed to allow resolution of a polymorphic aggregate's result type.
  *
  *	  We compute aggregate input expressions and run the transition functions
- *	  in a temporary econtext (aggstate->tmpcontext).  This is reset at
- *	  least once per input tuple, so when the transvalue datatype is
+ *	  in a temporary econtext (aggstate->tmpcontext).  This is reset at least
+ *	  once per input tuple, so when the transvalue datatype is
  *	  pass-by-reference, we have to be careful to copy it into a longer-lived
- *	  memory context, and free the prior value to avoid memory leakage.
- *	  We store transvalues in the memory context aggstate->aggcontext,
- *	  which is also used for the hashtable structures in AGG_HASHED mode.
- *	  The node's regular econtext (aggstate->ss.ps.ps_ExprContext)
- *	  is used to run finalize functions and compute the output tuple;
- *	  this context can be reset once per output tuple.
+ *	  memory context, and free the prior value to avoid memory leakage.  We
+ *	  store transvalues in the memory contexts aggstate->aggcontexts (one per
+ *	  grouping set, see below), which are also used for the hashtable structures
+ *	  in AGG_HASHED mode.  The node's regular econtext
+ *	  (aggstate->ss.ps.ps_ExprContext) is used to run finalize functions and
+ *	  compute the output tuple; this context can be reset once per output
+ *	  tuple.
  *
  *	  The executor's AggState node is passed as the fmgr "context" value in
  *	  all transfunc and finalfunc calls.  It is not recommended that the
@@ -84,6 +85,48 @@
  *	  need some fallback logic to use this, since there's no Aggref node
  *	  for a window function.)
  *
+ *	  Grouping sets:
+ *
+ *	  A list of grouping sets which is structurally equivalent to a ROLLUP
+ *	  clause (e.g. (a,b,c), (a,b), (a)) can be processed in a single pass over
+ *	  ordered data.  We do this by keeping a separate set of transition values
+ *	  for each grouping set being concurrently processed; for each input tuple
+ *	  we update them all, and on group boundaries we reset some initial subset
+ *	  of the states (the list of grouping sets is ordered from most specific to
+ *	  least specific).  One AGG_SORTED node thus handles any number of grouping
+ *	  sets as long as they share a sort order.
+ *
+ *	  To handle multiple grouping sets that _don't_ share a sort order, we use
+ *	  a different strategy.  An AGG_CHAINED node receives rows in sorted order
+ *	  and returns them unchanged, but computes transition values for its own
+ *	  list of grouping sets.  At group boundaries, rather than returning the
+ *	  aggregated row (which is incompatible with the input rows), it writes it
+ *	  to a side-channel in the form of a tuplestore.  Thus, a number of
+ *	  AGG_CHAINED nodes are associated with a single AGG_SORTED node (the
+ *	  "chain head"), which creates the side channel and, when it has returned
+ *	  all of its own data, returns the tuples from the tuplestore to its own
+ *	  caller.
+ *
+ *	  (Because the AGG_CHAINED node does not project aggregate values into the
+ *	  main executor path, its targetlist and qual are dummy, and it gets the
+ *	  real aggregate targetlist and qual from the chain head node.)
+ *
+ *	  In order to avoid excess memory consumption from a chain of alternating
+ *	  Sort and AGG_CHAINED nodes, we reset each child Sort node preemptively,
+ *	  allowing us to cap the memory usage for all the sorts in the chain at
+ *	  twice the usage for a single node.
+ *
+ *	  From the perspective of aggregate transition and final functions, the
+ *	  only issue regarding grouping sets is this: a single call site (flinfo)
+ *	  of an aggregate function may be used for updating several different
+ *	  transition values in turn. So the function must not cache in the flinfo
+ *	  anything which logically belongs as part of the transition value (most
+ *	  importantly, the memory context in which the transition value exists).
+ *	  The support API functions (AggCheckCallContext, AggRegisterCallback) are
+ *	  sensitive to the grouping set for which the aggregate function is
+ *	  currently being called.
+ *
+ *	  TODO: AGG_HASHED doesn't support multiple grouping sets yet.
  *
  * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
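The AGG_SORTED strategy described in the comment above — concurrent transition states, with only a most-specific prefix reset at each group boundary — can be sketched for a count() aggregate as follows (purely illustrative; the names and dict-based rows are assumptions, not the executor's representation):

```python
def rollup_count(rows, sets):
    # sets: grouping sets ordered most specific first,
    # e.g. [("a", "b"), ("a",), ()]; rows already sorted on ("a", "b").
    states = [0] * len(sets)          # count() transition value per set
    current = None
    out = []
    for row in rows:
        if current is not None:
            boundary = 0              # number of leading sets whose group ended
            for i, cols in enumerate(sets):
                if any(current[c] != row[c] for c in cols):
                    boundary = i + 1
            for i in range(boundary):
                out.append((tuple(current[c] for c in sets[i]), states[i]))
                states[i] = 0         # reset only the most-specific prefix
        current = row
        for i in range(len(states)):
            states[i] += 1            # advance every set's transition state
    if current is not None:           # emit the final group of every set
        for i, cols in enumerate(sets):
            out.append((tuple(current[c] for c in cols), states[i]))
    return out

rows = [{"a": 1, "b": 1}, {"a": 1, "b": 2}, {"a": 2, "b": 1}]
print(rollup_count(rows, [("a", "b"), ("a",), ()]))
# -> [((1, 1), 1), ((1, 2), 1), ((1,), 2), ((2, 1), 1), ((2,), 1), ((), 3)]
```

The prefix-reset works because the sets are nested: any boundary visible to a less specific set is also a boundary for every more specific one.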
@@ -241,9 +284,11 @@ typedef struct AggStatePerAggData
 	 * then at completion of the input tuple group, we scan the sorted values,
 	 * eliminate duplicates if needed, and run the transition function on the
 	 * rest.
+	 *
+	 * We need a separate tuplesort for each grouping set.
 	 */
 
-	Tuplesortstate *sortstate;	/* sort object, if DISTINCT or ORDER BY */
+	Tuplesortstate **sortstates;	/* sort objects, if DISTINCT or ORDER BY */
 
 	/*
 	 * This field is a pre-initialized FunctionCallInfo struct used for
@@ -304,7 +349,8 @@ typedef struct AggHashEntryData
 
 static void initialize_aggregates(AggState *aggstate,
 					  AggStatePerAgg peragg,
-					  AggStatePerGroup pergroup);
+					  AggStatePerGroup pergroup,
+					  int numReset);
 static void advance_transition_function(AggState *aggstate,
 							AggStatePerAgg peraggstate,
 							AggStatePerGroup pergroupstate);
@@ -325,6 +371,7 @@ static void build_hash_table(AggState *aggstate);
 static AggHashEntry lookup_hash_entry(AggState *aggstate,
 				  TupleTableSlot *inputslot);
 static TupleTableSlot *agg_retrieve_direct(AggState *aggstate);
+static TupleTableSlot *agg_retrieve_chained(AggState *aggstate);
 static void agg_fill_hash_table(AggState *aggstate);
 static TupleTableSlot *agg_retrieve_hash_table(AggState *aggstate);
 static Datum GetAggInitVal(Datum textInitVal, Oid transtype);
@@ -333,86 +380,105 @@ static Datum GetAggInitVal(Datum textInitVal, Oid transtype);
 /*
  * Initialize all aggregates for a new group of input values.
  *
+ * If there are multiple grouping sets, we initialize only the first numReset
+ * of them (the grouping sets are ordered so that the most specific one, which
+ * is reset most often, is first). As a convenience, if numReset < 1, we
+ * reinitialize all sets.
+ *
  * When called, CurrentMemoryContext should be the per-query context.
  */
 static void
 initialize_aggregates(AggState *aggstate,
 					  AggStatePerAgg peragg,
-					  AggStatePerGroup pergroup)
+					  AggStatePerGroup pergroup,
+					  int numReset)
 {
 	int			aggno;
+	int         numGroupingSets = Max(aggstate->numsets, 1);
+	int         setno = 0;
+
+	if (numReset < 1)
+		numReset = numGroupingSets;
 
 	for (aggno = 0; aggno < aggstate->numaggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &peragg[aggno];
-		AggStatePerGroup pergroupstate = &pergroup[aggno];
 
 		/*
 		 * Start a fresh sort operation for each DISTINCT/ORDER BY aggregate.
 		 */
 		if (peraggstate->numSortCols > 0)
 		{
-			/*
-			 * In case of rescan, maybe there could be an uncompleted sort
-			 * operation?  Clean it up if so.
-			 */
-			if (peraggstate->sortstate)
-				tuplesort_end(peraggstate->sortstate);
+			for (setno = 0; setno < numReset; setno++)
+			{
+				/*
+				 * In case of rescan, maybe there could be an uncompleted sort
+				 * operation?  Clean it up if so.
+				 */
+				if (peraggstate->sortstates[setno])
+					tuplesort_end(peraggstate->sortstates[setno]);
 
-			/*
-			 * We use a plain Datum sorter when there's a single input column;
-			 * otherwise sort the full tuple.  (See comments for
-			 * process_ordered_aggregate_single.)
-			 */
-			peraggstate->sortstate =
-				(peraggstate->numInputs == 1) ?
-				tuplesort_begin_datum(peraggstate->evaldesc->attrs[0]->atttypid,
-									  peraggstate->sortOperators[0],
-									  peraggstate->sortCollations[0],
-									  peraggstate->sortNullsFirst[0],
-									  work_mem, false) :
-				tuplesort_begin_heap(peraggstate->evaldesc,
-									 peraggstate->numSortCols,
-									 peraggstate->sortColIdx,
-									 peraggstate->sortOperators,
-									 peraggstate->sortCollations,
-									 peraggstate->sortNullsFirst,
-									 work_mem, false);
+				/*
+				 * We use a plain Datum sorter when there's a single input column;
+				 * otherwise sort the full tuple.  (See comments for
+				 * process_ordered_aggregate_single.)
+				 */
+				peraggstate->sortstates[setno] =
+					(peraggstate->numInputs == 1) ?
+					tuplesort_begin_datum(peraggstate->evaldesc->attrs[0]->atttypid,
+										  peraggstate->sortOperators[0],
+										  peraggstate->sortCollations[0],
+										  peraggstate->sortNullsFirst[0],
+										  work_mem, false) :
+					tuplesort_begin_heap(peraggstate->evaldesc,
+										 peraggstate->numSortCols,
+										 peraggstate->sortColIdx,
+										 peraggstate->sortOperators,
+										 peraggstate->sortCollations,
+										 peraggstate->sortNullsFirst,
+										 work_mem, false);
+			}
 		}
 
-		/*
-		 * (Re)set transValue to the initial value.
-		 *
-		 * Note that when the initial value is pass-by-ref, we must copy it
-		 * (into the aggcontext) since we will pfree the transValue later.
-		 */
-		if (peraggstate->initValueIsNull)
-			pergroupstate->transValue = peraggstate->initValue;
-		else
+		for (setno = 0; setno < numReset; setno++)
 		{
-			MemoryContext oldContext;
+			AggStatePerGroup pergroupstate = &pergroup[aggno + (setno * (aggstate->numaggs))];
 
-			oldContext = MemoryContextSwitchTo(aggstate->aggcontext);
-			pergroupstate->transValue = datumCopy(peraggstate->initValue,
-												  peraggstate->transtypeByVal,
-												  peraggstate->transtypeLen);
-			MemoryContextSwitchTo(oldContext);
-		}
-		pergroupstate->transValueIsNull = peraggstate->initValueIsNull;
+			/*
+			 * (Re)set transValue to the initial value.
+			 *
+			 * Note that when the initial value is pass-by-ref, we must copy it
+			 * (into the aggcontext) since we will pfree the transValue later.
+			 */
+			if (peraggstate->initValueIsNull)
+				pergroupstate->transValue = peraggstate->initValue;
+			else
+			{
+				MemoryContext oldContext;
 
-		/*
-		 * If the initial value for the transition state doesn't exist in the
-		 * pg_aggregate table then we will let the first non-NULL value
-		 * returned from the outer procNode become the initial value. (This is
-		 * useful for aggregates like max() and min().) The noTransValue flag
-		 * signals that we still need to do this.
-		 */
-		pergroupstate->noTransValue = peraggstate->initValueIsNull;
+				oldContext = MemoryContextSwitchTo(aggstate->aggcontexts[setno]->ecxt_per_tuple_memory);
+				pergroupstate->transValue = datumCopy(peraggstate->initValue,
+													  peraggstate->transtypeByVal,
+													  peraggstate->transtypeLen);
+				MemoryContextSwitchTo(oldContext);
+			}
+			pergroupstate->transValueIsNull = peraggstate->initValueIsNull;
+
+			/*
+			 * If the initial value for the transition state doesn't exist in the
+			 * pg_aggregate table then we will let the first non-NULL value
+			 * returned from the outer procNode become the initial value. (This is
+			 * useful for aggregates like max() and min().) The noTransValue flag
+			 * signals that we still need to do this.
+			 */
+			pergroupstate->noTransValue = peraggstate->initValueIsNull;
+		}
 	}
 }
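The flat pergroup addressing used in the loop above (pergroup[aggno + setno * numaggs]) lays out one per-group transition state per (aggregate, grouping set) pair, set-major. A tiny sketch of that indexing (illustrative, not patch code):

```python
num_aggs, num_sets = 2, 3
# One slot per (aggregate, grouping set), set-major, matching
# pergroup[aggno + setno * numAggs] in initialize_aggregates().
pergroup = ["agg%d/set%d" % (a, s)
            for s in range(num_sets) for a in range(num_aggs)]

def slot(aggno, setno):
    return pergroup[aggno + setno * num_aggs]

print(slot(1, 2))  # -> 'agg1/set2'
print(slot(0, 0))  # -> 'agg0/set0'
```

Because states for one grouping set are contiguous, resetting the first numReset sets touches a contiguous prefix of slots for each aggregate's stride.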
 
 /*
- * Given new input value(s), advance the transition function of an aggregate.
+ * Given new input value(s), advance the transition function of one aggregate
+ * within one grouping set only (already set in aggstate->current_set).
  *
  * The new values (and null flags) have been preloaded into argument positions
  * 1 and up in peraggstate->transfn_fcinfo, so that we needn't copy them again
@@ -455,7 +521,7 @@ advance_transition_function(AggState *aggstate,
 			 * We must copy the datum into aggcontext if it is pass-by-ref. We
 			 * do not need to pfree the old transValue, since it's NULL.
 			 */
-			oldContext = MemoryContextSwitchTo(aggstate->aggcontext);
+			oldContext = MemoryContextSwitchTo(aggstate->aggcontexts[aggstate->current_set]->ecxt_per_tuple_memory);
 			pergroupstate->transValue = datumCopy(fcinfo->arg[1],
 												  peraggstate->transtypeByVal,
 												  peraggstate->transtypeLen);
@@ -503,7 +569,7 @@ advance_transition_function(AggState *aggstate,
 	{
 		if (!fcinfo->isnull)
 		{
-			MemoryContextSwitchTo(aggstate->aggcontext);
+			MemoryContextSwitchTo(aggstate->aggcontexts[aggstate->current_set]->ecxt_per_tuple_memory);
 			newVal = datumCopy(newVal,
 							   peraggstate->transtypeByVal,
 							   peraggstate->transtypeLen);
@@ -530,11 +596,13 @@ static void
 advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 {
 	int			aggno;
+	int         setno = 0;
+	int         numGroupingSets = Max(aggstate->numsets, 1);
+	int         numAggs = aggstate->numaggs;
 
-	for (aggno = 0; aggno < aggstate->numaggs; aggno++)
+	for (aggno = 0; aggno < numAggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &aggstate->peragg[aggno];
-		AggStatePerGroup pergroupstate = &pergroup[aggno];
 		ExprState  *filter = peraggstate->aggrefstate->aggfilter;
 		int			numTransInputs = peraggstate->numTransInputs;
 		int			i;
@@ -578,13 +646,16 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 					continue;
 			}
 
-			/* OK, put the tuple into the tuplesort object */
-			if (peraggstate->numInputs == 1)
-				tuplesort_putdatum(peraggstate->sortstate,
-								   slot->tts_values[0],
-								   slot->tts_isnull[0]);
-			else
-				tuplesort_puttupleslot(peraggstate->sortstate, slot);
+			for (setno = 0; setno < numGroupingSets; setno++)
+			{
+				/* OK, put the tuple into the tuplesort object */
+				if (peraggstate->numInputs == 1)
+					tuplesort_putdatum(peraggstate->sortstates[setno],
+									   slot->tts_values[0],
+									   slot->tts_isnull[0]);
+				else
+					tuplesort_puttupleslot(peraggstate->sortstates[setno], slot);
+			}
 		}
 		else
 		{
@@ -600,7 +671,14 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 				fcinfo->argnull[i + 1] = slot->tts_isnull[i];
 			}
 
-			advance_transition_function(aggstate, peraggstate, pergroupstate);
+			for (setno = 0; setno < numGroupingSets; setno++)
+			{
+				AggStatePerGroup pergroupstate = &pergroup[aggno + (setno * numAggs)];
+
+				aggstate->current_set = setno;
+
+				advance_transition_function(aggstate, peraggstate, pergroupstate);
+			}
 		}
 	}
 }
@@ -623,6 +701,9 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
  * is around 300% faster.  (The speedup for by-reference types is less
  * but still noticeable.)
  *
+ * This function handles only one grouping set (already set in
+ * aggstate->current_set).
+ *
  * When called, CurrentMemoryContext should be the per-query context.
  */
 static void
@@ -642,7 +723,7 @@ process_ordered_aggregate_single(AggState *aggstate,
 
 	Assert(peraggstate->numDistinctCols < 2);
 
-	tuplesort_performsort(peraggstate->sortstate);
+	tuplesort_performsort(peraggstate->sortstates[aggstate->current_set]);
 
 	/* Load the column into argument 1 (arg 0 will be transition value) */
 	newVal = fcinfo->arg + 1;
@@ -654,7 +735,7 @@ process_ordered_aggregate_single(AggState *aggstate,
 	 * pfree them when they are no longer needed.
 	 */
 
-	while (tuplesort_getdatum(peraggstate->sortstate, true,
+	while (tuplesort_getdatum(peraggstate->sortstates[aggstate->current_set], true,
 							  newVal, isNull))
 	{
 		/*
@@ -698,8 +779,8 @@ process_ordered_aggregate_single(AggState *aggstate,
 	if (!oldIsNull && !peraggstate->inputtypeByVal)
 		pfree(DatumGetPointer(oldVal));
 
-	tuplesort_end(peraggstate->sortstate);
-	peraggstate->sortstate = NULL;
+	tuplesort_end(peraggstate->sortstates[aggstate->current_set]);
+	peraggstate->sortstates[aggstate->current_set] = NULL;
 }
 
 /*
@@ -709,6 +790,9 @@ process_ordered_aggregate_single(AggState *aggstate,
  * sort, read out the values in sorted order, and run the transition
  * function on each value (applying DISTINCT if appropriate).
  *
+ * This function handles only one grouping set (already set in
+ * aggstate->current_set).
+ *
  * When called, CurrentMemoryContext should be the per-query context.
  */
 static void
@@ -725,13 +809,13 @@ process_ordered_aggregate_multi(AggState *aggstate,
 	bool		haveOldValue = false;
 	int			i;
 
-	tuplesort_performsort(peraggstate->sortstate);
+	tuplesort_performsort(peraggstate->sortstates[aggstate->current_set]);
 
 	ExecClearTuple(slot1);
 	if (slot2)
 		ExecClearTuple(slot2);
 
-	while (tuplesort_gettupleslot(peraggstate->sortstate, true, slot1))
+	while (tuplesort_gettupleslot(peraggstate->sortstates[aggstate->current_set], true, slot1))
 	{
 		/*
 		 * Extract the first numTransInputs columns as datums to pass to the
@@ -779,13 +863,16 @@ process_ordered_aggregate_multi(AggState *aggstate,
 	if (slot2)
 		ExecClearTuple(slot2);
 
-	tuplesort_end(peraggstate->sortstate);
-	peraggstate->sortstate = NULL;
+	tuplesort_end(peraggstate->sortstates[aggstate->current_set]);
+	peraggstate->sortstates[aggstate->current_set] = NULL;
 }
 
 /*
  * Compute the final value of one aggregate.
  *
+ * This function handles only one grouping set (already set in
+ * aggstate->current_set).
+ *
  * The finalfunction will be run, and the result delivered, in the
  * output-tuple context; caller's CurrentMemoryContext does not matter.
  */
@@ -832,7 +919,7 @@ finalize_aggregate(AggState *aggstate,
 		/* set up aggstate->curperagg for AggGetAggref() */
 		aggstate->curperagg = peraggstate;
 
-		InitFunctionCallInfoData(fcinfo, &(peraggstate->finalfn),
+		InitFunctionCallInfoData(fcinfo, &peraggstate->finalfn,
 								 numFinalArgs,
 								 peraggstate->aggCollation,
 								 (void *) aggstate, NULL);
@@ -916,7 +1003,8 @@ find_unaggregated_cols_walker(Node *node, Bitmapset **colnos)
 		*colnos = bms_add_member(*colnos, var->varattno);
 		return false;
 	}
-	if (IsA(node, Aggref))		/* do not descend into aggregate exprs */
+	if (IsA(node, Aggref) || IsA(node, GroupingFunc))
+		/* do not descend into aggregate exprs */
 		return false;
 	return expression_tree_walker(node, find_unaggregated_cols_walker,
 								  (void *) colnos);
@@ -946,7 +1034,7 @@ build_hash_table(AggState *aggstate)
 											  aggstate->hashfunctions,
 											  node->numGroups,
 											  entrysize,
-											  aggstate->aggcontext,
+											  aggstate->aggcontexts[0]->ecxt_per_tuple_memory,
 											  tmpmem);
 }
 
@@ -1057,7 +1145,7 @@ lookup_hash_entry(AggState *aggstate, TupleTableSlot *inputslot)
 	if (isnew)
 	{
 		/* initialize aggregates for new tuple group */
-		initialize_aggregates(aggstate, aggstate->peragg, entry->pergroup);
+		initialize_aggregates(aggstate, aggstate->peragg, entry->pergroup, 0);
 	}
 
 	return entry;
@@ -1079,6 +1167,8 @@ lookup_hash_entry(AggState *aggstate, TupleTableSlot *inputslot)
 TupleTableSlot *
 ExecAgg(AggState *node)
 {
+	TupleTableSlot *result;
+
 	/*
 	 * Check to see if we're still projecting out tuples from a previous agg
 	 * tuple (because there is a function-returning-set in the projection
@@ -1086,7 +1176,6 @@ ExecAgg(AggState *node)
 	 */
 	if (node->ss.ps.ps_TupFromTlist)
 	{
-		TupleTableSlot *result;
 		ExprDoneCond isDone;
 
 		result = ExecProject(node->ss.ps.ps_ProjInfo, &isDone);
@@ -1097,22 +1186,48 @@ ExecAgg(AggState *node)
 	}
 
 	/*
-	 * Exit if nothing left to do.  (We must do the ps_TupFromTlist check
-	 * first, because in some cases agg_done gets set before we emit the final
-	 * aggregate tuple, and we have to finish running SRFs for it.)
+	 * We must do the ps_TupFromTlist check first, because in some cases
+	 * agg_done gets set before we emit the final aggregate tuple, and we
+	 * have to finish running SRFs for it.
 	 */
-	if (node->agg_done)
-		return NULL;
+	if (!node->agg_done)
+	{
+		/* Dispatch based on strategy */
+		switch (((Agg *) node->ss.ps.plan)->aggstrategy)
+		{
+			case AGG_HASHED:
+				if (!node->table_filled)
+					agg_fill_hash_table(node);
+				result = agg_retrieve_hash_table(node);
+				break;
+			case AGG_CHAINED:
+				result = agg_retrieve_chained(node);
+				break;
+			default:
+				result = agg_retrieve_direct(node);
+				break;
+		}
 
-	/* Dispatch based on strategy */
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+		if (!TupIsNull(result))
+			return result;
+	}
+
+	/*
+	 * We've completed all locally computed projections; now drain the side
+	 * channel of projections from chained nodes, if any.
+	 */
+	if (!node->chain_done)
 	{
-		if (!node->table_filled)
-			agg_fill_hash_table(node);
-		return agg_retrieve_hash_table(node);
+		Assert(node->chain_tuplestore);
+		result = node->ss.ps.ps_ResultTupleSlot;
+		ExecClearTuple(result);
+		if (tuplestore_gettupleslot(node->chain_tuplestore,
+									true, false, result))
+			return result;
+		node->chain_done = true;
 	}
-	else
-		return agg_retrieve_direct(node);
+
+	return NULL;
 }
 
 /*
@@ -1132,6 +1247,12 @@ agg_retrieve_direct(AggState *aggstate)
 	TupleTableSlot *outerslot;
 	TupleTableSlot *firstSlot;
 	int			aggno;
+	bool		hasGroupingSets = aggstate->numsets > 0;
+	int			numGroupingSets = Max(aggstate->numsets, 1);
+	int			currentSet = 0;
+	int			nextSetSize = 0;
+	int			numReset = 1;
+	int			i;
 
 	/*
 	 * get state info from node
@@ -1150,35 +1271,15 @@ agg_retrieve_direct(AggState *aggstate)
 	/*
 	 * We loop retrieving groups until we find one matching
 	 * aggstate->ss.ps.qual
+	 *
+	 * For grouping sets, we have the invariant that aggstate->projected_set is
+	 * either -1 (initial call) or the index (starting from 0) in gset_lengths
+	 * for the group we just completed (either by projecting a row or by
+	 * discarding it in the qual).
 	 */
 	while (!aggstate->agg_done)
 	{
 		/*
-		 * If we don't already have the first tuple of the new group, fetch it
-		 * from the outer plan.
-		 */
-		if (aggstate->grp_firstTuple == NULL)
-		{
-			outerslot = ExecProcNode(outerPlan);
-			if (!TupIsNull(outerslot))
-			{
-				/*
-				 * Make a copy of the first input tuple; we will use this for
-				 * comparisons (in group mode) and for projection.
-				 */
-				aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
-			}
-			else
-			{
-				/* outer plan produced no tuples at all */
-				aggstate->agg_done = true;
-				/* If we are grouping, we should produce no tuples too */
-				if (node->aggstrategy != AGG_PLAIN)
-					return NULL;
-			}
-		}
-
-		/*
 		 * Clear the per-output-tuple context for each group, as well as
 		 * aggcontext (which contains any pass-by-ref transvalues of the old
 		 * group).  We also clear any child contexts of the aggcontext; some
@@ -1191,90 +1292,223 @@ agg_retrieve_direct(AggState *aggstate)
 		 */
 		ReScanExprContext(econtext);
 
-		MemoryContextResetAndDeleteChildren(aggstate->aggcontext);
+		/*
+		 * Determine how many grouping sets need to be reset at this boundary.
+		 */
+		if (aggstate->projected_set >= 0 && aggstate->projected_set < numGroupingSets)
+			numReset = aggstate->projected_set + 1;
+		else
+			numReset = numGroupingSets;
+
+		for (i = 0; i < numReset; i++)
+		{
+			ReScanExprContext(aggstate->aggcontexts[i]);
+			MemoryContextDeleteChildren(aggstate->aggcontexts[i]->ecxt_per_tuple_memory);
+		}
+
+		/* Check if input is complete and there are no more groups to project. */
+		if (aggstate->input_done == true
+			&& aggstate->projected_set >= (numGroupingSets - 1))
+		{
+			aggstate->agg_done = true;
+			break;
+		}
 
 		/*
-		 * Initialize working state for a new input tuple group
+		 * Get the number of columns in the next grouping set after the last
+		 * projected one (if any). This is the number of columns to compare to
+		 * see if we reached the boundary of that set too.
+		 */
+		if (aggstate->projected_set >= 0 && aggstate->projected_set < (numGroupingSets - 1))
+			nextSetSize = aggstate->gset_lengths[aggstate->projected_set + 1];
+		else
+			nextSetSize = 0;
+
+		/*-
+		 * If a subgroup for the current grouping set is present, project it.
+		 *
+		 * We have a new group if:
+		 *  - we're out of input but haven't projected all grouping sets
+		 *    (checked above)
+		 * OR
+		 *    - we already projected a row that wasn't from the last grouping
+		 *      set
+		 *    AND
+		 *    - the next grouping set has at least one grouping column (since
+		 *      empty grouping sets project only once input is exhausted)
+		 *    AND
+		 *    - the previous and pending rows differ on the grouping columns
+		 *      of the next grouping set
 		 */
-		initialize_aggregates(aggstate, peragg, pergroup);
+		if (aggstate->input_done
+			|| (node->aggstrategy == AGG_SORTED
+				&& aggstate->projected_set != -1
+				&& aggstate->projected_set < (numGroupingSets - 1)
+				&& nextSetSize > 0
+				&& !execTuplesMatch(econtext->ecxt_outertuple,
+									tmpcontext->ecxt_outertuple,
+									nextSetSize,
+									node->grpColIdx,
+									aggstate->eqfunctions,
+									tmpcontext->ecxt_per_tuple_memory)))
+		{
+			aggstate->projected_set += 1;
 
-		if (aggstate->grp_firstTuple != NULL)
+			Assert(aggstate->projected_set < numGroupingSets);
+			Assert(nextSetSize > 0 || aggstate->input_done);
+		}
+		else
 		{
 			/*
-			 * Store the copied first input tuple in the tuple table slot
-			 * reserved for it.  The tuple will be deleted when it is cleared
-			 * from the slot.
+			 * We no longer care what group we just projected; the next
+			 * projection will always be the first (or only) grouping set
+			 * (unless the input proves to be empty).
 			 */
-			ExecStoreTuple(aggstate->grp_firstTuple,
-						   firstSlot,
-						   InvalidBuffer,
-						   true);
-			aggstate->grp_firstTuple = NULL;	/* don't keep two pointers */
-
-			/* set up for first advance_aggregates call */
-			tmpcontext->ecxt_outertuple = firstSlot;
+			aggstate->projected_set = 0;
 
 			/*
-			 * Process each outer-plan tuple, and then fetch the next one,
-			 * until we exhaust the outer plan or cross a group boundary.
+			 * If we don't already have the first tuple of the new group, fetch
+			 * it from the outer plan.
 			 */
-			for (;;)
+			if (aggstate->grp_firstTuple == NULL)
 			{
-				advance_aggregates(aggstate, pergroup);
-
-				/* Reset per-input-tuple context after each tuple */
-				ResetExprContext(tmpcontext);
-
 				outerslot = ExecProcNode(outerPlan);
-				if (TupIsNull(outerslot))
+				if (!TupIsNull(outerslot))
+				{
+					/*
+					 * Make a copy of the first input tuple; we will use this for
+					 * comparisons (in group mode) and for projection.
+					 */
+					aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
+				}
+				else
 				{
-					/* no more outer-plan tuples available */
-					aggstate->agg_done = true;
-					break;
+					/* outer plan produced no tuples at all */
+					if (hasGroupingSets)
+					{
+						/*
+						 * If there was no input at all, we need to project
+						 * rows only if there are grouping sets of size 0.
+						 * Note that this implies that there can't be any
+						 * references to ungrouped Vars, which would otherwise
+						 * cause issues with the empty output slot.
+						 */
+						aggstate->input_done = true;
+
+						while (aggstate->gset_lengths[aggstate->projected_set] > 0)
+						{
+							aggstate->projected_set += 1;
+							if (aggstate->projected_set >= numGroupingSets)
+							{
+								aggstate->agg_done = true;
+								return NULL;
+							}
+						}
+					}
+					else
+					{
+						aggstate->agg_done = true;
+						/* If we are grouping, we should produce no tuples too */
+						if (node->aggstrategy != AGG_PLAIN)
+							return NULL;
+					}
 				}
-				/* set up for next advance_aggregates call */
-				tmpcontext->ecxt_outertuple = outerslot;
+			}
+
+			/*
+			 * Initialize working state for a new input tuple group.
+			 */
+			initialize_aggregates(aggstate, peragg, pergroup, numReset);
 
+			if (aggstate->grp_firstTuple != NULL)
+			{
 				/*
-				 * If we are grouping, check whether we've crossed a group
-				 * boundary.
+				 * Store the copied first input tuple in the tuple table slot
+				 * reserved for it.  The tuple will be deleted when it is cleared
+				 * from the slot.
 				 */
-				if (node->aggstrategy == AGG_SORTED)
+				ExecStoreTuple(aggstate->grp_firstTuple,
+							   firstSlot,
+							   InvalidBuffer,
+							   true);
+				aggstate->grp_firstTuple = NULL;	/* don't keep two pointers */
+
+				/* set up for first advance_aggregates call */
+				tmpcontext->ecxt_outertuple = firstSlot;
+
+				/*
+				 * Process each outer-plan tuple, and then fetch the next one,
+				 * until we exhaust the outer plan or cross a group boundary.
+				 */
+				for (;;)
 				{
-					if (!execTuplesMatch(firstSlot,
-										 outerslot,
-										 node->numCols, node->grpColIdx,
-										 aggstate->eqfunctions,
-										 tmpcontext->ecxt_per_tuple_memory))
+					advance_aggregates(aggstate, pergroup);
+
+					/* Reset per-input-tuple context after each tuple */
+					ResetExprContext(tmpcontext);
+
+					outerslot = ExecProcNode(outerPlan);
+					if (TupIsNull(outerslot))
 					{
-						/*
-						 * Save the first input tuple of the next group.
-						 */
-						aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
-						break;
+						/* no more outer-plan tuples available */
+						if (hasGroupingSets)
+						{
+							aggstate->input_done = true;
+							break;
+						}
+						else
+						{
+							aggstate->agg_done = true;
+							break;
+						}
+					}
+					/* set up for next advance_aggregates call */
+					tmpcontext->ecxt_outertuple = outerslot;
+
+					/*
+					 * If we are grouping, check whether we've crossed a group
+					 * boundary.
+					 */
+					if (node->aggstrategy == AGG_SORTED)
+					{
+						if (!execTuplesMatch(firstSlot,
+											 outerslot,
+											 node->numCols,
+											 node->grpColIdx,
+											 aggstate->eqfunctions,
+											 tmpcontext->ecxt_per_tuple_memory))
+						{
+							aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
+							break;
+						}
 					}
 				}
 			}
+
+			/*
+			 * Use the representative input tuple for any references to
+			 * non-aggregated input columns in aggregate direct args, the node
+			 * qual, and the tlist.  (If we are not grouping, and there are no
+			 * input rows at all, we will come here with an empty firstSlot ...
+			 * but if not grouping, there can't be any references to
+			 * non-aggregated input columns, so no problem.)
+			 */
+			econtext->ecxt_outertuple = firstSlot;
 		}
 
-		/*
-		 * Use the representative input tuple for any references to
-		 * non-aggregated input columns in aggregate direct args, the node
-		 * qual, and the tlist.  (If we are not grouping, and there are no
-		 * input rows at all, we will come here with an empty firstSlot ...
-		 * but if not grouping, there can't be any references to
-		 * non-aggregated input columns, so no problem.)
-		 */
-		econtext->ecxt_outertuple = firstSlot;
+		Assert(aggstate->projected_set >= 0);
+
+		aggstate->current_set = currentSet = aggstate->projected_set;
+
+		if (hasGroupingSets)
+			econtext->grouped_cols = aggstate->grouped_cols[currentSet];
 
-		/*
-		 * Done scanning input tuple group. Finalize each aggregate
-		 * calculation, and stash results in the per-output-tuple context.
-		 */
 		for (aggno = 0; aggno < aggstate->numaggs; aggno++)
 		{
 			AggStatePerAgg peraggstate = &peragg[aggno];
-			AggStatePerGroup pergroupstate = &pergroup[aggno];
+			AggStatePerGroup pergroupstate;
+
+			pergroupstate = &pergroup[aggno + (currentSet * (aggstate->numaggs))];
 
 			if (peraggstate->numSortCols > 0)
 			{
@@ -1322,6 +1556,175 @@ agg_retrieve_direct(AggState *aggstate)
 	return NULL;
 }
 
+
+/*
+ * ExecAgg for chained case (pullthrough mode)
+ */
+static TupleTableSlot *
+agg_retrieve_chained(AggState *aggstate)
+{
+	Agg		   *node = (Agg *) aggstate->ss.ps.plan;
+	ExprContext *econtext = aggstate->ss.ps.ps_ExprContext;
+	ExprContext *tmpcontext = aggstate->tmpcontext;
+	Datum	   *aggvalues = econtext->ecxt_aggvalues;
+	bool	   *aggnulls = econtext->ecxt_aggnulls;
+	AggStatePerAgg peragg = aggstate->peragg;
+	AggStatePerGroup pergroup = aggstate->pergroup;
+	TupleTableSlot *outerslot;
+	TupleTableSlot *firstSlot = aggstate->ss.ss_ScanTupleSlot;
+	int			   aggno;
+	int            numGroupingSets = Max(aggstate->numsets, 1);
+	int            currentSet = 0;
+
+	/*
+	 * The invariants here are:
+	 *
+	 *  - when called, we've already projected every result that might have
+	 * been generated by previous rows, and if this is not the first row, then
+	 * grp_firstTuple has the representative input row.
+	 *
+	 *  - we must pull the outer plan exactly once and return that tuple. If
+	 * the outer plan ends, we project whatever needs projecting.
+	 */
+
+	outerslot = ExecProcNode(outerPlanState(aggstate));
+
+	/*
+	 * If this is the first row and it's empty, nothing to do.
+	 */
+
+	if (TupIsNull(firstSlot) && TupIsNull(outerslot))
+	{
+		aggstate->agg_done = true;
+		return outerslot;
+	}
+
+	/*
+	 * See if we need to project anything. (We don't need to worry about
+	 * grouping sets of size 0; the planner doesn't give us those.)
+	 */
+
+	econtext->ecxt_outertuple = firstSlot;
+
+	while (!TupIsNull(firstSlot)
+		   && (TupIsNull(outerslot)
+			   || !execTuplesMatch(firstSlot,
+								   outerslot,
+								   aggstate->gset_lengths[currentSet],
+								   node->grpColIdx,
+								   aggstate->eqfunctions,
+								   tmpcontext->ecxt_per_tuple_memory)))
+	{
+		aggstate->current_set = aggstate->projected_set = currentSet;
+
+		econtext->grouped_cols = aggstate->grouped_cols[currentSet];
+
+		for (aggno = 0; aggno < aggstate->numaggs; aggno++)
+		{
+			AggStatePerAgg peraggstate = &peragg[aggno];
+			AggStatePerGroup pergroupstate;
+
+			pergroupstate = &pergroup[aggno + (currentSet * (aggstate->numaggs))];
+
+			if (peraggstate->numSortCols > 0)
+			{
+				if (peraggstate->numInputs == 1)
+					process_ordered_aggregate_single(aggstate,
+													 peraggstate,
+													 pergroupstate);
+				else
+					process_ordered_aggregate_multi(aggstate,
+													peraggstate,
+													pergroupstate);
+			}
+
+			finalize_aggregate(aggstate, peraggstate, pergroupstate,
+							   &aggvalues[aggno], &aggnulls[aggno]);
+		}
+
+		/*
+		 * Check the qual (HAVING clause); if the group does not match, ignore
+		 * it.
+		 */
+		if (ExecQual(aggstate->ss.ps.qual, econtext, false))
+		{
+			/*
+			 * Form a projection tuple using the aggregate results
+			 * and the representative input tuple.
+			 */
+			TupleTableSlot *result;
+			ExprDoneCond isDone;
+
+			do
+			{
+				result = ExecProject(aggstate->ss.ps.ps_ProjInfo, &isDone);
+
+				if (isDone != ExprEndResult)
+				{
+					tuplestore_puttupleslot(aggstate->chain_tuplestore,
+											result);
+				}
+			}
+			while (isDone == ExprMultipleResult);
+		}
+		else
+			InstrCountFiltered1(aggstate, 1);
+
+		ReScanExprContext(tmpcontext);
+		ReScanExprContext(econtext);
+		ReScanExprContext(aggstate->aggcontexts[currentSet]);
+		MemoryContextDeleteChildren(aggstate->aggcontexts[currentSet]->ecxt_per_tuple_memory);
+		if (++currentSet >= numGroupingSets)
+			break;
+	}
+
+	if (TupIsNull(outerslot))
+	{
+		aggstate->agg_done = true;
+
+		/*
+		 * We're out of input, so the calling node has all the data it needs
+		 * and (if it's a Sort) is about to sort it. We preemptively request a
+		 * rescan of our input plan here, so that Sort nodes containing data
+		 * that is no longer needed will free their memory.  The intention here
+		 * is to bound the peak memory requirement for the whole chain to
+		 * 2*work_mem if REWIND was not requested, or 3*work_mem if REWIND was
+		 * requested and we had to supply a Sort node for the original data
+		 * source plan.
+		 */
+
+		ExecReScan(outerPlanState(aggstate));
+
+		return NULL;
+	}
+
+	/*
+	 * If this is the first tuple, store it and initialize everything.
+	 * Otherwise re-init any aggregates we projected above.
+	 */
+
+	if (TupIsNull(firstSlot))
+	{
+		ExecCopySlot(firstSlot, outerslot);
+		initialize_aggregates(aggstate, peragg, pergroup, numGroupingSets);
+	}
+	else if (currentSet > 0)
+	{
+		ExecCopySlot(firstSlot, outerslot);
+		initialize_aggregates(aggstate, peragg, pergroup, currentSet);
+	}
+
+	tmpcontext->ecxt_outertuple = outerslot;
+
+	/* Actually accumulate the current tuple. */
+	advance_aggregates(aggstate, pergroup);
+
+	/* Reset per-input-tuple context after each tuple */
+	ResetExprContext(tmpcontext);
+
+	return outerslot;
+}
+
 /*
  * ExecAgg for hashed case: phase 1, read input and build hash table
  */
@@ -1489,12 +1892,17 @@ AggState *
 ExecInitAgg(Agg *node, EState *estate, int eflags)
 {
 	AggState   *aggstate;
+	AggState   *save_chain_head = NULL;
 	AggStatePerAgg peragg;
 	Plan	   *outerPlan;
 	ExprContext *econtext;
 	int			numaggs,
 				aggno;
 	ListCell   *l;
+	int			numGroupingSets = 1;
+	int			currentsortno = 0;
+	int			i = 0;
+	int			j = 0;
 
 	/* check for unsupported flags */
 	Assert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK)));
@@ -1508,38 +1916,76 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 
 	aggstate->aggs = NIL;
 	aggstate->numaggs = 0;
+	aggstate->numsets = 0;
 	aggstate->eqfunctions = NULL;
 	aggstate->hashfunctions = NULL;
+	aggstate->projected_set = -1;
+	aggstate->current_set = 0;
 	aggstate->peragg = NULL;
 	aggstate->curperagg = NULL;
 	aggstate->agg_done = false;
+	aggstate->input_done = false;
+	aggstate->chain_done = true;
 	aggstate->pergroup = NULL;
 	aggstate->grp_firstTuple = NULL;
 	aggstate->hashtable = NULL;
+	aggstate->chain_depth = 0;
+	aggstate->chain_rescan = 0;
+	aggstate->chain_eflags = eflags & EXEC_FLAG_REWIND;
+	aggstate->chain_top = false;
+	aggstate->chain_head = NULL;
+	aggstate->chain_tuplestore = NULL;
+
+	if (node->groupingSets)
+	{
+		Assert(node->aggstrategy != AGG_HASHED);
+
+		numGroupingSets = list_length(node->groupingSets);
+		aggstate->numsets = numGroupingSets;
+		aggstate->gset_lengths = palloc(numGroupingSets * sizeof(int));
+		aggstate->grouped_cols = palloc(numGroupingSets * sizeof(Bitmapset *));
+
+		i = 0;
+		foreach(l, node->groupingSets)
+		{
+			int current_length = list_length(lfirst(l));
+			Bitmapset *cols = NULL;
+
+			/* planner forces this to be correct */
+			for (j = 0; j < current_length; ++j)
+				cols = bms_add_member(cols, node->grpColIdx[j]);
+
+			aggstate->grouped_cols[i] = cols;
+			aggstate->gset_lengths[i] = current_length;
+			++i;
+		}
+	}
+
+	aggstate->aggcontexts = (ExprContext **) palloc0(sizeof(ExprContext *) * numGroupingSets);
 
 	/*
-	 * Create expression contexts.  We need two, one for per-input-tuple
-	 * processing and one for per-output-tuple processing.  We cheat a little
-	 * by using ExecAssignExprContext() to build both.
+	 * Create expression contexts.  We need three or more, one for
+	 * per-input-tuple processing, one for per-output-tuple processing, and one
+	 * for each grouping set.  The per-tuple memory context of the
+	 * per-grouping-set ExprContexts (aggcontexts) replaces the standalone
+	 * memory context formerly used to hold transition values.  We cheat a
+	 * little by using ExecAssignExprContext() to build all of them.
+	 *
+	 * NOTE: the details of what is stored in aggcontexts and what is stored in
+	 * the regular per-query memory context are driven by a simple decision: we
+	 * want to reset the aggcontext at group boundaries (if not hashing) and in
+	 * ExecReScanAgg to recover no-longer-wanted space.
 	 */
 	ExecAssignExprContext(estate, &aggstate->ss.ps);
 	aggstate->tmpcontext = aggstate->ss.ps.ps_ExprContext;
-	ExecAssignExprContext(estate, &aggstate->ss.ps);
 
-	/*
-	 * We also need a long-lived memory context for holding hashtable data
-	 * structures and transition values.  NOTE: the details of what is stored
-	 * in aggcontext and what is stored in the regular per-query memory
-	 * context are driven by a simple decision: we want to reset the
-	 * aggcontext at group boundaries (if not hashing) and in ExecReScanAgg to
-	 * recover no-longer-wanted space.
-	 */
-	aggstate->aggcontext =
-		AllocSetContextCreate(CurrentMemoryContext,
-							  "AggContext",
-							  ALLOCSET_DEFAULT_MINSIZE,
-							  ALLOCSET_DEFAULT_INITSIZE,
-							  ALLOCSET_DEFAULT_MAXSIZE);
+	for (i = 0; i < numGroupingSets; ++i)
+	{
+		ExecAssignExprContext(estate, &aggstate->ss.ps);
+		aggstate->aggcontexts[i] = aggstate->ss.ps.ps_ExprContext;
+	}
+
+	ExecAssignExprContext(estate, &aggstate->ss.ps);
 
 	/*
 	 * tuple table initialization
@@ -1557,24 +2003,78 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	 * that is true, we don't need to worry about evaluating the aggs in any
 	 * particular order.
 	 */
-	aggstate->ss.ps.targetlist = (List *)
-		ExecInitExpr((Expr *) node->plan.targetlist,
-					 (PlanState *) aggstate);
-	aggstate->ss.ps.qual = (List *)
-		ExecInitExpr((Expr *) node->plan.qual,
-					 (PlanState *) aggstate);
+	if (node->aggstrategy == AGG_CHAINED)
+	{
+		AggState   *chain_head = estate->agg_chain_head;
+		Agg		   *chain_head_plan;
+
+		Assert(chain_head);
+
+		aggstate->chain_head = chain_head;
+		chain_head->chain_depth++;
+
+		chain_head_plan = (Agg *) chain_head->ss.ps.plan;
+
+		/*
+		 * If we reached the originally declared depth, we must be the "top"
+		 * (furthest from plan root) node in the chain.
+		 */
+		if (chain_head_plan->chain_depth == chain_head->chain_depth)
+			aggstate->chain_top = true;
+
+		/*
+		 * Snarf the real targetlist and qual from the chain head node
+		 */
+		aggstate->ss.ps.targetlist = (List *)
+			ExecInitExpr((Expr *) chain_head_plan->plan.targetlist,
+						 (PlanState *) aggstate);
+		aggstate->ss.ps.qual = (List *)
+			ExecInitExpr((Expr *) chain_head_plan->plan.qual,
+						 (PlanState *) aggstate);
+	}
+	else
+	{
+		aggstate->ss.ps.targetlist = (List *)
+			ExecInitExpr((Expr *) node->plan.targetlist,
+						 (PlanState *) aggstate);
+		aggstate->ss.ps.qual = (List *)
+			ExecInitExpr((Expr *) node->plan.qual,
+						 (PlanState *) aggstate);
+	}
+
+	if (node->chain_depth > 0)
+	{
+		save_chain_head = estate->agg_chain_head;
+		estate->agg_chain_head = aggstate;
+		aggstate->chain_tuplestore = tuplestore_begin_heap(false, false, work_mem);
+		aggstate->chain_done = false;
+	}
 
 	/*
-	 * initialize child nodes
+	 * Initialize child nodes.
 	 *
 	 * If we are doing a hashed aggregation then the child plan does not need
 	 * to handle REWIND efficiently; see ExecReScanAgg.
+	 *
+	 * If we have more than one associated ChainAggregate node, then we turn
+	 * off REWIND and restore it in the chain top, so that the intermediate
+	 * Sort nodes will discard their data on rescan.  This lets us put an upper
+	 * bound on the memory usage, even when we have a long chain of sorts (at
+	 * the cost of having to re-sort on rewind, which is why we don't do it
+	 * for only one node where no memory would be saved).
 	 */
-	if (node->aggstrategy == AGG_HASHED)
+	if (aggstate->chain_top)
+		eflags |= aggstate->chain_head->chain_eflags;
+	else if (node->aggstrategy == AGG_HASHED || node->chain_depth > 1)
 		eflags &= ~EXEC_FLAG_REWIND;
 	outerPlan = outerPlan(node);
 	outerPlanState(aggstate) = ExecInitNode(outerPlan, estate, eflags);
 
+	if (node->chain_depth > 0)
+	{
+		estate->agg_chain_head = save_chain_head;
+	}
+
 	/*
 	 * initialize source tuple type.
 	 */
@@ -1583,8 +2083,35 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	/*
 	 * Initialize result tuple type and projection info.
 	 */
-	ExecAssignResultTypeFromTL(&aggstate->ss.ps);
-	ExecAssignProjectionInfo(&aggstate->ss.ps, NULL);
+	if (node->aggstrategy == AGG_CHAINED)
+	{
+		PlanState  *head_ps = &aggstate->chain_head->ss.ps;
+		bool		hasoid;
+
+		/*
+		 * We must calculate this the same way that the chain head does,
+		 * regardless of intermediate nodes, for consistency
+		 */
+		if (!ExecContextForcesOids(head_ps, &hasoid))
+			hasoid = false;
+
+		ExecAssignResultType(&aggstate->ss.ps, ExecGetScanType(&aggstate->ss));
+		ExecSetSlotDescriptor(aggstate->hashslot,
+							  ExecTypeFromTL(head_ps->plan->targetlist, hasoid));
+		aggstate->ss.ps.ps_ProjInfo =
+			ExecBuildProjectionInfo(aggstate->ss.ps.targetlist,
+									aggstate->ss.ps.ps_ExprContext,
+									aggstate->hashslot,
+									NULL);
+
+		aggstate->chain_tuplestore = aggstate->chain_head->chain_tuplestore;
+		Assert(aggstate->chain_tuplestore);
+	}
+	else
+	{
+		ExecAssignResultTypeFromTL(&aggstate->ss.ps);
+		ExecAssignProjectionInfo(&aggstate->ss.ps, NULL);
+	}
 
 	aggstate->ss.ps.ps_TupFromTlist = false;
 
@@ -1645,7 +2172,10 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	{
 		AggStatePerGroup pergroup;
 
-		pergroup = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData) * numaggs);
+		pergroup = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData)
+											  * numaggs
+											  * numGroupingSets);
+
 		aggstate->pergroup = pergroup;
 	}
 
@@ -1708,7 +2238,10 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 		/* Begin filling in the peraggstate data */
 		peraggstate->aggrefstate = aggrefstate;
 		peraggstate->aggref = aggref;
-		peraggstate->sortstate = NULL;
+		peraggstate->sortstates = (Tuplesortstate **) palloc0(sizeof(Tuplesortstate *) * numGroupingSets);
+
+		for (currentsortno = 0; currentsortno < numGroupingSets; currentsortno++)
+			peraggstate->sortstates[currentsortno] = NULL;
 
 		/* Fetch the pg_aggregate row */
 		aggTuple = SearchSysCache1(AGGFNOID,
@@ -2016,31 +2549,38 @@ ExecEndAgg(AggState *node)
 {
 	PlanState  *outerPlan;
 	int			aggno;
+	int			numGroupingSets = Max(node->numsets, 1);
+	int			setno;
 
 	/* Make sure we have closed any open tuplesorts */
 	for (aggno = 0; aggno < node->numaggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &node->peragg[aggno];
 
-		if (peraggstate->sortstate)
-			tuplesort_end(peraggstate->sortstate);
+		for (setno = 0; setno < numGroupingSets; setno++)
+		{
+			if (peraggstate->sortstates[setno])
+				tuplesort_end(peraggstate->sortstates[setno]);
+		}
 	}
 
 	/* And ensure any agg shutdown callbacks have been called */
-	ReScanExprContext(node->ss.ps.ps_ExprContext);
+	for (setno = 0; setno < numGroupingSets; setno++)
+		ReScanExprContext(node->aggcontexts[setno]);
+
+	if (node->chain_tuplestore && node->chain_depth > 0)
+		tuplestore_end(node->chain_tuplestore);
 
 	/*
-	 * Free both the expr contexts.
+	 * We don't actually free any ExprContexts here (see comment in
+	 * ExecFreeExprContext), just unlinking the output one from the plan node
+	 * suffices.
 	 */
 	ExecFreeExprContext(&node->ss.ps);
-	node->ss.ps.ps_ExprContext = node->tmpcontext;
-	ExecFreeExprContext(&node->ss.ps);
 
 	/* clean up tuple table */
 	ExecClearTuple(node->ss.ss_ScanTupleSlot);
 
-	MemoryContextDelete(node->aggcontext);
-
 	outerPlan = outerPlanState(node);
 	ExecEndNode(outerPlan);
 }
@@ -2049,13 +2589,16 @@ void
 ExecReScanAgg(AggState *node)
 {
 	ExprContext *econtext = node->ss.ps.ps_ExprContext;
+	Agg		   *aggnode = (Agg *) node->ss.ps.plan;
 	int			aggno;
+	int         numGroupingSets = Max(node->numsets, 1);
+	int         setno;
 
 	node->agg_done = false;
 
 	node->ss.ps.ps_TupFromTlist = false;
 
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+	if (aggnode->aggstrategy == AGG_HASHED)
 	{
 		/*
 		 * In the hashed case, if we haven't yet built the hash table then we
@@ -2081,14 +2624,35 @@ ExecReScanAgg(AggState *node)
 	/* Make sure we have closed any open tuplesorts */
 	for (aggno = 0; aggno < node->numaggs; aggno++)
 	{
-		AggStatePerAgg peraggstate = &node->peragg[aggno];
+		for (setno = 0; setno < numGroupingSets; setno++)
+		{
+			AggStatePerAgg peraggstate = &node->peragg[aggno];
 
-		if (peraggstate->sortstate)
-			tuplesort_end(peraggstate->sortstate);
-		peraggstate->sortstate = NULL;
+			if (peraggstate->sortstates[setno])
+			{
+				tuplesort_end(peraggstate->sortstates[setno]);
+				peraggstate->sortstates[setno] = NULL;
+			}
+		}
 	}
 
-	/* We don't need to ReScanExprContext here; ExecReScan already did it */
+	/*
+	 * We don't need to ReScanExprContext the output tuple context here;
+	 * ExecReScan already did it. But we do need to reset our per-grouping-set
+	 * contexts, which may have transvalues stored in them.
+	 *
+	 * Note that with AGG_HASHED, the hash table is allocated in a sub-context
+	 * of the aggcontext. We're going to rebuild the hash table from scratch,
+	 * so we need to use MemoryContextDeleteChildren() to avoid leaking the old
+	 * hash table's memory context header. (ReScanExprContext does the actual
+	 * reset, but it doesn't delete child contexts.)
+	 */
+
+	for (setno = 0; setno < numGroupingSets; setno++)
+	{
+		ReScanExprContext(node->aggcontexts[setno]);
+		MemoryContextDeleteChildren(node->aggcontexts[setno]->ecxt_per_tuple_memory);
+	}
 
 	/* Release first tuple of group, if we have made a copy */
 	if (node->grp_firstTuple != NULL)
@@ -2096,21 +2660,13 @@ ExecReScanAgg(AggState *node)
 		heap_freetuple(node->grp_firstTuple);
 		node->grp_firstTuple = NULL;
 	}
+	ExecClearTuple(node->ss.ss_ScanTupleSlot);
 
 	/* Forget current agg values */
 	MemSet(econtext->ecxt_aggvalues, 0, sizeof(Datum) * node->numaggs);
 	MemSet(econtext->ecxt_aggnulls, 0, sizeof(bool) * node->numaggs);
 
-	/*
-	 * Release all temp storage. Note that with AGG_HASHED, the hash table is
-	 * allocated in a sub-context of the aggcontext. We're going to rebuild
-	 * the hash table from scratch, so we need to use
-	 * MemoryContextResetAndDeleteChildren() to avoid leaking the old hash
-	 * table's memory context header.
-	 */
-	MemoryContextResetAndDeleteChildren(node->aggcontext);
-
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+	if (aggnode->aggstrategy == AGG_HASHED)
 	{
 		/* Rebuild an empty hash table */
 		build_hash_table(node);
@@ -2122,15 +2678,54 @@ ExecReScanAgg(AggState *node)
 		 * Reset the per-group state (in particular, mark transvalues null)
 		 */
 		MemSet(node->pergroup, 0,
-			   sizeof(AggStatePerGroupData) * node->numaggs);
+			   sizeof(AggStatePerGroupData) * node->numaggs * numGroupingSets);
+
+		node->input_done = false;
 	}
 
 	/*
-	 * if chgParam of subnode is not null then plan will be re-scanned by
-	 * first ExecProcNode.
+	 * If we're in a chain, let the chain head know whether we
+	 * rescanned. (The count is meaningless if the rescan happens as a
+	 * result of chgParam, but the chain head only consults it when
+	 * rescanning explicitly with chgParam empty.)
+	 */
+
+	if (aggnode->aggstrategy == AGG_CHAINED)
+		node->chain_head->chain_rescan++;
+
+	/*
+	 * If we're a chain head, we reset the tuplestore if parameters changed,
+	 * and let subplans repopulate it.
+	 *
+	 * If we're a chain head and the subplan parameters did NOT change, then
+	 * whether we need to reset the tuplestore depends on whether anything
+	 * (specifically the Sort nodes) protects the child ChainAggs from rescan.
+	 * Since this is hard to know in advance, we have the ChainAggs signal us
+	 * as to whether the reset is needed.  Since we're preempting the rescan
+	 * in some cases, we only check whether any ChainAgg node was reached in
+	 * the rescan; the others may have already been reset.
 	 */
-	if (node->ss.ps.lefttree->chgParam == NULL)
+	if (aggnode->chain_depth > 0)
+	{
+		if (node->ss.ps.lefttree->chgParam)
+			tuplestore_clear(node->chain_tuplestore);
+		else
+		{
+			node->chain_rescan = 0;
+
+			ExecReScan(node->ss.ps.lefttree);
+
+			if (node->chain_rescan > 0)
+				tuplestore_clear(node->chain_tuplestore);
+			else
+				tuplestore_rescan(node->chain_tuplestore);
+		}
+		node->chain_done = false;
+	}
+	else if (node->ss.ps.lefttree->chgParam == NULL)
+	{
 		ExecReScan(node->ss.ps.lefttree);
+	}
 }
 
 
@@ -2150,8 +2745,11 @@ ExecReScanAgg(AggState *node)
  * values could conceivably appear in future.)
  *
  * If aggcontext isn't NULL, the function also stores at *aggcontext the
- * identity of the memory context that aggregate transition values are
- * being stored in.
+ * identity of the memory context that aggregate transition values are being
+ * stored in.  Note that the same aggregate call site (flinfo) may be called
+ * interleaved on different transition values in different contexts, so it's
+ * not kosher to cache aggcontext under fn_extra.  It is, however, kosher to
+ * cache it in the transvalue itself (for internal-type transvalues).
  */
 int
 AggCheckCallContext(FunctionCallInfo fcinfo, MemoryContext *aggcontext)
@@ -2159,7 +2757,11 @@ AggCheckCallContext(FunctionCallInfo fcinfo, MemoryContext *aggcontext)
 	if (fcinfo->context && IsA(fcinfo->context, AggState))
 	{
 		if (aggcontext)
-			*aggcontext = ((AggState *) fcinfo->context)->aggcontext;
+		{
+			AggState    *aggstate = ((AggState *) fcinfo->context);
+			ExprContext *cxt  = aggstate->aggcontexts[aggstate->current_set];
+			*aggcontext = cxt->ecxt_per_tuple_memory;
+		}
 		return AGG_CONTEXT_AGGREGATE;
 	}
 	if (fcinfo->context && IsA(fcinfo->context, WindowAggState))
@@ -2243,8 +2845,9 @@ AggRegisterCallback(FunctionCallInfo fcinfo,
 	if (fcinfo->context && IsA(fcinfo->context, AggState))
 	{
 		AggState   *aggstate = (AggState *) fcinfo->context;
+		ExprContext *cxt = aggstate->aggcontexts[aggstate->current_set];
 
-		RegisterExprContextCallback(aggstate->ss.ps.ps_ExprContext, func, arg);
+		RegisterExprContextCallback(cxt, func, arg);
 
 		return;
 	}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index f1a24f5..a9c679d 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -802,6 +802,7 @@ _copyAgg(const Agg *from)
 	CopyPlanFields((const Plan *) from, (Plan *) newnode);
 
 	COPY_SCALAR_FIELD(aggstrategy);
+	COPY_SCALAR_FIELD(chain_depth);
 	COPY_SCALAR_FIELD(numCols);
 	if (from->numCols > 0)
 	{
@@ -809,6 +810,7 @@ _copyAgg(const Agg *from)
 		COPY_POINTER_FIELD(grpOperators, from->numCols * sizeof(Oid));
 	}
 	COPY_SCALAR_FIELD(numGroups);
+	COPY_NODE_FIELD(groupingSets);
 
 	return newnode;
 }
@@ -1095,6 +1097,27 @@ _copyVar(const Var *from)
 }
 
 /*
+ * _copyGroupedVar
+ */
+static GroupedVar *
+_copyGroupedVar(const GroupedVar *from)
+{
+	GroupedVar		   *newnode = makeNode(GroupedVar);
+
+	COPY_SCALAR_FIELD(varno);
+	COPY_SCALAR_FIELD(varattno);
+	COPY_SCALAR_FIELD(vartype);
+	COPY_SCALAR_FIELD(vartypmod);
+	COPY_SCALAR_FIELD(varcollid);
+	COPY_SCALAR_FIELD(varlevelsup);
+	COPY_SCALAR_FIELD(varnoold);
+	COPY_SCALAR_FIELD(varoattno);
+	COPY_LOCATION_FIELD(location);
+
+	return newnode;
+}
+
+/*
  * _copyConst
  */
 static Const *
@@ -1177,6 +1200,23 @@ _copyAggref(const Aggref *from)
 }
 
 /*
+ * _copyGroupingFunc
+ */
+static GroupingFunc *
+_copyGroupingFunc(const GroupingFunc *from)
+{
+	GroupingFunc	   *newnode = makeNode(GroupingFunc);
+
+	COPY_NODE_FIELD(args);
+	COPY_NODE_FIELD(refs);
+	COPY_NODE_FIELD(cols);
+	COPY_SCALAR_FIELD(agglevelsup);
+	COPY_LOCATION_FIELD(location);
+
+	return newnode;
+}
+
+/*
  * _copyWindowFunc
  */
 static WindowFunc *
@@ -2076,6 +2116,18 @@ _copySortGroupClause(const SortGroupClause *from)
 	return newnode;
 }
 
+static GroupingSet *
+_copyGroupingSet(const GroupingSet *from)
+{
+	GroupingSet		   *newnode = makeNode(GroupingSet);
+
+	COPY_SCALAR_FIELD(kind);
+	COPY_NODE_FIELD(content);
+	COPY_LOCATION_FIELD(location);
+
+	return newnode;
+}
+
 static WindowClause *
 _copyWindowClause(const WindowClause *from)
 {
@@ -2526,6 +2578,7 @@ _copyQuery(const Query *from)
 	COPY_NODE_FIELD(withCheckOptions);
 	COPY_NODE_FIELD(returningList);
 	COPY_NODE_FIELD(groupClause);
+	COPY_NODE_FIELD(groupingSets);
 	COPY_NODE_FIELD(havingQual);
 	COPY_NODE_FIELD(windowClause);
 	COPY_NODE_FIELD(distinctClause);
@@ -4142,6 +4195,9 @@ copyObject(const void *from)
 		case T_Var:
 			retval = _copyVar(from);
 			break;
+		case T_GroupedVar:
+			retval = _copyGroupedVar(from);
+			break;
 		case T_Const:
 			retval = _copyConst(from);
 			break;
@@ -4151,6 +4207,9 @@ copyObject(const void *from)
 		case T_Aggref:
 			retval = _copyAggref(from);
 			break;
+		case T_GroupingFunc:
+			retval = _copyGroupingFunc(from);
+			break;
 		case T_WindowFunc:
 			retval = _copyWindowFunc(from);
 			break;
@@ -4711,6 +4770,9 @@ copyObject(const void *from)
 		case T_SortGroupClause:
 			retval = _copySortGroupClause(from);
 			break;
+		case T_GroupingSet:
+			retval = _copyGroupingSet(from);
+			break;
 		case T_WindowClause:
 			retval = _copyWindowClause(from);
 			break;
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 6e8b308..1eb35d2 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -153,6 +153,22 @@ _equalVar(const Var *a, const Var *b)
 }
 
 static bool
+_equalGroupedVar(const GroupedVar *a, const GroupedVar *b)
+{
+	COMPARE_SCALAR_FIELD(varno);
+	COMPARE_SCALAR_FIELD(varattno);
+	COMPARE_SCALAR_FIELD(vartype);
+	COMPARE_SCALAR_FIELD(vartypmod);
+	COMPARE_SCALAR_FIELD(varcollid);
+	COMPARE_SCALAR_FIELD(varlevelsup);
+	COMPARE_SCALAR_FIELD(varnoold);
+	COMPARE_SCALAR_FIELD(varoattno);
+	COMPARE_LOCATION_FIELD(location);
+
+	return true;
+}
+
+static bool
 _equalConst(const Const *a, const Const *b)
 {
 	COMPARE_SCALAR_FIELD(consttype);
@@ -208,6 +224,21 @@ _equalAggref(const Aggref *a, const Aggref *b)
 }
 
 static bool
+_equalGroupingFunc(const GroupingFunc *a, const GroupingFunc *b)
+{
+	COMPARE_NODE_FIELD(args);
+
+	/*
+	 * We must not compare the refs or cols fields.
+	 */
+
+	COMPARE_SCALAR_FIELD(agglevelsup);
+	COMPARE_LOCATION_FIELD(location);
+
+	return true;
+}
+
+static bool
 _equalWindowFunc(const WindowFunc *a, const WindowFunc *b)
 {
 	COMPARE_SCALAR_FIELD(winfnoid);
@@ -865,6 +896,7 @@ _equalQuery(const Query *a, const Query *b)
 	COMPARE_NODE_FIELD(withCheckOptions);
 	COMPARE_NODE_FIELD(returningList);
 	COMPARE_NODE_FIELD(groupClause);
+	COMPARE_NODE_FIELD(groupingSets);
 	COMPARE_NODE_FIELD(havingQual);
 	COMPARE_NODE_FIELD(windowClause);
 	COMPARE_NODE_FIELD(distinctClause);
@@ -2388,6 +2420,16 @@ _equalSortGroupClause(const SortGroupClause *a, const SortGroupClause *b)
 }
 
 static bool
+_equalGroupingSet(const GroupingSet *a, const GroupingSet *b)
+{
+	COMPARE_SCALAR_FIELD(kind);
+	COMPARE_NODE_FIELD(content);
+	COMPARE_LOCATION_FIELD(location);
+
+	return true;
+}
+
+static bool
 _equalWindowClause(const WindowClause *a, const WindowClause *b)
 {
 	COMPARE_STRING_FIELD(name);
@@ -2582,6 +2624,9 @@ equal(const void *a, const void *b)
 		case T_Var:
 			retval = _equalVar(a, b);
 			break;
+		case T_GroupedVar:
+			retval = _equalGroupedVar(a, b);
+			break;
 		case T_Const:
 			retval = _equalConst(a, b);
 			break;
@@ -2591,6 +2636,9 @@ equal(const void *a, const void *b)
 		case T_Aggref:
 			retval = _equalAggref(a, b);
 			break;
+		case T_GroupingFunc:
+			retval = _equalGroupingFunc(a, b);
+			break;
 		case T_WindowFunc:
 			retval = _equalWindowFunc(a, b);
 			break;
@@ -3138,6 +3186,9 @@ equal(const void *a, const void *b)
 		case T_SortGroupClause:
 			retval = _equalSortGroupClause(a, b);
 			break;
+		case T_GroupingSet:
+			retval = _equalGroupingSet(a, b);
+			break;
 		case T_WindowClause:
 			retval = _equalWindowClause(a, b);
 			break;
diff --git a/src/backend/nodes/list.c b/src/backend/nodes/list.c
index 94cab47..a6737514 100644
--- a/src/backend/nodes/list.c
+++ b/src/backend/nodes/list.c
@@ -823,6 +823,32 @@ list_intersection(const List *list1, const List *list2)
 }
 
 /*
+ * As list_intersection but operates on lists of integers.
+ */
+List *
+list_intersection_int(const List *list1, const List *list2)
+{
+	List	   *result;
+	const ListCell *cell;
+
+	if (list1 == NIL || list2 == NIL)
+		return NIL;
+
+	Assert(IsIntegerList(list1));
+	Assert(IsIntegerList(list2));
+
+	result = NIL;
+	foreach(cell, list1)
+	{
+		if (list_member_int(list2, lfirst_int(cell)))
+			result = lappend_int(result, lfirst_int(cell));
+	}
+
+	check_list_invariants(result);
+	return result;
+}
+
+/*
  * Return a list that contains all the cells in list1 that are not in
  * list2. The returned list is freshly allocated via palloc(), but the
  * cells themselves point to the same objects as the cells of the
diff --git a/src/backend/nodes/makefuncs.c b/src/backend/nodes/makefuncs.c
index 6fdf44d..a9b58eb 100644
--- a/src/backend/nodes/makefuncs.c
+++ b/src/backend/nodes/makefuncs.c
@@ -554,3 +554,18 @@ makeFuncCall(List *name, List *args, int location)
 	n->location = location;
 	return n;
 }
+
+/*
+ * makeGroupingSet
+ *	  create a GroupingSet node with the given kind, content, and location
+ */
+GroupingSet *
+makeGroupingSet(GroupingSetKind kind, List *content, int location)
+{
+	GroupingSet	   *n = makeNode(GroupingSet);
+
+	n->kind = kind;
+	n->content = content;
+	n->location = location;
+	return n;
+}
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index 21dfda7..0084eb0 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -45,6 +45,9 @@ exprType(const Node *expr)
 		case T_Var:
 			type = ((const Var *) expr)->vartype;
 			break;
+		case T_GroupedVar:
+			type = ((const GroupedVar *) expr)->vartype;
+			break;
 		case T_Const:
 			type = ((const Const *) expr)->consttype;
 			break;
@@ -54,6 +57,9 @@ exprType(const Node *expr)
 		case T_Aggref:
 			type = ((const Aggref *) expr)->aggtype;
 			break;
+		case T_GroupingFunc:
+			type = INT4OID;
+			break;
 		case T_WindowFunc:
 			type = ((const WindowFunc *) expr)->wintype;
 			break;
@@ -261,6 +267,8 @@ exprTypmod(const Node *expr)
 	{
 		case T_Var:
 			return ((const Var *) expr)->vartypmod;
+		case T_GroupedVar:
+			return ((const GroupedVar *) expr)->vartypmod;
 		case T_Const:
 			return ((const Const *) expr)->consttypmod;
 		case T_Param:
@@ -734,6 +742,9 @@ exprCollation(const Node *expr)
 		case T_Var:
 			coll = ((const Var *) expr)->varcollid;
 			break;
+		case T_GroupedVar:
+			coll = ((const GroupedVar *) expr)->varcollid;
+			break;
 		case T_Const:
 			coll = ((const Const *) expr)->constcollid;
 			break;
@@ -743,6 +754,9 @@ exprCollation(const Node *expr)
 		case T_Aggref:
 			coll = ((const Aggref *) expr)->aggcollid;
 			break;
+		case T_GroupingFunc:
+			coll = InvalidOid;
+			break;
 		case T_WindowFunc:
 			coll = ((const WindowFunc *) expr)->wincollid;
 			break;
@@ -967,6 +981,9 @@ exprSetCollation(Node *expr, Oid collation)
 		case T_Var:
 			((Var *) expr)->varcollid = collation;
 			break;
+		case T_GroupedVar:
+			((GroupedVar *) expr)->varcollid = collation;
+			break;
 		case T_Const:
 			((Const *) expr)->constcollid = collation;
 			break;
@@ -976,6 +993,9 @@ exprSetCollation(Node *expr, Oid collation)
 		case T_Aggref:
 			((Aggref *) expr)->aggcollid = collation;
 			break;
+		case T_GroupingFunc:
+			Assert(!OidIsValid(collation));
+			break;
 		case T_WindowFunc:
 			((WindowFunc *) expr)->wincollid = collation;
 			break;
@@ -1182,6 +1202,9 @@ exprLocation(const Node *expr)
 		case T_Var:
 			loc = ((const Var *) expr)->location;
 			break;
+		case T_GroupedVar:
+			loc = ((const GroupedVar *) expr)->location;
+			break;
 		case T_Const:
 			loc = ((const Const *) expr)->location;
 			break;
@@ -1192,6 +1215,9 @@ exprLocation(const Node *expr)
 			/* function name should always be the first thing */
 			loc = ((const Aggref *) expr)->location;
 			break;
+		case T_GroupingFunc:
+			loc = ((const GroupingFunc *) expr)->location;
+			break;
 		case T_WindowFunc:
 			/* function name should always be the first thing */
 			loc = ((const WindowFunc *) expr)->location;
@@ -1471,6 +1497,9 @@ exprLocation(const Node *expr)
 			/* XMLSERIALIZE keyword should always be the first thing */
 			loc = ((const XmlSerialize *) expr)->location;
 			break;
+		case T_GroupingSet:
+			loc = ((const GroupingSet *) expr)->location;
+			break;
 		case T_WithClause:
 			loc = ((const WithClause *) expr)->location;
 			break;
@@ -1622,6 +1651,7 @@ expression_tree_walker(Node *node,
 	switch (nodeTag(node))
 	{
 		case T_Var:
+		case T_GroupedVar:
 		case T_Const:
 		case T_Param:
 		case T_CoerceToDomainValue:
@@ -1655,6 +1685,15 @@ expression_tree_walker(Node *node,
 					return true;
 			}
 			break;
+		case T_GroupingFunc:
+			{
+				GroupingFunc *grouping = (GroupingFunc *) node;
+
+				if (expression_tree_walker((Node *) grouping->args,
+										   walker, context))
+					return true;
+			}
+			break;
 		case T_WindowFunc:
 			{
 				WindowFunc *expr = (WindowFunc *) node;
@@ -2144,6 +2183,15 @@ expression_tree_mutator(Node *node,
 				return (Node *) newnode;
 			}
 			break;
+		case T_GroupedVar:
+			{
+				GroupedVar *groupedvar = (GroupedVar *) node;
+				GroupedVar *newnode;
+
+				FLATCOPY(newnode, groupedvar, GroupedVar);
+				return (Node *) newnode;
+			}
+			break;
 		case T_Const:
 			{
 				Const	   *oldnode = (Const *) node;
@@ -2185,6 +2233,29 @@ expression_tree_mutator(Node *node,
 				return (Node *) newnode;
 			}
 			break;
+		case T_GroupingFunc:
+			{
+				GroupingFunc   *grouping = (GroupingFunc *) node;
+				GroupingFunc   *newnode;
+
+				FLATCOPY(newnode, grouping, GroupingFunc);
+				MUTATE(newnode->args, grouping->args, List *);
+
+				/*
+				 * We assume here that mutating the arguments does not change
+				 * the semantics, i.e. that the arguments are not mutated in a
+				 * way that makes them semantically different from their
+				 * previously matching expressions in the GROUP BY clause.
+				 *
+				 * If a mutator somehow wanted to do this, it would have to
+				 * handle the refs and cols lists itself as appropriate.
+				 */
+				newnode->refs = list_copy(grouping->refs);
+				newnode->cols = list_copy(grouping->cols);
+
+				return (Node *) newnode;
+			}
+			break;
 		case T_WindowFunc:
 			{
 				WindowFunc *wfunc = (WindowFunc *) node;
@@ -2870,6 +2941,8 @@ raw_expression_tree_walker(Node *node,
 			break;
 		case T_RangeVar:
 			return walker(((RangeVar *) node)->alias, context);
+		case T_GroupingFunc:
+			return walker(((GroupingFunc *) node)->args, context);
 		case T_SubLink:
 			{
 				SubLink    *sublink = (SubLink *) node;
@@ -3193,6 +3266,8 @@ raw_expression_tree_walker(Node *node,
 				/* for now, constraints are ignored */
 			}
 			break;
+		case T_GroupingSet:
+			return walker(((GroupingSet *) node)->content, context);
 		case T_LockingClause:
 			return walker(((LockingClause *) node)->lockedRels, context);
 		case T_XmlSerialize:
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index dd1278b..c94c952 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -646,6 +646,7 @@ _outAgg(StringInfo str, const Agg *node)
 	_outPlanInfo(str, (const Plan *) node);
 
 	WRITE_ENUM_FIELD(aggstrategy, AggStrategy);
+	WRITE_INT_FIELD(chain_depth);
 	WRITE_INT_FIELD(numCols);
 
 	appendStringInfoString(str, " :grpColIdx");
@@ -657,6 +658,8 @@ _outAgg(StringInfo str, const Agg *node)
 		appendStringInfo(str, " %u", node->grpOperators[i]);
 
 	WRITE_LONG_FIELD(numGroups);
+
+	WRITE_NODE_FIELD(groupingSets);
 }
 
 static void
@@ -926,6 +929,22 @@ _outVar(StringInfo str, const Var *node)
 }
 
 static void
+_outGroupedVar(StringInfo str, const GroupedVar *node)
+{
+	WRITE_NODE_TYPE("GROUPEDVAR");
+
+	WRITE_UINT_FIELD(varno);
+	WRITE_INT_FIELD(varattno);
+	WRITE_OID_FIELD(vartype);
+	WRITE_INT_FIELD(vartypmod);
+	WRITE_OID_FIELD(varcollid);
+	WRITE_UINT_FIELD(varlevelsup);
+	WRITE_UINT_FIELD(varnoold);
+	WRITE_INT_FIELD(varoattno);
+	WRITE_LOCATION_FIELD(location);
+}
+
+static void
 _outConst(StringInfo str, const Const *node)
 {
 	WRITE_NODE_TYPE("CONST");
@@ -980,6 +999,18 @@ _outAggref(StringInfo str, const Aggref *node)
 }
 
 static void
+_outGroupingFunc(StringInfo str, const GroupingFunc *node)
+{
+	WRITE_NODE_TYPE("GROUPINGFUNC");
+
+	WRITE_NODE_FIELD(args);
+	WRITE_NODE_FIELD(refs);
+	WRITE_NODE_FIELD(cols);
+	WRITE_INT_FIELD(agglevelsup);
+	WRITE_LOCATION_FIELD(location);
+}
+
+static void
 _outWindowFunc(StringInfo str, const WindowFunc *node)
 {
 	WRITE_NODE_TYPE("WINDOWFUNC");
@@ -2303,6 +2334,7 @@ _outQuery(StringInfo str, const Query *node)
 	WRITE_NODE_FIELD(withCheckOptions);
 	WRITE_NODE_FIELD(returningList);
 	WRITE_NODE_FIELD(groupClause);
+	WRITE_NODE_FIELD(groupingSets);
 	WRITE_NODE_FIELD(havingQual);
 	WRITE_NODE_FIELD(windowClause);
 	WRITE_NODE_FIELD(distinctClause);
@@ -2337,6 +2369,16 @@ _outSortGroupClause(StringInfo str, const SortGroupClause *node)
 }
 
 static void
+_outGroupingSet(StringInfo str, const GroupingSet *node)
+{
+	WRITE_NODE_TYPE("GROUPINGSET");
+
+	WRITE_ENUM_FIELD(kind, GroupingSetKind);
+	WRITE_NODE_FIELD(content);
+	WRITE_LOCATION_FIELD(location);
+}
+
+static void
 _outWindowClause(StringInfo str, const WindowClause *node)
 {
 	WRITE_NODE_TYPE("WINDOWCLAUSE");
@@ -2950,6 +2992,9 @@ _outNode(StringInfo str, const void *obj)
 			case T_Var:
 				_outVar(str, obj);
 				break;
+			case T_GroupedVar:
+				_outGroupedVar(str, obj);
+				break;
 			case T_Const:
 				_outConst(str, obj);
 				break;
@@ -2959,6 +3004,9 @@ _outNode(StringInfo str, const void *obj)
 			case T_Aggref:
 				_outAggref(str, obj);
 				break;
+			case T_GroupingFunc:
+				_outGroupingFunc(str, obj);
+				break;
 			case T_WindowFunc:
 				_outWindowFunc(str, obj);
 				break;
@@ -3216,6 +3264,9 @@ _outNode(StringInfo str, const void *obj)
 			case T_SortGroupClause:
 				_outSortGroupClause(str, obj);
 				break;
+			case T_GroupingSet:
+				_outGroupingSet(str, obj);
+				break;
 			case T_WindowClause:
 				_outWindowClause(str, obj);
 				break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index ae24d05..4b9f29d 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -216,6 +216,7 @@ _readQuery(void)
 	READ_NODE_FIELD(withCheckOptions);
 	READ_NODE_FIELD(returningList);
 	READ_NODE_FIELD(groupClause);
+	READ_NODE_FIELD(groupingSets);
 	READ_NODE_FIELD(havingQual);
 	READ_NODE_FIELD(windowClause);
 	READ_NODE_FIELD(distinctClause);
@@ -291,6 +292,21 @@ _readSortGroupClause(void)
 }
 
 /*
+ * _readGroupingSet
+ */
+static GroupingSet *
+_readGroupingSet(void)
+{
+	READ_LOCALS(GroupingSet);
+
+	READ_ENUM_FIELD(kind, GroupingSetKind);
+	READ_NODE_FIELD(content);
+	READ_LOCATION_FIELD(location);
+
+	READ_DONE();
+}
+
+/*
  * _readWindowClause
  */
 static WindowClause *
@@ -441,6 +457,27 @@ _readVar(void)
 }
 
 /*
+ * _readGroupedVar
+ */
+static GroupedVar *
+_readGroupedVar(void)
+{
+	READ_LOCALS(GroupedVar);
+
+	READ_UINT_FIELD(varno);
+	READ_INT_FIELD(varattno);
+	READ_OID_FIELD(vartype);
+	READ_INT_FIELD(vartypmod);
+	READ_OID_FIELD(varcollid);
+	READ_UINT_FIELD(varlevelsup);
+	READ_UINT_FIELD(varnoold);
+	READ_INT_FIELD(varoattno);
+	READ_LOCATION_FIELD(location);
+
+	READ_DONE();
+}
+
+/*
  * _readConst
  */
 static Const *
@@ -510,6 +547,23 @@ _readAggref(void)
 }
 
 /*
+ * _readGroupingFunc
+ */
+static GroupingFunc *
+_readGroupingFunc(void)
+{
+	READ_LOCALS(GroupingFunc);
+
+	READ_NODE_FIELD(args);
+	READ_NODE_FIELD(refs);
+	READ_NODE_FIELD(cols);
+	READ_INT_FIELD(agglevelsup);
+	READ_LOCATION_FIELD(location);
+
+	READ_DONE();
+}
+
+/*
  * _readWindowFunc
  */
 static WindowFunc *
@@ -1305,6 +1359,8 @@ parseNodeString(void)
 		return_value = _readWithCheckOption();
 	else if (MATCH("SORTGROUPCLAUSE", 15))
 		return_value = _readSortGroupClause();
+	else if (MATCH("GROUPINGSET", 11))
+		return_value = _readGroupingSet();
 	else if (MATCH("WINDOWCLAUSE", 12))
 		return_value = _readWindowClause();
 	else if (MATCH("ROWMARKCLAUSE", 13))
@@ -1321,12 +1377,16 @@ parseNodeString(void)
 		return_value = _readIntoClause();
 	else if (MATCH("VAR", 3))
 		return_value = _readVar();
+	else if (MATCH("GROUPEDVAR", 10))
+		return_value = _readGroupedVar();
 	else if (MATCH("CONST", 5))
 		return_value = _readConst();
 	else if (MATCH("PARAM", 5))
 		return_value = _readParam();
 	else if (MATCH("AGGREF", 6))
 		return_value = _readAggref();
+	else if (MATCH("GROUPINGFUNC", 12))
+		return_value = _readGroupingFunc();
 	else if (MATCH("WINDOWFUNC", 10))
 		return_value = _readWindowFunc();
 	else if (MATCH("ARRAYREF", 8))
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index 58d78e6..2c05f71 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -1241,6 +1241,7 @@ set_subquery_pathlist(PlannerInfo *root, RelOptInfo *rel,
 	 */
 	if (parse->hasAggs ||
 		parse->groupClause ||
+		parse->groupingSets ||
 		parse->havingQual ||
 		parse->distinctClause ||
 		parse->sortClause ||
@@ -2099,7 +2100,8 @@ subquery_push_qual(Query *subquery, RangeTblEntry *rte, Index rti, Node *qual)
 		 * subquery uses grouping or aggregation, put it in HAVING (since the
 		 * qual really refers to the group-result rows).
 		 */
-		if (subquery->hasAggs || subquery->groupClause || subquery->havingQual)
+		if (subquery->hasAggs || subquery->groupClause ||
+			subquery->groupingSets || subquery->havingQual)
 			subquery->havingQual = make_and_qual(subquery->havingQual, qual);
 		else
 			subquery->jointree->quals =
diff --git a/src/backend/optimizer/plan/analyzejoins.c b/src/backend/optimizer/plan/analyzejoins.c
index 11d3933..fa1de6a 100644
--- a/src/backend/optimizer/plan/analyzejoins.c
+++ b/src/backend/optimizer/plan/analyzejoins.c
@@ -581,6 +581,7 @@ query_supports_distinctness(Query *query)
 {
 	if (query->distinctClause != NIL ||
 		query->groupClause != NIL ||
+		query->groupingSets != NIL ||
 		query->hasAggs ||
 		query->havingQual ||
 		query->setOperations)
@@ -649,10 +650,10 @@ query_is_distinct_for(Query *query, List *colnos, List *opids)
 	}
 
 	/*
-	 * Similarly, GROUP BY guarantees uniqueness if all the grouped columns
-	 * appear in colnos and operator semantics match.
+	 * Similarly, GROUP BY without GROUPING SETS guarantees uniqueness if all
+	 * the grouped columns appear in colnos and operator semantics match.
 	 */
-	if (query->groupClause)
+	if (query->groupClause && !query->groupingSets)
 	{
 		foreach(l, query->groupClause)
 		{
@@ -668,6 +669,27 @@ query_is_distinct_for(Query *query, List *colnos, List *opids)
 		if (l == NULL)			/* had matches for all? */
 			return true;
 	}
+	else if (query->groupingSets)
+	{
+		/*
+		 * If we have grouping sets with expressions, we probably
+		 * don't have uniqueness and analysis would be hard. Punt.
+		 */
+		if (query->groupClause)
+			return false;
+
+		/*
+		 * If we have no groupClause (therefore no grouping expressions),
+		 * we might have one or many empty grouping sets. If there's just
+		 * one, then we're returning only one row and are certainly unique.
+		 * But otherwise, we're certainly not unique.
+		 */
+		if (list_length(query->groupingSets) == 1 &&
+			((GroupingSet *) linitial(query->groupingSets))->kind == GROUPING_SET_EMPTY)
+			return true;
+		else
+			return false;
+	}
 	else
 	{
 		/*
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index 655be81..e5945f9 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -1029,6 +1029,8 @@ create_unique_plan(PlannerInfo *root, UniquePath *best_path)
 								 numGroupCols,
 								 groupColIdx,
 								 groupOperators,
+								 NIL,
+								 NULL,
 								 numGroups,
 								 subplan);
 	}
@@ -4357,6 +4359,7 @@ Agg *
 make_agg(PlannerInfo *root, List *tlist, List *qual,
 		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
 		 int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators,
+		 List *groupingSets, int *chain_depth_p,
 		 long numGroups,
 		 Plan *lefttree)
 {
@@ -4366,6 +4369,7 @@ make_agg(PlannerInfo *root, List *tlist, List *qual,
 	QualCost	qual_cost;
 
 	node->aggstrategy = aggstrategy;
+	node->chain_depth = chain_depth_p ? *chain_depth_p : 0;
 	node->numCols = numGroupCols;
 	node->grpColIdx = grpColIdx;
 	node->grpOperators = grpOperators;
@@ -4386,10 +4390,12 @@ make_agg(PlannerInfo *root, List *tlist, List *qual,
 	 * group otherwise.
 	 */
 	if (aggstrategy == AGG_PLAIN)
-		plan->plan_rows = 1;
+		plan->plan_rows = groupingSets ? list_length(groupingSets) : 1;
 	else
 		plan->plan_rows = numGroups;
 
+	node->groupingSets = groupingSets;
+
 	/*
 	 * We also need to account for the cost of evaluation of the qual (ie, the
 	 * HAVING clause) and the tlist.  Note that cost_qual_eval doesn't charge
@@ -4408,8 +4414,21 @@ make_agg(PlannerInfo *root, List *tlist, List *qual,
 	}
 	add_tlist_costs_to_plan(root, plan, tlist);
 
-	plan->qual = qual;
-	plan->targetlist = tlist;
+	if (aggstrategy == AGG_CHAINED)
+	{
+		Assert(!chain_depth_p);
+		plan->plan_rows = lefttree->plan_rows;
+		plan->plan_width = lefttree->plan_width;
+
+		/* the supplied tlist is ignored; a chained agg exposes its input's tlist */
+		plan->targetlist = lefttree->targetlist;
+		plan->qual = NULL;
+	}
+	else
+	{
+		plan->qual = qual;
+		plan->targetlist = tlist;
+	}
 	plan->lefttree = lefttree;
 	plan->righttree = NULL;
 
diff --git a/src/backend/optimizer/plan/planagg.c b/src/backend/optimizer/plan/planagg.c
index b90c2ef..7d1ea47 100644
--- a/src/backend/optimizer/plan/planagg.c
+++ b/src/backend/optimizer/plan/planagg.c
@@ -96,7 +96,8 @@ preprocess_minmax_aggregates(PlannerInfo *root, List *tlist)
 	 * performs assorted processing related to these features between calling
 	 * preprocess_minmax_aggregates and optimize_minmax_aggregates.)
 	 */
-	if (parse->groupClause || parse->hasWindowFuncs)
+	if (parse->groupClause || list_length(parse->groupingSets) > 1 ||
+		parse->hasWindowFuncs)
 		return;
 
 	/*
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 9cbbcfb..2e69fcb 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -16,12 +16,14 @@
 #include "postgres.h"
 
 #include <limits.h>
+#include <math.h>
 
 #include "access/htup_details.h"
 #include "executor/executor.h"
 #include "executor/nodeAgg.h"
 #include "miscadmin.h"
 #include "nodes/makefuncs.h"
+#include "nodes/nodeFuncs.h"
 #ifdef OPTIMIZER_DEBUG
 #include "nodes/print.h"
 #endif
@@ -37,6 +39,7 @@
 #include "optimizer/tlist.h"
 #include "parser/analyze.h"
 #include "parser/parsetree.h"
+#include "parser/parse_agg.h"
 #include "rewrite/rewriteManip.h"
 #include "utils/rel.h"
 #include "utils/selfuncs.h"
@@ -65,6 +68,7 @@ typedef struct
 {
 	List	   *tlist;			/* preprocessed query targetlist */
 	List	   *activeWindows;	/* active windows, if any */
+	List	   *groupClause;	/* overrides parse->groupClause */
 } standard_qp_extra;
 
 /* Local functions */
@@ -77,7 +81,9 @@ static double preprocess_limit(PlannerInfo *root,
 				 double tuple_fraction,
 				 int64 *offset_est, int64 *count_est);
 static bool limit_needed(Query *parse);
-static void preprocess_groupclause(PlannerInfo *root);
+static List *preprocess_groupclause(PlannerInfo *root, List *force);
+static List *extract_rollup_sets(List *groupingSets);
+static List *reorder_grouping_sets(List *groupingSets, List *sortclause);
 static void standard_qp_callback(PlannerInfo *root, void *extra);
 static bool choose_hashed_grouping(PlannerInfo *root,
 					   double tuple_fraction, double limit_tuples,
@@ -317,6 +323,8 @@ subquery_planner(PlannerGlobal *glob, Query *parse,
 	root->append_rel_list = NIL;
 	root->rowMarks = NIL;
 	root->hasInheritedTarget = false;
+	root->groupColIdx = NULL;
+	root->grouping_map = NULL;
 
 	root->hasRecursion = hasRecursion;
 	if (hasRecursion)
@@ -533,7 +541,8 @@ subquery_planner(PlannerGlobal *glob, Query *parse,
 
 		if (contain_agg_clause(havingclause) ||
 			contain_volatile_functions(havingclause) ||
-			contain_subplans(havingclause))
+			contain_subplans(havingclause) ||
+			parse->groupingSets)
 		{
 			/* keep it in HAVING */
 			newHaving = lappend(newHaving, havingclause);
@@ -1176,11 +1185,6 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		List	   *sub_tlist;
 		AttrNumber *groupColIdx = NULL;
 		bool		need_tlist_eval = true;
-		standard_qp_extra qp_extra;
-		RelOptInfo *final_rel;
-		Path	   *cheapest_path;
-		Path	   *sorted_path;
-		Path	   *best_path;
 		long		numGroups = 0;
 		AggClauseCosts agg_costs;
 		int			numGroupCols;
@@ -1189,15 +1193,90 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		bool		use_hashed_grouping = false;
 		WindowFuncLists *wflists = NULL;
 		List	   *activeWindows = NIL;
+		int			maxref = 0;
+		List	   *refmaps = NIL;
+		List	   *rollup_lists = NIL;
+		List	   *rollup_groupclauses = NIL;
+		standard_qp_extra qp_extra;
+		RelOptInfo *final_rel;
+		Path	   *cheapest_path;
+		Path	   *sorted_path;
+		Path	   *best_path;
 
 		MemSet(&agg_costs, 0, sizeof(AggClauseCosts));
 
 		/* A recursive query should always have setOperations */
 		Assert(!root->hasRecursion);
 
-		/* Preprocess GROUP BY clause, if any */
+		/* Preprocess grouping sets, if any */
+		if (parse->groupingSets)
+			parse->groupingSets = expand_grouping_sets(parse->groupingSets, -1);
+
 		if (parse->groupClause)
-			preprocess_groupclause(root);
+		{
+			ListCell   *lc;
+
+			foreach(lc, parse->groupClause)
+			{
+				SortGroupClause *gc = lfirst(lc);
+				if (gc->tleSortGroupRef > maxref)
+					maxref = gc->tleSortGroupRef;
+			}
+		}
+
+		if (parse->groupingSets)
+		{
+			ListCell   *lc;
+			ListCell   *lc2;
+			ListCell   *lc_set;
+			List	   *sets = extract_rollup_sets(parse->groupingSets);
+
+			foreach(lc_set, sets)
+			{
+				List   *current_sets = reorder_grouping_sets(lfirst(lc_set),
+													(list_length(sets) == 1
+													 ? parse->sortClause
+													 : NIL));
+				List   *groupclause = preprocess_groupclause(root, linitial(current_sets));
+				int		ref = 0;
+				int	   *refmap;
+
+				/*
+				 * Now that we've pinned down an order for the groupClause for this
+				 * list of grouping sets, remap the entries in the grouping sets
+				 * from sortgrouprefs to plain indices into the groupClause.
+				 */
+
+				refmap = palloc0(sizeof(int) * (maxref + 1));
+
+				foreach(lc, groupclause)
+				{
+					SortGroupClause *gc = lfirst(lc);
+					refmap[gc->tleSortGroupRef] = ++ref;
+				}
+
+				foreach(lc, current_sets)
+				{
+					foreach(lc2, (List *) lfirst(lc))
+					{
+						Assert(refmap[lfirst_int(lc2)] > 0);
+						lfirst_int(lc2) = refmap[lfirst_int(lc2)] - 1;
+					}
+				}
+
+				rollup_lists = lcons(current_sets, rollup_lists);
+				rollup_groupclauses = lcons(groupclause, rollup_groupclauses);
+				refmaps = lcons(refmap, refmaps);
+			}
+		}
+		else
+		{
+			/* Preprocess GROUP BY clause, if any */
+			if (parse->groupClause)
+				parse->groupClause = preprocess_groupclause(root, NIL);
+			rollup_groupclauses = list_make1(parse->groupClause);
+		}
+
 		numGroupCols = list_length(parse->groupClause);
 
 		/* Preprocess targetlist */
@@ -1270,6 +1349,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		 * grouping/aggregation operations.
 		 */
 		if (parse->groupClause ||
+			parse->groupingSets ||
 			parse->distinctClause ||
 			parse->hasAggs ||
 			parse->hasWindowFuncs ||
@@ -1281,6 +1361,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		/* Set up data needed by standard_qp_callback */
 		qp_extra.tlist = tlist;
 		qp_extra.activeWindows = activeWindows;
+		qp_extra.groupClause = linitial(rollup_groupclauses);
 
 		/*
 		 * Generate the best unsorted and presorted paths for this Query (but
@@ -1307,15 +1388,46 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		 * to describe the fraction of the underlying un-aggregated tuples
 		 * that will be fetched.
 		 */
+
 		dNumGroups = 1;			/* in case not grouping */
 
 		if (parse->groupClause)
 		{
 			List	   *groupExprs;
 
-			groupExprs = get_sortgrouplist_exprs(parse->groupClause,
-												 parse->targetList);
-			dNumGroups = estimate_num_groups(root, groupExprs, path_rows);
+			if (parse->groupingSets)
+			{
+				ListCell   *lc,
+						   *lc2;
+
+				dNumGroups = 0;
+
+				forboth(lc, rollup_groupclauses, lc2, rollup_lists)
+				{
+					ListCell   *lc3;
+
+					groupExprs = get_sortgrouplist_exprs(lfirst(lc),
+														 parse->targetList);
+
+					foreach(lc3, lfirst(lc2))
+					{
+						List   *gset = lfirst(lc3);
+
+						dNumGroups += estimate_num_groups(root,
+														  groupExprs,
+														  path_rows,
+														  &gset);
+					}
+				}
+			}
+			else
+			{
+				groupExprs = get_sortgrouplist_exprs(parse->groupClause,
+													 parse->targetList);
+
+				dNumGroups = estimate_num_groups(root, groupExprs, path_rows,
+												 NULL);
+			}
 
 			/*
 			 * In GROUP BY mode, an absolute LIMIT is relative to the number
@@ -1326,6 +1438,9 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 			if (tuple_fraction >= 1.0)
 				tuple_fraction /= dNumGroups;
 
+			if (list_length(rollup_lists) > 1)
+				tuple_fraction = 0.0;
+
 			/*
 			 * If both GROUP BY and ORDER BY are specified, we will need two
 			 * levels of sort --- and, therefore, certainly need to read all
@@ -1341,14 +1456,17 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 									   root->group_pathkeys))
 				tuple_fraction = 0.0;
 		}
-		else if (parse->hasAggs || root->hasHavingQual)
+		else if (parse->hasAggs || root->hasHavingQual || parse->groupingSets)
 		{
 			/*
 			 * Ungrouped aggregate will certainly want to read all the tuples,
-			 * and it will deliver a single result row (so leave dNumGroups
-			 * set to 1).
+			 * and it will deliver one result row per grouping set (or just
+			 * one row if no grouping sets were explicitly given, in which
+			 * case dNumGroups stays at 1).
 			 */
 			tuple_fraction = 0.0;
+			if (parse->groupingSets)
+				dNumGroups = list_length(parse->groupingSets);
 		}
 		else if (parse->distinctClause)
 		{
@@ -1363,7 +1481,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 
 			distinctExprs = get_sortgrouplist_exprs(parse->distinctClause,
 													parse->targetList);
-			dNumGroups = estimate_num_groups(root, distinctExprs, path_rows);
+			dNumGroups = estimate_num_groups(root, distinctExprs, path_rows, NULL);
 
 			/*
 			 * Adjust tuple_fraction the same way as for GROUP BY, too.
@@ -1446,13 +1564,24 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		{
 			/*
 			 * If grouping, decide whether to use sorted or hashed grouping.
+			 * If grouping sets are present, we can currently do only sorted
+			 * grouping.
 			 */
-			use_hashed_grouping =
-				choose_hashed_grouping(root,
-									   tuple_fraction, limit_tuples,
-									   path_rows, path_width,
-									   cheapest_path, sorted_path,
-									   dNumGroups, &agg_costs);
+
+			if (parse->groupingSets)
+			{
+				use_hashed_grouping = false;
+			}
+			else
+			{
+				use_hashed_grouping =
+					choose_hashed_grouping(root,
+										   tuple_fraction, limit_tuples,
+										   path_rows, path_width,
+										   cheapest_path, sorted_path,
+										   dNumGroups, &agg_costs);
+			}
+
 			/* Also convert # groups to long int --- but 'ware overflow! */
 			numGroups = (long) Min(dNumGroups, (double) LONG_MAX);
 		}
@@ -1518,7 +1647,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 
 			/* Detect if we'll need an explicit sort for grouping */
 			if (parse->groupClause && !use_hashed_grouping &&
-			  !pathkeys_contained_in(root->group_pathkeys, current_pathkeys))
+				!pathkeys_contained_in(root->group_pathkeys, current_pathkeys))
 			{
 				need_sort_for_grouping = true;
 
@@ -1593,52 +1722,118 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 												&agg_costs,
 												numGroupCols,
 												groupColIdx,
-									extract_grouping_ops(parse->groupClause),
+												extract_grouping_ops(parse->groupClause),
+												NIL,
+												NULL,
 												numGroups,
 												result_plan);
 				/* Hashed aggregation produces randomly-ordered results */
 				current_pathkeys = NIL;
 			}
-			else if (parse->hasAggs)
+			else if (parse->hasAggs || (parse->groupingSets && parse->groupClause))
 			{
-				/* Plain aggregate plan --- sort if needed */
-				AggStrategy aggstrategy;
+				int			chain_depth = 0;
 
-				if (parse->groupClause)
+				/*
+				 * If we need multiple grouping nodes, start stacking them up;
+				 * all except the last are chained.
+				 */
+
+				do
 				{
-					if (need_sort_for_grouping)
+					List	   *groupClause = linitial(rollup_groupclauses);
+					List	   *gsets = rollup_lists ? linitial(rollup_lists) : NIL;
+					int		   *refmap = refmaps ? linitial(refmaps) : NULL;
+					AttrNumber *new_grpColIdx = groupColIdx;
+					ListCell   *lc;
+					int			i;
+					AggStrategy aggstrategy = AGG_CHAINED;
+
+					if (groupClause)
+					{
+						if (gsets)
+						{
+							Assert(refmap);
+
+							/*
+							 * We need to remap groupColIdx, which holds the
+							 * column indices for every clause in
+							 * parse->groupClause indexed by list position, to
+							 * a local version for this node that lists only
+							 * the clauses included in groupClause, by position
+							 * in that list.  The refmap for this node (indexed
+							 * by sortgroupref) contains 0 for clauses not
+							 * present in this node's groupClause.
+							 */
+
+							new_grpColIdx = palloc0(sizeof(AttrNumber) * list_length(linitial(gsets)));
+
+							i = 0;
+							foreach(lc, parse->groupClause)
+							{
+								int j = refmap[((SortGroupClause *)lfirst(lc))->tleSortGroupRef];
+								if (j > 0)
+									new_grpColIdx[j - 1] = groupColIdx[i];
+								++i;
+							}
+						}
+
+						if (need_sort_for_grouping)
+						{
+							result_plan = (Plan *)
+								make_sort_from_groupcols(root,
+														 groupClause,
+														 new_grpColIdx,
+														 result_plan);
+						}
+						else
+							need_sort_for_grouping = true;
+
+						if (list_length(rollup_groupclauses) == 1)
+						{
+							aggstrategy = AGG_SORTED;
+
+							/*
+							 * If there aren't any other chained aggregates, then
+							 * we didn't disturb the originally required input
+							 * sort order.
+							 */
+							if (chain_depth == 0)
+								current_pathkeys = root->group_pathkeys;
+						}
+						else
+							current_pathkeys = NIL;
+					}
+					else
 					{
-						result_plan = (Plan *)
-							make_sort_from_groupcols(root,
-													 parse->groupClause,
-													 groupColIdx,
-													 result_plan);
-						current_pathkeys = root->group_pathkeys;
+						aggstrategy = AGG_PLAIN;
+						current_pathkeys = NIL;
 					}
-					aggstrategy = AGG_SORTED;
 
-					/*
-					 * The AGG node will not change the sort ordering of its
-					 * groups, so current_pathkeys describes the result too.
-					 */
+					result_plan = (Plan *) make_agg(root,
+													tlist,
+													(List *) parse->havingQual,
+													aggstrategy,
+													&agg_costs,
+													gsets ? list_length(linitial(gsets)) : numGroupCols,
+													new_grpColIdx,
+													extract_grouping_ops(groupClause),
+													gsets,
+													(aggstrategy != AGG_CHAINED) ? &chain_depth : NULL,
+													numGroups,
+													result_plan);
+
+					chain_depth += 1;
+
+					if (refmap)
+						pfree(refmap);
+					if (rollup_lists)
+						rollup_lists = list_delete_first(rollup_lists);
+					if (refmaps)
+						refmaps = list_delete_first(refmaps);
+
+					rollup_groupclauses = list_delete_first(rollup_groupclauses);
 				}
-				else
-				{
-					aggstrategy = AGG_PLAIN;
-					/* Result will be only one row anyway; no sort order */
-					current_pathkeys = NIL;
-				}
-
-				result_plan = (Plan *) make_agg(root,
-												tlist,
-												(List *) parse->havingQual,
-												aggstrategy,
-												&agg_costs,
-												numGroupCols,
-												groupColIdx,
-									extract_grouping_ops(parse->groupClause),
-												numGroups,
-												result_plan);
+				while (rollup_groupclauses);
 			}
 			else if (parse->groupClause)
 			{
@@ -1669,27 +1864,66 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 												  result_plan);
 				/* The Group node won't change sort ordering */
 			}
-			else if (root->hasHavingQual)
+			else if (root->hasHavingQual || parse->groupingSets)
 			{
+				int		nrows = list_length(parse->groupingSets);
+
 				/*
-				 * No aggregates, and no GROUP BY, but we have a HAVING qual.
+				 * No aggregates, and no GROUP BY, but we have a HAVING qual or
+				 * grouping sets (which by elimination of cases above must
+				 * consist solely of empty grouping sets, since otherwise
+				 * groupClause will be non-empty).
+				 *
 				 * This is a degenerate case in which we are supposed to emit
-				 * either 0 or 1 row depending on whether HAVING succeeds.
-				 * Furthermore, there cannot be any variables in either HAVING
-				 * or the targetlist, so we actually do not need the FROM
-				 * table at all!  We can just throw away the plan-so-far and
-				 * generate a Result node.  This is a sufficiently unusual
-				 * corner case that it's not worth contorting the structure of
-				 * this routine to avoid having to generate the plan in the
-				 * first place.
+				 * either 0 or 1 row for each grouping set depending on whether
+				 * HAVING succeeds.  Furthermore, there cannot be any variables
+				 * in either HAVING or the targetlist, so we actually do not
+				 * need the FROM table at all!  We can just throw away the
+				 * plan-so-far and generate a Result node.  This is a
+				 * sufficiently unusual corner case that it's not worth
+				 * contorting the structure of this routine to avoid having to
+				 * generate the plan in the first place.
 				 */
 				result_plan = (Plan *) make_result(root,
 												   tlist,
 												   parse->havingQual,
 												   NULL);
+
+				/*
+				 * Doesn't seem worthwhile writing code to cons up a
+				 * generate_series or a values scan to emit multiple rows.
+				 * Instead just clone the result in an Append.
+				 */
+				if (nrows > 1)
+				{
+					List   *plans = list_make1(result_plan);
+
+					while (--nrows > 0)
+						plans = lappend(plans, copyObject(result_plan));
+
+					result_plan = (Plan *) make_append(plans, tlist);
+				}
 			}
 		}						/* end of non-minmax-aggregate case */
 
+		/* Record grouping_map based on final groupColIdx, for setrefs */
+
+		if (parse->groupingSets)
+		{
+			AttrNumber *grouping_map = palloc0(sizeof(AttrNumber) * (maxref + 1));
+			ListCell   *lc;
+			int			i = 0;
+
+			foreach(lc, parse->groupClause)
+			{
+				SortGroupClause *gc = lfirst(lc);
+				grouping_map[gc->tleSortGroupRef] = groupColIdx[i++];
+			}
+
+			root->groupColIdx = groupColIdx;
+			root->grouping_map = grouping_map;
+		}
+
 		/*
 		 * Since each window function could require a different sort order, we
 		 * stack up a WindowAgg node for each window, with sort steps between
@@ -1852,7 +2086,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		 * result was already mostly unique).  If not, use the number of
 		 * distinct-groups calculated previously.
 		 */
-		if (parse->groupClause || root->hasHavingQual || parse->hasAggs)
+		if (parse->groupClause || parse->groupingSets || root->hasHavingQual || parse->hasAggs)
 			dNumDistinctRows = result_plan->plan_rows;
 		else
 			dNumDistinctRows = dNumGroups;
@@ -1893,6 +2127,8 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 								 extract_grouping_cols(parse->distinctClause,
 													result_plan->targetlist),
 								 extract_grouping_ops(parse->distinctClause),
+											NIL,
+											NULL,
 											numDistinctRows,
 											result_plan);
 			/* Hashed aggregation produces randomly-ordered results */
@@ -2526,19 +2762,38 @@ limit_needed(Query *parse)
  *
  * Note: we need no comparable processing of the distinctClause because
  * the parser already enforced that that matches ORDER BY.
+ *
+ * For grouping sets, the order of items is instead forced to agree with that
+ * of the grouping set (and items not in the grouping set are skipped). The
+ * work of ordering the grouping set elements to match the ORDER BY where
+ * possible is done elsewhere.
  */
-static void
-preprocess_groupclause(PlannerInfo *root)
+static List *
+preprocess_groupclause(PlannerInfo *root, List *force)
 {
 	Query	   *parse = root->parse;
-	List	   *new_groupclause;
+	List	   *new_groupclause = NIL;
 	bool		partial_match;
 	ListCell   *sl;
 	ListCell   *gl;
 
+	/* For grouping sets, we need to force the ordering */
+	if (force)
+	{
+		foreach(sl, force)
+		{
+			Index ref = lfirst_int(sl);
+			SortGroupClause *cl = get_sortgroupref_clause(ref, parse->groupClause);
+
+			new_groupclause = lappend(new_groupclause, cl);
+		}
+
+		return new_groupclause;
+	}
+
 	/* If no ORDER BY, nothing useful to do here */
 	if (parse->sortClause == NIL)
-		return;
+		return parse->groupClause;
 
 	/*
 	 * Scan the ORDER BY clause and construct a list of matching GROUP BY
@@ -2546,7 +2801,6 @@ preprocess_groupclause(PlannerInfo *root)
 	 *
 	 * This code assumes that the sortClause contains no duplicate items.
 	 */
-	new_groupclause = NIL;
 	foreach(sl, parse->sortClause)
 	{
 		SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
@@ -2570,7 +2824,7 @@ preprocess_groupclause(PlannerInfo *root)
 
 	/* If no match at all, no point in reordering GROUP BY */
 	if (new_groupclause == NIL)
-		return;
+		return parse->groupClause;
 
 	/*
 	 * Add any remaining GROUP BY items to the new list, but only if we were
@@ -2587,15 +2841,446 @@ preprocess_groupclause(PlannerInfo *root)
 		if (list_member_ptr(new_groupclause, gc))
 			continue;			/* it matched an ORDER BY item */
 		if (partial_match)
-			return;				/* give up, no common sort possible */
+			return parse->groupClause;	/* give up, no common sort possible */
 		if (!OidIsValid(gc->sortop))
-			return;				/* give up, GROUP BY can't be sorted */
+			return parse->groupClause;	/* give up, GROUP BY can't be sorted */
 		new_groupclause = lappend(new_groupclause, gc);
 	}
 
 	/* Success --- install the rearranged GROUP BY list */
 	Assert(list_length(parse->groupClause) == list_length(new_groupclause));
-	parse->groupClause = new_groupclause;
+	return new_groupclause;
+}
+
+
+/*
+ * We want to produce the absolute minimum possible number of lists here to
+ * avoid excess sorts. Fortunately, there is an algorithm for this; the problem
+ * of finding the minimal partition of a poset into chains (which is what we
+ * need, taking the list of grouping sets as a poset ordered by set inclusion)
+ * can be mapped to the problem of finding the maximum cardinality matching on
+ * a bipartite graph, which is solvable in polynomial time with a worst case
+ * of O(n^2.5) and usually much better. Since our N is at most 4096,
+ * we don't need to consider fallbacks to heuristic or approximate methods.
+ * (Planning time for a 12-d cube is under half a second on my modest system
+ * even with optimization off and assertions on.)
+ *
+ * We use the Hopcroft-Karp algorithm for the graph matching; it seems to work
+ * well enough for our purposes.  This implementation is based on pseudocode
+ * found at:
+ *
+ * http://en.wikipedia.org/w/index.php?title=Hopcroft%E2%80%93Karp_algorithm&oldid=593898016
+ *
+ * This implementation uses the same indices for elements of U and V (the two
+ * halves of the graph) because in our case they are always the same size, and
+ * we always know whether an index represents a u or a v. Index 0 is reserved
+ * for the NIL node.
+ */
+
+struct hk_state
+{
+	int			graph_size;		/* size of half the graph plus NIL node */
+	int			matching;
+	short	  **adjacency;		/* adjacency[u] = [n, v1,v2,v3,...,vn] */
+	short	   *pair_uv;		/* pair_uv[u] -> v */
+	short	   *pair_vu;		/* pair_vu[v] -> u */
+	float	   *distance;		/* distance[u], float so we can have +inf */
+	short	   *queue;			/* queue storage for breadth search */
+};
+
+static bool
+hk_breadth_search(struct hk_state *state)
+{
+	int			gsize = state->graph_size;
+	short	   *queue = state->queue;
+	float	   *distance = state->distance;
+	int			qhead = 0;		/* we never enqueue any node more than once */
+	int			qtail = 0;		/* so don't have to worry about wrapping */
+	int			u;
+
+	distance[0] = INFINITY;
+
+	for (u = 1; u < gsize; ++u)
+	{
+		if (state->pair_uv[u] == 0)
+		{
+			distance[u] = 0;
+			queue[qhead++] = u;
+		}
+		else
+			distance[u] = INFINITY;
+	}
+
+	while (qtail < qhead)
+	{
+		u = queue[qtail++];
+
+		if (distance[u] < distance[0])
+		{
+			short  *u_adj = state->adjacency[u];
+			int		i = u_adj ? u_adj[0] : 0;
+
+			for (; i > 0; --i)
+			{
+				int	u_next = state->pair_vu[u_adj[i]];
+
+				if (isinf(distance[u_next]))
+				{
+					distance[u_next] = 1 + distance[u];
+					queue[qhead++] = u_next;
+					Assert(qhead <= gsize+1);
+				}
+			}
+		}
+	}
+
+	return !isinf(distance[0]);
+}
+
+static bool
+hk_depth_search(struct hk_state *state, int u, int depth)
+{
+	float	   *distance = state->distance;
+	short	   *pair_uv = state->pair_uv;
+	short	   *pair_vu = state->pair_vu;
+	short	   *u_adj = state->adjacency[u];
+	int			i = u_adj ? u_adj[0] : 0;
+
+	if (u == 0)
+		return true;
+
+	if ((depth % 8) == 0)
+		check_stack_depth();
+
+	for (; i > 0; --i)
+	{
+		int		v = u_adj[i];
+
+		if (distance[pair_vu[v]] == distance[u] + 1)
+		{
+			if (hk_depth_search(state, pair_vu[v], depth+1))
+			{
+				pair_vu[v] = u;
+				pair_uv[u] = v;
+				return true;
+			}
+		}
+	}
+
+	distance[u] = INFINITY;
+	return false;
+}
+
+static struct hk_state *
+hk_match(int graph_size, short **adjacency)
+{
+	struct hk_state *state = palloc(sizeof(struct hk_state));
+
+	state->graph_size = graph_size;
+	state->matching = 0;
+	state->adjacency = adjacency;
+	state->pair_uv = palloc0(graph_size * sizeof(short));
+	state->pair_vu = palloc0(graph_size * sizeof(short));
+	state->distance = palloc(graph_size * sizeof(float));
+	state->queue = palloc((graph_size + 2) * sizeof(short));
+
+	while (hk_breadth_search(state))
+	{
+		int		u;
+
+		for (u = 1; u < graph_size; ++u)
+			if (state->pair_uv[u] == 0)
+				if (hk_depth_search(state, u, 1))
+					state->matching++;
+
+		CHECK_FOR_INTERRUPTS();		/* just in case */
+	}
+
+	return state;
+}
+
+static void
+hk_free(struct hk_state *state)
+{
+	/* adjacency matrix is treated as owned by the caller */
+	pfree(state->pair_uv);
+	pfree(state->pair_vu);
+	pfree(state->distance);
+	pfree(state->queue);
+	pfree(state);
+}
+
+/*
+ * Extract lists of grouping sets that can be implemented using a single
+ * rollup-type aggregate pass each. Returns a list of lists of grouping sets.
+ *
+ * Input must be sorted with smallest sets first. Result has each sublist
+ * sorted with smallest sets first.
+ */
+
+static List *
+extract_rollup_sets(List *groupingSets)
+{
+	int			num_sets_raw = list_length(groupingSets);
+	int			num_empty = 0;
+	int			num_sets = 0;		/* distinct sets */
+	int			num_chains = 0;
+	List	   *result = NIL;
+	List	  **results;
+	List	  **orig_sets;
+	Bitmapset **set_masks;
+	int		   *chains;
+	short	  **adjacency;
+	short	   *adjacency_buf;
+	struct hk_state *state;
+	int			i;
+	int			j;
+	int			j_size;
+	ListCell   *lc1 = list_head(groupingSets);
+	ListCell   *lc;
+
+	/*
+	 * Start by stripping out empty sets.  The algorithm doesn't require this,
+	 * but the planner currently needs all empty sets to be returned in the
+	 * first list, so we strip them here and add them back after.
+	 */
+
+	while (lc1 && lfirst(lc1) == NIL)
+	{
+		++num_empty;
+		lc1 = lnext(lc1);
+	}
+
+	/* bail out now if it turns out that all we had were empty sets. */
+
+	if (!lc1)
+		return list_make1(groupingSets);
+
+	/*
+	 * We don't strictly need to remove duplicate sets here, but if we
+	 * don't, they tend to become scattered through the result, which is
+	 * a bit confusing (and irritating if we ever decide to optimize them
+	 * out). So we remove them here and add them back after.
+	 *
+	 * For each non-duplicate set, we fill in the following:
+	 *
+	 * orig_sets[i] = list of the original set lists
+	 * set_masks[i] = bitmapset for testing inclusion
+	 * adjacency[i] = array [n, v1, v2, ... vn] of adjacency indices
+	 *
+	 * chains[i] will be the result group this set is assigned to.
+	 *
+	 * We index all of these from 1 rather than 0 because it is convenient
+	 * to leave 0 free for the NIL node in the graph algorithm.
+	 */
+
+	orig_sets = palloc0((num_sets_raw + 1) * sizeof(List*));
+	set_masks = palloc0((num_sets_raw + 1) * sizeof(Bitmapset *));
+	adjacency = palloc0((num_sets_raw + 1) * sizeof(short *));
+	adjacency_buf = palloc((num_sets_raw + 1) * sizeof(short));
+
+	j_size = 0;
+	j = 0;
+	i = 1;
+
+	for_each_cell(lc, lc1)
+	{
+		List	   *candidate = lfirst(lc);
+		Bitmapset  *candidate_set = NULL;
+		ListCell   *lc2;
+		int			dup_of = 0;
+
+		foreach(lc2, candidate)
+		{
+			candidate_set = bms_add_member(candidate_set, lfirst_int(lc2));
+		}
+
+		/* we can only be a dup if we're the same length as a previous set */
+		if (j_size == list_length(candidate))
+		{
+			int		k;
+			for (k = j; k < i; ++k)
+			{
+				if (bms_equal(set_masks[k], candidate_set))
+				{
+					dup_of = k;
+					break;
+				}
+			}
+		}
+		else if (j_size < list_length(candidate))
+		{
+			j_size = list_length(candidate);
+			j = i;
+		}
+
+		if (dup_of > 0)
+		{
+			orig_sets[dup_of] = lappend(orig_sets[dup_of], candidate);
+			bms_free(candidate_set);
+		}
+		else
+		{
+			int		k;
+			int		n_adj = 0;
+
+			orig_sets[i] = list_make1(candidate);
+			set_masks[i] = candidate_set;
+
+			/* fill in adjacency list; no need to compare equal-size sets */
+
+			for (k = j - 1; k > 0; --k)
+			{
+				if (bms_is_subset(set_masks[k], candidate_set))
+					adjacency_buf[++n_adj] = k;
+			}
+
+			if (n_adj > 0)
+			{
+				adjacency_buf[0] = n_adj;
+				adjacency[i] = palloc((n_adj + 1) * sizeof(short));
+				memcpy(adjacency[i], adjacency_buf, (n_adj + 1) * sizeof(short));
+			}
+			else
+				adjacency[i] = NULL;
+
+			++i;
+		}
+	}
+
+	num_sets = i - 1;
+
+	/*
+	 * Apply the matching algorithm to do the work.
+	 */
+
+	state = hk_match(num_sets + 1, adjacency);
+
+	/*
+	 * Now, the state->pair* fields have the info we need to assign sets to
+	 * chains. Two sets (u,v) belong to the same chain if pair_uv[u] = v or
+	 * pair_vu[v] = u (both will be true, but we check both so that we can do
+ * it in one pass).
+	 */
+
+	chains = palloc0((num_sets + 1) * sizeof(int));
+
+	for (i = 1; i <= num_sets; ++i)
+	{
+		int u = state->pair_vu[i];
+		int v = state->pair_uv[i];
+
+		if (u > 0 && u < i)
+			chains[i] = chains[u];
+		else if (v > 0 && v < i)
+			chains[i] = chains[v];
+		else
+			chains[i] = ++num_chains;
+	}
+
+	/* build result lists. */
+
+	results = palloc0((num_chains + 1) * sizeof(List*));
+
+	for (i = 1; i <= num_sets; ++i)
+	{
+		int c = chains[i];
+
+		Assert(c > 0);
+
+		results[c] = list_concat(results[c], orig_sets[i]);
+	}
+
+	/* push any empty sets back on the first list. */
+
+	while (num_empty-- > 0)
+		results[1] = lcons(NIL, results[1]);
+
+	/* make result list */
+
+	for (i = 1; i <= num_chains; ++i)
+		result = lappend(result, results[i]);
+
+	/*
+	 * Free all the things.
+	 *
+	 * (This is over-fussy for small sets but for large sets we could have tied
+	 * up a nontrivial amount of memory.)
+	 */
+
+	hk_free(state);
+	pfree(results);
+	pfree(chains);
+	for (i = 1; i <= num_sets; ++i)
+		if (adjacency[i])
+			pfree(adjacency[i]);
+	pfree(adjacency);
+	pfree(adjacency_buf);
+	pfree(orig_sets);
+	for (i = 1; i <= num_sets; ++i)
+		bms_free(set_masks[i]);
+	pfree(set_masks);
+
+	return result;
+}
+
+/*
+ * Reorder the elements of a list of grouping sets such that they have correct
+ * prefix relationships.
+ *
+ * The input must be ordered with smallest sets first; the result is returned
+ * with largest sets first.
+ *
+ * If we're passed in a sortclause, we follow its order of columns to the
+ * extent possible, to minimize the chance that we add unnecessary sorts.
+ * (We're trying here to ensure that GROUPING SETS ((a,b,c),(c)) ORDER BY c,b,a
+ * gets implemented in one pass.)
+ */
+static List *
+reorder_grouping_sets(List *groupingsets, List *sortclause)
+{
+	ListCell   *lc;
+	ListCell   *lc2;
+	List	   *previous = NIL;
+	List	   *result = NIL;
+
+	foreach(lc, groupingsets)
+	{
+		List   *candidate = lfirst(lc);
+		List   *new_elems = list_difference_int(candidate, previous);
+
+		if (list_length(new_elems) > 0)
+		{
+			while (list_length(sortclause) > list_length(previous))
+			{
+				SortGroupClause *sc = list_nth(sortclause, list_length(previous));
+				int ref = sc->tleSortGroupRef;
+				if (list_member_int(new_elems, ref))
+				{
+					previous = lappend_int(previous, ref);
+					new_elems = list_delete_int(new_elems, ref);
+				}
+				else
+				{
+					/* diverged from the sortclause; give up on it */
+					sortclause = NIL;
+					break;
+				}
+			}
+
+			foreach(lc2, new_elems)
+			{
+				previous = lappend_int(previous, lfirst_int(lc2));
+			}
+		}
+
+		result = lcons(list_copy(previous), result);
+		list_free(new_elems);
+	}
+
+	list_free(previous);
+
+	return result;
 }
 
 /*
@@ -2614,11 +3299,11 @@ standard_qp_callback(PlannerInfo *root, void *extra)
 	 * sortClause is certainly sort-able, but GROUP BY and DISTINCT might not
 	 * be, in which case we just leave their pathkeys empty.
 	 */
-	if (parse->groupClause &&
-		grouping_is_sortable(parse->groupClause))
+	if (qp_extra->groupClause &&
+		grouping_is_sortable(qp_extra->groupClause))
 		root->group_pathkeys =
 			make_pathkeys_for_sortclauses(root,
-										  parse->groupClause,
+										  qp_extra->groupClause,
 										  tlist);
 	else
 		root->group_pathkeys = NIL;
@@ -3043,7 +3728,7 @@ make_subplanTargetList(PlannerInfo *root,
 	 * If we're not grouping or aggregating, there's nothing to do here;
 	 * query_planner should receive the unmodified target list.
 	 */
-	if (!parse->hasAggs && !parse->groupClause && !root->hasHavingQual &&
+	if (!parse->hasAggs && !parse->groupClause && !parse->groupingSets && !root->hasHavingQual &&
 		!parse->hasWindowFuncs)
 	{
 		*need_tlist_eval = true;
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index 7703946..6aa9fc1 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -68,6 +68,12 @@ typedef struct
 	int			rtoffset;
 } fix_upper_expr_context;
 
+typedef struct
+{
+	PlannerInfo *root;
+	Bitmapset   *groupedcols;
+} set_group_vars_context;
+
 /*
  * Check if a Const node is a regclass value.  We accept plain OID too,
  * since a regclass Const will get folded to that type if it's an argument
@@ -134,6 +140,8 @@ static List *set_returning_clause_references(PlannerInfo *root,
 static bool fix_opfuncids_walker(Node *node, void *context);
 static bool extract_query_dependencies_walker(Node *node,
 								  PlannerInfo *context);
+static void set_group_vars(PlannerInfo *root, Agg *agg);
+static Node *set_group_vars_mutator(Node *node, set_group_vars_context *context);
 
 
 /*****************************************************************************
@@ -661,6 +669,17 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
 			}
 			break;
 		case T_Agg:
+			if (((Agg *) plan)->aggstrategy == AGG_CHAINED)
+			{
+				/* chained agg does not evaluate tlist */
+				set_dummy_tlist_references(plan, rtoffset);
+			}
+			else
+			{
+				set_upper_references(root, plan, rtoffset);
+				set_group_vars(root, (Agg *) plan);
+			}
+			break;
 		case T_Group:
 			set_upper_references(root, plan, rtoffset);
 			break;
@@ -1073,6 +1092,7 @@ copyVar(Var *var)
  * We must look up operator opcode info for OpExpr and related nodes,
  * add OIDs from regclass Const nodes into root->glob->relationOids, and
  * add catalog TIDs for user-defined functions into root->glob->invalItems.
+ * We also fill in column index lists for GROUPING() expressions.
  *
  * We assume it's okay to update opcode info in-place.  So this could possibly
  * scribble on the planner's input data structures, but it's OK.
@@ -1136,6 +1156,31 @@ fix_expr_common(PlannerInfo *root, Node *node)
 				lappend_oid(root->glob->relationOids,
 							DatumGetObjectId(con->constvalue));
 	}
+	else if (IsA(node, GroupingFunc))
+	{
+		GroupingFunc *g = (GroupingFunc *) node;
+		AttrNumber *refmap = root->grouping_map;
+
+		/* If there are no grouping sets, we don't need this. */
+
+		Assert(refmap || g->cols == NIL);
+
+		if (refmap)
+		{
+			ListCell   *lc;
+			List	   *cols = NIL;
+
+			foreach(lc, g->refs)
+			{
+				cols = lappend_int(cols, refmap[lfirst_int(lc)]);
+			}
+
+			Assert(!g->cols || equal(cols, g->cols));
+
+			if (!g->cols)
+				g->cols = cols;
+		}
+	}
 }
 
 /*
@@ -1263,6 +1308,98 @@ fix_scan_expr_walker(Node *node, fix_scan_expr_context *context)
 								  (void *) context);
 }
 
+
+/*
+ * set_group_vars
+ *    Modify any Var references in the target list of a non-trivial
+ *    (i.e. contains grouping sets) Agg node to use GroupedVar instead,
+ *    which will conditionally replace them with nulls at runtime.
+ */
+static void
+set_group_vars(PlannerInfo *root, Agg *agg)
+{
+	set_group_vars_context context;
+	AttrNumber *groupColIdx = root->groupColIdx;
+	int			numCols = list_length(root->parse->groupClause);
+	int 		i;
+	Bitmapset  *cols = NULL;
+
+	if (!agg->groupingSets)
+		return;
+
+	if (!groupColIdx)
+	{
+		Assert(numCols == agg->numCols);
+		groupColIdx = agg->grpColIdx;
+	}
+
+	context.root = root;
+
+	for (i = 0; i < numCols; ++i)
+		cols = bms_add_member(cols, groupColIdx[i]);
+
+	context.groupedcols = cols;
+
+	agg->plan.targetlist = (List *) set_group_vars_mutator((Node *) agg->plan.targetlist,
+														   &context);
+	agg->plan.qual = (List *) set_group_vars_mutator((Node *) agg->plan.qual,
+													 &context);
+}
+
+static Node *
+set_group_vars_mutator(Node *node, set_group_vars_context *context)
+{
+	if (node == NULL)
+		return NULL;
+	if (IsA(node, Var))
+	{
+		Var *var = (Var *) node;
+
+		if (var->varno == OUTER_VAR
+			&& bms_is_member(var->varattno, context->groupedcols))
+		{
+			var = copyVar(var);
+			var->xpr.type = T_GroupedVar;
+		}
+
+		return (Node *) var;
+	}
+	else if (IsA(node, Aggref))
+	{
+		/*
+		 * Don't recurse into the arguments or filter of Aggrefs, since they
+		 * see the values prior to grouping.  But do recurse into the direct
+		 * args (of ordered-set aggregates), if any.
+		 */
+
+		if (((Aggref *)node)->aggdirectargs != NIL)
+		{
+			Aggref *newnode = palloc(sizeof(Aggref));
+
+			memcpy(newnode, node, sizeof(Aggref));
+
+			newnode->aggdirectargs
+				= (List *) expression_tree_mutator((Node *) newnode->aggdirectargs,
+												   set_group_vars_mutator,
+												   (void *) context);
+
+			return (Node *) newnode;
+		}
+
+		return node;
+	}
+	else if (IsA(node, GroupingFunc))
+	{
+		/*
+		 * GroupingFuncs don't see the values at all.
+		 */
+		return node;
+	}
+	return expression_tree_mutator(node, set_group_vars_mutator,
+								   (void *) context);
+}
+
+
 /*
  * set_join_references
  *	  Modify the target list and quals of a join node to reference its
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index 78fb6b1..690407c 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -79,7 +79,8 @@ static Node *process_sublinks_mutator(Node *node,
 static Bitmapset *finalize_plan(PlannerInfo *root,
 			  Plan *plan,
 			  Bitmapset *valid_params,
-			  Bitmapset *scan_params);
+			  Bitmapset *scan_params,
+			  Agg *agg_chain_head);
 static bool finalize_primnode(Node *node, finalize_primnode_context *context);
 
 
@@ -336,6 +337,48 @@ replace_outer_agg(PlannerInfo *root, Aggref *agg)
 }
 
 /*
+ * Generate a Param node to replace the given GroupingFunc expression which is
+ * expected to have agglevelsup > 0 (ie, it is not local).
+ */
+static Param *
+replace_outer_grouping(PlannerInfo *root, GroupingFunc *grp)
+{
+	Param	   *retval;
+	PlannerParamItem *pitem;
+	Index		levelsup;
+
+	Assert(grp->agglevelsup > 0 && grp->agglevelsup < root->query_level);
+
+	/* Find the query level the GroupingFunc belongs to */
+	for (levelsup = grp->agglevelsup; levelsup > 0; levelsup--)
+		root = root->parent_root;
+
+	/*
+	 * It does not seem worthwhile to try to match duplicate outer aggs. Just
+	 * make a new slot every time.
+	 */
+	grp = (GroupingFunc *) copyObject(grp);
+	IncrementVarSublevelsUp((Node *) grp, -((int) grp->agglevelsup), 0);
+	Assert(grp->agglevelsup == 0);
+
+	pitem = makeNode(PlannerParamItem);
+	pitem->item = (Node *) grp;
+	pitem->paramId = root->glob->nParamExec++;
+
+	root->plan_params = lappend(root->plan_params, pitem);
+
+	retval = makeNode(Param);
+	retval->paramkind = PARAM_EXEC;
+	retval->paramid = pitem->paramId;
+	retval->paramtype = exprType((Node *) grp);
+	retval->paramtypmod = -1;
+	retval->paramcollid = InvalidOid;
+	retval->location = grp->location;
+
+	return retval;
+}
+
+/*
  * Generate a new Param node that will not conflict with any other.
  *
  * This is used to create Params representing subplan outputs.
@@ -1490,14 +1533,16 @@ simplify_EXISTS_query(PlannerInfo *root, Query *query)
 {
 	/*
 	 * We don't try to simplify at all if the query uses set operations,
-	 * aggregates, modifying CTEs, HAVING, OFFSET, or FOR UPDATE/SHARE; none
-	 * of these seem likely in normal usage and their possible effects are
-	 * complex.  (Note: we could ignore an "OFFSET 0" clause, but that
-	 * traditionally is used as an optimization fence, so we don't.)
+	 * aggregates, grouping sets, modifying CTEs, HAVING, OFFSET, or FOR
+	 * UPDATE/SHARE; none of these seem likely in normal usage and their
+	 * possible effects are complex.  (Note: we could ignore an "OFFSET 0"
+	 * clause, but that traditionally is used as an optimization fence, so we
+	 * don't.)
 	 */
 	if (query->commandType != CMD_SELECT ||
 		query->setOperations ||
 		query->hasAggs ||
+		query->groupingSets ||
 		query->hasWindowFuncs ||
 		query->hasModifyingCTE ||
 		query->havingQual ||
@@ -1847,6 +1892,11 @@ replace_correlation_vars_mutator(Node *node, PlannerInfo *root)
 		if (((Aggref *) node)->agglevelsup > 0)
 			return (Node *) replace_outer_agg(root, (Aggref *) node);
 	}
+	if (IsA(node, GroupingFunc))
+	{
+		if (((GroupingFunc *) node)->agglevelsup > 0)
+			return (Node *) replace_outer_grouping(root, (GroupingFunc *) node);
+	}
 	return expression_tree_mutator(node,
 								   replace_correlation_vars_mutator,
 								   (void *) root);
@@ -2077,7 +2127,7 @@ SS_finalize_plan(PlannerInfo *root, Plan *plan, bool attach_initplans)
 	/*
 	 * Now recurse through plan tree.
 	 */
-	(void) finalize_plan(root, plan, valid_params, NULL);
+	(void) finalize_plan(root, plan, valid_params, NULL, NULL);
 
 	bms_free(valid_params);
 
@@ -2128,7 +2178,7 @@ SS_finalize_plan(PlannerInfo *root, Plan *plan, bool attach_initplans)
  */
 static Bitmapset *
 finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
-			  Bitmapset *scan_params)
+			  Bitmapset *scan_params, Agg *agg_chain_head)
 {
 	finalize_primnode_context context;
 	int			locally_added_param;
@@ -2343,7 +2393,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2359,7 +2410,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2375,7 +2427,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2391,7 +2444,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2407,7 +2461,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2474,8 +2529,30 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 							  &context);
 			break;
 
-		case T_Hash:
 		case T_Agg:
+			{
+				Agg	   *agg = (Agg *) plan;
+
+				if (agg->aggstrategy == AGG_CHAINED)
+				{
+					Assert(agg_chain_head);
+
+					/*
+					 * Our real tlist and qual are the ones in the chain head,
+					 * not the local ones, which are dummies for passthrough.
+					 * Fortunately, finalize_primnode can safely be called
+					 * more than once.
+					 */
+
+					finalize_primnode((Node *) agg_chain_head->plan.targetlist, &context);
+					finalize_primnode((Node *) agg_chain_head->plan.qual, &context);
+				}
+				else if (agg->chain_depth > 0)
+					agg_chain_head = agg;
+			}
+			break;
+
+		case T_Hash:
 		case T_Material:
 		case T_Sort:
 		case T_Unique:
@@ -2492,7 +2569,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 	child_params = finalize_plan(root,
 								 plan->lefttree,
 								 valid_params,
-								 scan_params);
+								 scan_params,
+								 agg_chain_head);
 	context.paramids = bms_add_members(context.paramids, child_params);
 
 	if (nestloop_params)
@@ -2501,7 +2579,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 		child_params = finalize_plan(root,
 									 plan->righttree,
 									 bms_union(nestloop_params, valid_params),
-									 scan_params);
+									 scan_params,
+									 agg_chain_head);
 		/* ... and they don't count as parameters used at my level */
 		child_params = bms_difference(child_params, nestloop_params);
 		bms_free(nestloop_params);
@@ -2512,7 +2591,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 		child_params = finalize_plan(root,
 									 plan->righttree,
 									 valid_params,
-									 scan_params);
+									 scan_params,
+									 agg_chain_head);
 	}
 	context.paramids = bms_add_members(context.paramids, child_params);
 
diff --git a/src/backend/optimizer/prep/prepjointree.c b/src/backend/optimizer/prep/prepjointree.c
index 8a0199b..00ae12c 100644
--- a/src/backend/optimizer/prep/prepjointree.c
+++ b/src/backend/optimizer/prep/prepjointree.c
@@ -1297,6 +1297,7 @@ is_simple_subquery(Query *subquery, RangeTblEntry *rte,
 	if (subquery->hasAggs ||
 		subquery->hasWindowFuncs ||
 		subquery->groupClause ||
+		subquery->groupingSets ||
 		subquery->havingQual ||
 		subquery->sortClause ||
 		subquery->distinctClause ||
diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c
index 05f601e..01d1af7 100644
--- a/src/backend/optimizer/prep/prepunion.c
+++ b/src/backend/optimizer/prep/prepunion.c
@@ -268,13 +268,15 @@ recurse_set_operations(Node *setOp, PlannerInfo *root,
 		 */
 		if (pNumGroups)
 		{
-			if (subquery->groupClause || subquery->distinctClause ||
+			if (subquery->groupClause || subquery->groupingSets ||
+				subquery->distinctClause ||
 				subroot->hasHavingQual || subquery->hasAggs)
 				*pNumGroups = subplan->plan_rows;
 			else
 				*pNumGroups = estimate_num_groups(subroot,
 								get_tlist_exprs(subquery->targetList, false),
-												  subplan->plan_rows);
+												  subplan->plan_rows,
+												  NULL);
 		}
 
 		/*
@@ -771,6 +773,8 @@ make_union_unique(SetOperationStmt *op, Plan *plan,
 								 extract_grouping_cols(groupList,
 													   plan->targetlist),
 								 extract_grouping_ops(groupList),
+								 NIL,
+								 NULL,
 								 numGroups,
 								 plan);
 		/* Hashed aggregation produces randomly-ordered results */
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index b340b01..08f52c8 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -4304,6 +4304,7 @@ inline_function(Oid funcid, Oid result_type, Oid result_collid,
 		querytree->jointree->fromlist ||
 		querytree->jointree->quals ||
 		querytree->groupClause ||
+		querytree->groupingSets ||
 		querytree->havingQual ||
 		querytree->windowClause ||
 		querytree->distinctClause ||
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index 1395a21..e88f728 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1338,7 +1338,7 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
 	}
 
 	/* Estimate number of output rows */
-	pathnode->path.rows = estimate_num_groups(root, uniq_exprs, rel->rows);
+	pathnode->path.rows = estimate_num_groups(root, uniq_exprs, rel->rows, NULL);
 	numCols = list_length(uniq_exprs);
 
 	if (all_btree)
diff --git a/src/backend/optimizer/util/tlist.c b/src/backend/optimizer/util/tlist.c
index a1a504b..f702b8c 100644
--- a/src/backend/optimizer/util/tlist.c
+++ b/src/backend/optimizer/util/tlist.c
@@ -395,6 +395,28 @@ get_sortgrouplist_exprs(List *sgClauses, List *targetList)
  *****************************************************************************/
 
 /*
+ * get_sortgroupref_clause
+ *		Find the SortGroupClause matching the given SortGroupRef index,
+ *		and return it.
+ */
+SortGroupClause *
+get_sortgroupref_clause(Index sortref, List *clauses)
+{
+	ListCell   *l;
+
+	foreach(l, clauses)
+	{
+		SortGroupClause *cl = (SortGroupClause *) lfirst(l);
+
+		if (cl->tleSortGroupRef == sortref)
+			return cl;
+	}
+
+	elog(ERROR, "ORDER/GROUP BY expression not found in list");
+	return NULL;				/* keep compiler quiet */
+}
+
+/*
  * extract_grouping_ops - make an array of the equality operator OIDs
  *		for a SortGroupClause list
  */
diff --git a/src/backend/optimizer/util/var.c b/src/backend/optimizer/util/var.c
index 8f86432..0f25539 100644
--- a/src/backend/optimizer/util/var.c
+++ b/src/backend/optimizer/util/var.c
@@ -564,6 +564,30 @@ pull_var_clause_walker(Node *node, pull_var_clause_context *context)
 				break;
 		}
 	}
+	else if (IsA(node, GroupingFunc))
+	{
+		if (((GroupingFunc *) node)->agglevelsup != 0)
+			elog(ERROR, "upper-level GROUPING found where not expected");
+		switch (context->aggbehavior)
+		{
+			case PVC_REJECT_AGGREGATES:
+				elog(ERROR, "GROUPING found where not expected");
+				break;
+			case PVC_INCLUDE_AGGREGATES:
+				context->varlist = lappend(context->varlist, node);
+				/* we do NOT descend into the contained expression */
+				return false;
+			case PVC_RECURSE_AGGREGATES:
+				/*
+				 * we do NOT descend into the contained expression,
+				 * even if the caller asked for it, because we never
+				 * actually evaluate it - the result is driven entirely
+				 * off the associated GROUP BY clause, so we never need
+				 * to extract the actual Vars here.
+				 */
+				return false;
+		}
+	}
 	else if (IsA(node, PlaceHolderVar))
 	{
 		if (((PlaceHolderVar *) node)->phlevelsup != 0)
diff --git a/src/backend/parser/analyze.c b/src/backend/parser/analyze.c
index a68f2e8..fe93b87 100644
--- a/src/backend/parser/analyze.c
+++ b/src/backend/parser/analyze.c
@@ -964,6 +964,7 @@ transformSelectStmt(ParseState *pstate, SelectStmt *stmt)
 
 	qry->groupClause = transformGroupClause(pstate,
 											stmt->groupClause,
+											&qry->groupingSets,
 											&qry->targetList,
 											qry->sortClause,
 											EXPR_KIND_GROUP_BY,
@@ -1010,7 +1011,7 @@ transformSelectStmt(ParseState *pstate, SelectStmt *stmt)
 	qry->hasSubLinks = pstate->p_hasSubLinks;
 	qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
 	qry->hasAggs = pstate->p_hasAggs;
-	if (pstate->p_hasAggs || qry->groupClause || qry->havingQual)
+	if (pstate->p_hasAggs || qry->groupClause || qry->groupingSets || qry->havingQual)
 		parseCheckAggregates(pstate, qry);
 
 	foreach(l, stmt->lockingClause)
@@ -1470,7 +1471,7 @@ transformSetOperationStmt(ParseState *pstate, SelectStmt *stmt)
 	qry->hasSubLinks = pstate->p_hasSubLinks;
 	qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
 	qry->hasAggs = pstate->p_hasAggs;
-	if (pstate->p_hasAggs || qry->groupClause || qry->havingQual)
+	if (pstate->p_hasAggs || qry->groupClause || qry->groupingSets || qry->havingQual)
 		parseCheckAggregates(pstate, qry);
 
 	foreach(l, lockingClause)
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 679e1bb..cb53fc0 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -366,6 +366,10 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				create_generic_options alter_generic_options
 				relation_expr_list dostmt_opt_list
 
+%type <list>	group_by_list
+%type <node>	group_by_item empty_grouping_set rollup_clause cube_clause
+%type <node>	grouping_sets_clause
+
 %type <list>	opt_fdw_options fdw_options
 %type <defelt>	fdw_option
 
@@ -431,7 +435,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <list>	ExclusionConstraintList ExclusionConstraintElem
 %type <list>	func_arg_list
 %type <node>	func_arg_expr
-%type <list>	row type_list array_expr_list
+%type <list>	row explicit_row implicit_row type_list array_expr_list
 %type <node>	case_expr case_arg when_clause case_default
 %type <list>	when_clause_list
 %type <ival>	sub_type
@@ -553,7 +557,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	CLUSTER COALESCE COLLATE COLLATION COLUMN COMMENT COMMENTS COMMIT
 	COMMITTED CONCURRENTLY CONFIGURATION CONNECTION CONSTRAINT CONSTRAINTS
 	CONTENT_P CONTINUE_P CONVERSION_P COPY COST CREATE
-	CROSS CSV CURRENT_P
+	CROSS CSV CUBE CURRENT_P
 	CURRENT_CATALOG CURRENT_DATE CURRENT_ROLE CURRENT_SCHEMA
 	CURRENT_TIME CURRENT_TIMESTAMP CURRENT_USER CURSOR CYCLE
 
@@ -568,7 +572,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	FALSE_P FAMILY FETCH FILTER FIRST_P FLOAT_P FOLLOWING FOR
 	FORCE FOREIGN FORWARD FREEZE FROM FULL FUNCTION FUNCTIONS
 
-	GLOBAL GRANT GRANTED GREATEST GROUP_P
+	GLOBAL GRANT GRANTED GREATEST GROUP_P GROUPING
 
 	HANDLER HAVING HEADER_P HOLD HOUR_P
 
@@ -602,11 +606,11 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 	RANGE READ REAL REASSIGN RECHECK RECURSIVE REF REFERENCES REFRESH REINDEX
 	RELATIVE_P RELEASE RENAME REPEATABLE REPLACE REPLICA
-	RESET RESTART RESTRICT RETURNING RETURNS REVOKE RIGHT ROLE ROLLBACK
+	RESET RESTART RESTRICT RETURNING RETURNS REVOKE RIGHT ROLE ROLLBACK ROLLUP
 	ROW ROWS RULE
 
 	SAVEPOINT SCHEMA SCROLL SEARCH SECOND_P SECURITY SELECT SEQUENCE SEQUENCES
-	SERIALIZABLE SERVER SESSION SESSION_USER SET SETOF SHARE
+	SERIALIZABLE SERVER SESSION SESSION_USER SET SETS SETOF SHARE
 	SHOW SIMILAR SIMPLE SKIP SMALLINT SNAPSHOT SOME STABLE STANDALONE_P START
 	STATEMENT STATISTICS STDIN STDOUT STORAGE STRICT_P STRIP_P SUBSTRING
 	SYMMETRIC SYSID SYSTEM_P
@@ -664,6 +668,11 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
  * and for NULL so that it can follow b_expr in ColQualList without creating
  * postfix-operator problems.
  *
+ * To support CUBE and ROLLUP in GROUP BY without reserving them, we give them
+ * an explicit precedence lower than '(', so that a rule with CUBE '(' will shift
+ * rather than reducing a conflicting rule that takes CUBE as a function name.
+ * Using the same precedence as IDENT seems right for the reasons given above.
+ *
  * The frame_bound productions UNBOUNDED PRECEDING and UNBOUNDED FOLLOWING
  * are even messier: since UNBOUNDED is an unreserved keyword (per spec!),
  * there is no principled way to distinguish these from the productions
@@ -674,7 +683,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
  * blame any funny behavior of UNBOUNDED on the SQL standard, though.
  */
 %nonassoc	UNBOUNDED		/* ideally should have same precedence as IDENT */
-%nonassoc	IDENT NULL_P PARTITION RANGE ROWS PRECEDING FOLLOWING
+%nonassoc	IDENT NULL_P PARTITION RANGE ROWS PRECEDING FOLLOWING CUBE ROLLUP
 %left		Op OPERATOR		/* multi-character ops and user-defined operators */
 %nonassoc	NOTNULL
 %nonassoc	ISNULL
@@ -10136,11 +10145,79 @@ first_or_next: FIRST_P								{ $$ = 0; }
 		;
 
 
+/*
+ * This syntax for group_clause tries to follow the spec quite closely.
+ * However, the spec allows only column references, not expressions,
+ * which introduces an ambiguity between implicit row constructors
+ * (a,b) and lists of column references.
+ *
+ * We handle this by using the a_expr production for what the spec calls
+ * <ordinary grouping set>, which in the spec represents either one column
+ * reference or a parenthesized list of column references. Then, we check the
+ * top node of the a_expr to see if it's an implicit RowExpr, and if so, just
+ * grab and use the list, discarding the node.  (This is done in parse
+ * analysis, not here.)
+ *
+ * (We abuse the row_format field of RowExpr to distinguish implicit from
+ * explicit row constructors; it's debatable whether anyone sanely wants to
+ * use them in a group clause, but if they have a reason to, we make it possible.)
+ *
+ * Each item in the group_clause list is either an expression tree or a
+ * GroupingSet node of some type.
+ */
+
 group_clause:
-			GROUP_P BY expr_list					{ $$ = $3; }
+			GROUP_P BY group_by_list				{ $$ = $3; }
 			| /*EMPTY*/								{ $$ = NIL; }
 		;
 
+group_by_list:
+			group_by_item							{ $$ = list_make1($1); }
+			| group_by_list ',' group_by_item		{ $$ = lappend($1, $3); }
+		;
+
+group_by_item:
+			a_expr									{ $$ = $1; }
+			| empty_grouping_set					{ $$ = $1; }
+			| cube_clause							{ $$ = $1; }
+			| rollup_clause							{ $$ = $1; }
+			| grouping_sets_clause					{ $$ = $1; }
+		;
+
+empty_grouping_set:
+			'(' ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_EMPTY, NIL, @1);
+				}
+		;
+
+/*
+ * These hacks rely on setting precedence of CUBE and ROLLUP below that of '(',
+ * so that they shift in these rules rather than reducing the conflicting
+ * unreserved_keyword rule.
+ */
+
+rollup_clause:
+			ROLLUP '(' expr_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_ROLLUP, $3, @1);
+				}
+		;
+
+cube_clause:
+			CUBE '(' expr_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_CUBE, $3, @1);
+				}
+		;
+
+grouping_sets_clause:
+			GROUPING SETS '(' group_by_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_SETS, $4, @1);
+				}
+		;
+
 having_clause:
 			HAVING a_expr							{ $$ = $2; }
 			| /*EMPTY*/								{ $$ = NULL; }
@@ -11709,15 +11786,33 @@ c_expr:		columnref								{ $$ = $1; }
 					n->location = @1;
 					$$ = (Node *)n;
 				}
-			| row
+			| explicit_row
 				{
 					RowExpr *r = makeNode(RowExpr);
 					r->args = $1;
 					r->row_typeid = InvalidOid;	/* not analyzed yet */
 					r->colnames = NIL;	/* to be filled in during analysis */
+					r->row_format = COERCE_EXPLICIT_CALL; /* abuse */
 					r->location = @1;
 					$$ = (Node *)r;
 				}
+			| implicit_row
+				{
+					RowExpr *r = makeNode(RowExpr);
+					r->args = $1;
+					r->row_typeid = InvalidOid;	/* not analyzed yet */
+					r->colnames = NIL;	/* to be filled in during analysis */
+					r->row_format = COERCE_IMPLICIT_CAST; /* abuse */
+					r->location = @1;
+					$$ = (Node *)r;
+				}
+			| GROUPING '(' expr_list ')'
+			  {
+				  GroupingFunc *g = makeNode(GroupingFunc);
+				  g->args = $3;
+				  g->location = @1;
+				  $$ = (Node *)g;
+			  }
 		;
 
 func_application: func_name '(' ')'
@@ -12467,6 +12562,13 @@ row:		ROW '(' expr_list ')'					{ $$ = $3; }
 			| '(' expr_list ',' a_expr ')'			{ $$ = lappend($2, $4); }
 		;
 
+explicit_row:	ROW '(' expr_list ')'				{ $$ = $3; }
+			| ROW '(' ')'							{ $$ = NIL; }
+		;
+
+implicit_row:	'(' expr_list ',' a_expr ')'		{ $$ = lappend($2, $4); }
+		;
+
 sub_type:	ANY										{ $$ = ANY_SUBLINK; }
 			| SOME									{ $$ = ANY_SUBLINK; }
 			| ALL									{ $$ = ALL_SUBLINK; }
@@ -13196,6 +13298,7 @@ unreserved_keyword:
 			| COPY
 			| COST
 			| CSV
+			| CUBE
 			| CURRENT_P
 			| CURSOR
 			| CYCLE
@@ -13344,6 +13447,7 @@ unreserved_keyword:
 			| REVOKE
 			| ROLE
 			| ROLLBACK
+			| ROLLUP
 			| ROWS
 			| RULE
 			| SAVEPOINT
@@ -13358,6 +13462,7 @@ unreserved_keyword:
 			| SERVER
 			| SESSION
 			| SET
+			| SETS
 			| SHARE
 			| SHOW
 			| SIMPLE
@@ -13441,6 +13546,7 @@ col_name_keyword:
 			| EXTRACT
 			| FLOAT_P
 			| GREATEST
+			| GROUPING
 			| INOUT
 			| INT_P
 			| INTEGER
diff --git a/src/backend/parser/parse_agg.c b/src/backend/parser/parse_agg.c
index 7b0e668..19391d0 100644
--- a/src/backend/parser/parse_agg.c
+++ b/src/backend/parser/parse_agg.c
@@ -42,7 +42,9 @@ typedef struct
 {
 	ParseState *pstate;
 	Query	   *qry;
+	PlannerInfo *root;
 	List	   *groupClauses;
+	List	   *groupClauseCommonVars;
 	bool		have_non_var_grouping;
 	List	  **func_grouped_rels;
 	int			sublevels_up;
@@ -56,11 +58,18 @@ static int check_agg_arguments(ParseState *pstate,
 static bool check_agg_arguments_walker(Node *node,
 						   check_agg_arguments_context *context);
 static void check_ungrouped_columns(Node *node, ParseState *pstate, Query *qry,
-						List *groupClauses, bool have_non_var_grouping,
+						List *groupClauses, List *groupClauseVars,
+						bool have_non_var_grouping,
 						List **func_grouped_rels);
 static bool check_ungrouped_columns_walker(Node *node,
 							   check_ungrouped_columns_context *context);
-
+static void finalize_grouping_exprs(Node *node, ParseState *pstate, Query *qry,
+									List *groupClauses, PlannerInfo *root,
+									bool have_non_var_grouping);
+static bool finalize_grouping_exprs_walker(Node *node,
+							   check_ungrouped_columns_context *context);
+static void check_agglevels_and_constraints(ParseState *pstate, Node *expr);
+static List *expand_groupingset_node(GroupingSet *gs);
 
 /*
  * transformAggregateCall -
@@ -96,10 +105,7 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 	List	   *tdistinct = NIL;
 	AttrNumber	attno = 1;
 	int			save_next_resno;
-	int			min_varlevel;
 	ListCell   *lc;
-	const char *err;
-	bool		errkind;
 
 	if (AGGKIND_IS_ORDERED_SET(agg->aggkind))
 	{
@@ -214,15 +220,96 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 	agg->aggorder = torder;
 	agg->aggdistinct = tdistinct;
 
+	check_agglevels_and_constraints(pstate, (Node *) agg);
+}
+
+/* transformGroupingFunc -
+ *		Transform a GROUPING expression
+ *
+ * GROUPING() behaves very much like an aggregate.  Processing of levels and nesting
+ * is done as for aggregates.  We set p_hasAggs for these expressions too.
+ */
+Node *
+transformGroupingFunc(ParseState *pstate, GroupingFunc *p)
+{
+	ListCell   *lc;
+	List	   *args = p->args;
+	List	   *result_list = NIL;
+	GroupingFunc *result = makeNode(GroupingFunc);
+
+	if (list_length(args) > 31)
+		ereport(ERROR,
+				(errcode(ERRCODE_TOO_MANY_ARGUMENTS),
+				 errmsg("GROUPING must have fewer than 32 arguments"),
+				 parser_errposition(pstate, p->location)));
+
+	foreach(lc, args)
+	{
+		Node *current_result;
+
+		current_result = transformExpr(pstate, (Node *) lfirst(lc), pstate->p_expr_kind);
+
+		/* acceptability of expressions is checked later */
+
+		result_list = lappend(result_list, current_result);
+	}
+
+	result->args = result_list;
+	result->location = p->location;
+
+	check_agglevels_and_constraints(pstate, (Node *) result);
+
+	return (Node *) result;
+}
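(Aside, not part of the patch: the spec-defined result of GROUPING() that the above transform prepares for can be sketched as follows. This is a minimal illustration with hypothetical helper names, not the executor implementation; each argument contributes one bit, leftmost argument most significant, set when the argument is NOT grouped in the current grouping set. The 31-argument cap above keeps this bitmask within a 32-bit signed integer.)

```python
def grouping_value(args, grouped_cols):
    """Sketch of SQL-spec GROUPING() semantics (hypothetical helper).

    args: the GROUPING() argument expressions, in order.
    grouped_cols: the expressions grouped in the current grouping set.
    """
    result = 0
    for arg in args:
        # shift in a 1-bit when the argument is not part of the grouping set
        result = (result << 1) | (0 if arg in grouped_cols else 1)
    return result

# For GROUP BY GROUPING SETS ((a), (b)):
assert grouping_value(["a", "b"], {"a"}) == 0b01  # a grouped, b not
assert grouping_value(["a", "b"], {"b"}) == 0b10  # b grouped, a not
```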
+
+/*
+ * Aggregate functions and grouping operations (which are combined in the spec
+ * as <set function specification>) are very similar with regard to level and
+ * nesting restrictions (though we allow a lot more things than the spec does).
+ * Centralise those restrictions here.
+ */
+static void
+check_agglevels_and_constraints(ParseState *pstate, Node *expr)
+{
+	List	   *directargs = NIL;
+	List	   *args = NIL;
+	Expr	   *filter = NULL;
+	int			min_varlevel;
+	int			location = -1;
+	Index	   *p_levelsup;
+	const char *err;
+	bool		errkind;
+	bool		isAgg = IsA(expr, Aggref);
+
+	if (isAgg)
+	{
+		Aggref *agg = (Aggref *) expr;
+
+		directargs = agg->aggdirectargs;
+		args = agg->args;
+		filter = agg->aggfilter;
+		location = agg->location;
+		p_levelsup = &agg->agglevelsup;
+	}
+	else
+	{
+		GroupingFunc *grp = (GroupingFunc *) expr;
+
+		args = grp->args;
+		location = grp->location;
+		p_levelsup = &grp->agglevelsup;
+	}
+
 	/*
 	 * Check the arguments to compute the aggregate's level and detect
 	 * improper nesting.
 	 */
 	min_varlevel = check_agg_arguments(pstate,
-									   agg->aggdirectargs,
-									   agg->args,
-									   agg->aggfilter);
-	agg->agglevelsup = min_varlevel;
+									   directargs,
+									   args,
+									   filter);
+
+	*p_levelsup = min_varlevel;
 
 	/* Mark the correct pstate level as having aggregates */
 	while (min_varlevel-- > 0)
@@ -247,20 +334,32 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			Assert(false);		/* can't happen */
 			break;
 		case EXPR_KIND_OTHER:
-			/* Accept aggregate here; caller must throw error if wanted */
+			/* Accept aggregate/grouping here; caller must throw error if wanted */
 			break;
 		case EXPR_KIND_JOIN_ON:
 		case EXPR_KIND_JOIN_USING:
-			err = _("aggregate functions are not allowed in JOIN conditions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in JOIN conditions");
+			else
+				err = _("grouping operations are not allowed in JOIN conditions");
+
 			break;
 		case EXPR_KIND_FROM_SUBSELECT:
 			/* Should only be possible in a LATERAL subquery */
 			Assert(pstate->p_lateral_active);
-			/* Aggregate scope rules make it worth being explicit here */
-			err = _("aggregate functions are not allowed in FROM clause of their own query level");
+			/* Aggregate/grouping scope rules make it worth being explicit here */
+			if (isAgg)
+				err = _("aggregate functions are not allowed in FROM clause of their own query level");
+			else
+				err = _("grouping operations are not allowed in FROM clause of their own query level");
+
 			break;
 		case EXPR_KIND_FROM_FUNCTION:
-			err = _("aggregate functions are not allowed in functions in FROM");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in functions in FROM");
+			else
+				err = _("grouping operations are not allowed in functions in FROM");
+
 			break;
 		case EXPR_KIND_WHERE:
 			errkind = true;
@@ -278,10 +377,18 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			/* okay */
 			break;
 		case EXPR_KIND_WINDOW_FRAME_RANGE:
-			err = _("aggregate functions are not allowed in window RANGE");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in window RANGE");
+			else
+				err = _("grouping operations are not allowed in window RANGE");
+
 			break;
 		case EXPR_KIND_WINDOW_FRAME_ROWS:
-			err = _("aggregate functions are not allowed in window ROWS");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in window ROWS");
+			else
+				err = _("grouping operations are not allowed in window ROWS");
+
 			break;
 		case EXPR_KIND_SELECT_TARGET:
 			/* okay */
@@ -312,26 +419,55 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			break;
 		case EXPR_KIND_CHECK_CONSTRAINT:
 		case EXPR_KIND_DOMAIN_CHECK:
-			err = _("aggregate functions are not allowed in check constraints");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in check constraints");
+			else
+				err = _("grouping operations are not allowed in check constraints");
+
 			break;
 		case EXPR_KIND_COLUMN_DEFAULT:
 		case EXPR_KIND_FUNCTION_DEFAULT:
-			err = _("aggregate functions are not allowed in DEFAULT expressions");
+
+			if (isAgg)
+				err = _("aggregate functions are not allowed in DEFAULT expressions");
+			else
+				err = _("grouping operations are not allowed in DEFAULT expressions");
+
 			break;
 		case EXPR_KIND_INDEX_EXPRESSION:
-			err = _("aggregate functions are not allowed in index expressions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in index expressions");
+			else
+				err = _("grouping operations are not allowed in index expressions");
+
 			break;
 		case EXPR_KIND_INDEX_PREDICATE:
-			err = _("aggregate functions are not allowed in index predicates");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in index predicates");
+			else
+				err = _("grouping operations are not allowed in index predicates");
+
 			break;
 		case EXPR_KIND_ALTER_COL_TRANSFORM:
-			err = _("aggregate functions are not allowed in transform expressions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in transform expressions");
+			else
+				err = _("grouping operations are not allowed in transform expressions");
+
 			break;
 		case EXPR_KIND_EXECUTE_PARAMETER:
-			err = _("aggregate functions are not allowed in EXECUTE parameters");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in EXECUTE parameters");
+			else
+				err = _("grouping operations are not allowed in EXECUTE parameters");
+
 			break;
 		case EXPR_KIND_TRIGGER_WHEN:
-			err = _("aggregate functions are not allowed in trigger WHEN conditions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in trigger WHEN conditions");
+			else
+				err = _("grouping operations are not allowed in trigger WHEN conditions");
+
 			break;
 
 			/*
@@ -342,18 +478,22 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			 * which is sane anyway.
 			 */
 	}
+
 	if (err)
 		ereport(ERROR,
 				(errcode(ERRCODE_GROUPING_ERROR),
 				 errmsg_internal("%s", err),
-				 parser_errposition(pstate, agg->location)));
+				 parser_errposition(pstate, location)));
+
 	if (errkind)
 		ereport(ERROR,
 				(errcode(ERRCODE_GROUPING_ERROR),
-		/* translator: %s is name of a SQL construct, eg GROUP BY */
-				 errmsg("aggregate functions are not allowed in %s",
+				 /* translator: %s is name of a SQL construct, eg GROUP BY */
+				 errmsg(isAgg
+						? "aggregate functions are not allowed in %s"
+						: "grouping operations are not allowed in %s",
 						ParseExprKindName(pstate->p_expr_kind)),
-				 parser_errposition(pstate, agg->location)));
+				 parser_errposition(pstate, location)));
 }
 
 /*
@@ -507,6 +647,21 @@ check_agg_arguments_walker(Node *node,
 		/* no need to examine args of the inner aggregate */
 		return false;
 	}
+	if (IsA(node, GroupingFunc))
+	{
+		int			agglevelsup = ((GroupingFunc *) node)->agglevelsup;
+
+		/* convert levelsup to frame of reference of original query */
+		agglevelsup -= context->sublevels_up;
+		/* ignore local aggs of subqueries */
+		if (agglevelsup >= 0)
+		{
+			if (context->min_agglevel < 0 ||
+				context->min_agglevel > agglevelsup)
+				context->min_agglevel = agglevelsup;
+		}
+		/* Continue and descend into subtree */
+	}
 	/* We can throw error on sight for a window function */
 	if (IsA(node, WindowFunc))
 		ereport(ERROR,
@@ -527,6 +682,7 @@ check_agg_arguments_walker(Node *node,
 		context->sublevels_up--;
 		return result;
 	}
+
 	return expression_tree_walker(node,
 								  check_agg_arguments_walker,
 								  (void *) context);
@@ -770,17 +926,67 @@ transformWindowFuncCall(ParseState *pstate, WindowFunc *wfunc,
 void
 parseCheckAggregates(ParseState *pstate, Query *qry)
 {
+	List       *gset_common = NIL;
 	List	   *groupClauses = NIL;
+	List	   *groupClauseCommonVars = NIL;
 	bool		have_non_var_grouping;
 	List	   *func_grouped_rels = NIL;
 	ListCell   *l;
 	bool		hasJoinRTEs;
 	bool		hasSelfRefRTEs;
-	PlannerInfo *root;
+	PlannerInfo *root = NULL;
 	Node	   *clause;
 
 	/* This should only be called if we found aggregates or grouping */
-	Assert(pstate->p_hasAggs || qry->groupClause || qry->havingQual);
+	Assert(pstate->p_hasAggs || qry->groupClause || qry->havingQual || qry->groupingSets);
+
+	/*
+	 * If we have grouping sets, expand them and find the intersection of all
+	 * sets.
+	 */
+	if (qry->groupingSets)
+	{
+		/*
+		 * The limit of 4096 is arbitrary and exists simply to avoid resource
+		 * issues from pathological constructs.
+		 */
+		List *gsets = expand_grouping_sets(qry->groupingSets, 4096);
+
+		if (!gsets)
+			ereport(ERROR,
+					(errcode(ERRCODE_STATEMENT_TOO_COMPLEX),
+					 errmsg("too many grouping sets present (maximum 4096)"),
+					 parser_errposition(pstate,
+										qry->groupClause
+										? exprLocation((Node *) qry->groupClause)
+										: exprLocation((Node *) qry->groupingSets))));
+
+		/*
+		 * The intersection will often be empty, so help things along by
+		 * seeding the intersect with the smallest set.
+		 */
+		gset_common = linitial(gsets);
+
+		if (gset_common)
+		{
+			for_each_cell(l, lnext(list_head(gsets)))
+			{
+				gset_common = list_intersection_int(gset_common, lfirst(l));
+				if (!gset_common)
+					break;
+			}
+		}
+
+		/*
+		 * If there was only one grouping set in the expansion, AND if the
+		 * groupClause is non-empty (meaning that the grouping set is not empty
+		 * either), then we can ditch the grouping set and pretend we just had
+		 * a normal GROUP BY.
+		 */
+
+		if (list_length(gsets) == 1 && qry->groupClause)
+			qry->groupingSets = NIL;
+	}
 
 	/*
 	 * Scan the range table to see if there are JOIN or self-reference CTE
@@ -800,15 +1006,19 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 	/*
 	 * Build a list of the acceptable GROUP BY expressions for use by
 	 * check_ungrouped_columns().
+	 *
+	 * We get the TLE, not just the expr, because GROUPING wants to know
+	 * the sortgroupref.
 	 */
 	foreach(l, qry->groupClause)
 	{
 		SortGroupClause *grpcl = (SortGroupClause *) lfirst(l);
-		Node	   *expr;
+		TargetEntry	   *expr;
 
-		expr = get_sortgroupclause_expr(grpcl, qry->targetList);
+		expr = get_sortgroupclause_tle(grpcl, qry->targetList);
 		if (expr == NULL)
 			continue;			/* probably cannot happen */
+
 		groupClauses = lcons(expr, groupClauses);
 	}
 
@@ -830,21 +1040,28 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 		groupClauses = (List *) flatten_join_alias_vars(root,
 													  (Node *) groupClauses);
 	}
-	else
-		root = NULL;			/* keep compiler quiet */
 
 	/*
 	 * Detect whether any of the grouping expressions aren't simple Vars; if
 	 * they're all Vars then we don't have to work so hard in the recursive
 	 * scans.  (Note we have to flatten aliases before this.)
+	 *
+	 * Track Vars that are included in all grouping sets separately in
+	 * groupClauseCommonVars, since these are the only ones we can use to check
+	 * for functional dependencies.
 	 */
 	have_non_var_grouping = false;
 	foreach(l, groupClauses)
 	{
-		if (!IsA((Node *) lfirst(l), Var))
+		TargetEntry *tle = lfirst(l);
+		if (!IsA(tle->expr, Var))
 		{
 			have_non_var_grouping = true;
-			break;
+		}
+		else if (!qry->groupingSets
+				 || list_member_int(gset_common, tle->ressortgroupref))
+		{
+			groupClauseCommonVars = lappend(groupClauseCommonVars, tle->expr);
 		}
 	}
 
@@ -855,19 +1072,30 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 	 * this will also find ungrouped variables that came from ORDER BY and
 	 * WINDOW clauses.  For that matter, it's also going to examine the
 	 * grouping expressions themselves --- but they'll all pass the test ...
+	 *
+	 * We also finalize GROUPING expressions, but for that we need to traverse
+	 * the original (unflattened) clause in order to modify nodes.
 	 */
 	clause = (Node *) qry->targetList;
+	finalize_grouping_exprs(clause, pstate, qry,
+							groupClauses, root,
+							have_non_var_grouping);
 	if (hasJoinRTEs)
 		clause = flatten_join_alias_vars(root, clause);
 	check_ungrouped_columns(clause, pstate, qry,
-							groupClauses, have_non_var_grouping,
+							groupClauses, groupClauseCommonVars,
+							have_non_var_grouping,
 							&func_grouped_rels);
 
 	clause = (Node *) qry->havingQual;
+	finalize_grouping_exprs(clause, pstate, qry,
+							groupClauses, root,
+							have_non_var_grouping);
 	if (hasJoinRTEs)
 		clause = flatten_join_alias_vars(root, clause);
 	check_ungrouped_columns(clause, pstate, qry,
-							groupClauses, have_non_var_grouping,
+							groupClauses, groupClauseCommonVars,
+							have_non_var_grouping,
 							&func_grouped_rels);
 
 	/*
@@ -904,14 +1132,17 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
  */
 static void
 check_ungrouped_columns(Node *node, ParseState *pstate, Query *qry,
-						List *groupClauses, bool have_non_var_grouping,
+						List *groupClauses, List *groupClauseCommonVars,
+						bool have_non_var_grouping,
 						List **func_grouped_rels)
 {
 	check_ungrouped_columns_context context;
 
 	context.pstate = pstate;
 	context.qry = qry;
+	context.root = NULL;
 	context.groupClauses = groupClauses;
+	context.groupClauseCommonVars = groupClauseCommonVars;
 	context.have_non_var_grouping = have_non_var_grouping;
 	context.func_grouped_rels = func_grouped_rels;
 	context.sublevels_up = 0;
@@ -965,6 +1196,16 @@ check_ungrouped_columns_walker(Node *node,
 			return false;
 	}
 
+	if (IsA(node, GroupingFunc))
+	{
+		GroupingFunc *grp = (GroupingFunc *) node;
+
+		/* We handle GroupingFunc separately; no need to recheck at this level */
+
+		if ((int) grp->agglevelsup >= context->sublevels_up)
+			return false;
+	}
+
 	/*
 	 * If we have any GROUP BY items that are not simple Vars, check to see if
 	 * subexpression as a whole matches any GROUP BY item. We need to do this
@@ -976,7 +1217,9 @@ check_ungrouped_columns_walker(Node *node,
 	{
 		foreach(gl, context->groupClauses)
 		{
-			if (equal(node, lfirst(gl)))
+			TargetEntry *tle = lfirst(gl);
+
+			if (equal(node, tle->expr))
 				return false;	/* acceptable, do not descend more */
 		}
 	}
@@ -1003,13 +1246,15 @@ check_ungrouped_columns_walker(Node *node,
 		{
 			foreach(gl, context->groupClauses)
 			{
-				Var		   *gvar = (Var *) lfirst(gl);
+				Var		   *gvar = (Var *) ((TargetEntry *) lfirst(gl))->expr;
 
 				if (IsA(gvar, Var) &&
 					gvar->varno == var->varno &&
 					gvar->varattno == var->varattno &&
 					gvar->varlevelsup == 0)
+				{
 					return false;		/* acceptable, we're okay */
+				}
 			}
 		}
 
@@ -1040,7 +1285,7 @@ check_ungrouped_columns_walker(Node *node,
 			if (check_functional_grouping(rte->relid,
 										  var->varno,
 										  0,
-										  context->groupClauses,
+										  context->groupClauseCommonVars,
 										  &context->qry->constraintDeps))
 			{
 				*context->func_grouped_rels =
@@ -1085,6 +1330,396 @@ check_ungrouped_columns_walker(Node *node,
 }
 
 /*
+ * finalize_grouping_exprs -
+ *	  Scan the given expression tree for GROUPING() and related calls,
+ *	  and validate and process their arguments.
+ *
+ * This is split out from check_ungrouped_columns above because it needs
+ * to modify the nodes (which it does in-place, not via a mutator) while
+ * check_ungrouped_columns may see only a copy of the original thanks to
+ * flattening of join alias vars. So here, we flatten each individual
+ * GROUPING argument as we see it before comparing it.
+ */
+static void
+finalize_grouping_exprs(Node *node, ParseState *pstate, Query *qry,
+						List *groupClauses, PlannerInfo *root,
+						bool have_non_var_grouping)
+{
+	check_ungrouped_columns_context context;
+
+	context.pstate = pstate;
+	context.qry = qry;
+	context.root = root;
+	context.groupClauses = groupClauses;
+	context.groupClauseCommonVars = NIL;
+	context.have_non_var_grouping = have_non_var_grouping;
+	context.func_grouped_rels = NULL;
+	context.sublevels_up = 0;
+	context.in_agg_direct_args = false;
+	finalize_grouping_exprs_walker(node, &context);
+}
+
+static bool
+finalize_grouping_exprs_walker(Node *node,
+							   check_ungrouped_columns_context *context)
+{
+	ListCell   *gl;
+
+	if (node == NULL)
+		return false;
+	if (IsA(node, Const) ||
+		IsA(node, Param))
+		return false;			/* constants are always acceptable */
+
+	if (IsA(node, Aggref))
+	{
+		Aggref	   *agg = (Aggref *) node;
+
+		if ((int) agg->agglevelsup == context->sublevels_up)
+		{
+			/*
+			 * If we find an aggregate call of the original level, do not
+			 * recurse into its normal arguments, ORDER BY arguments, or
+			 * filter; GROUPING exprs of this level are not allowed there. But
+			 * check direct arguments as though they weren't in an aggregate.
+			 */
+			bool		result;
+
+			Assert(!context->in_agg_direct_args);
+			context->in_agg_direct_args = true;
+			result = finalize_grouping_exprs_walker((Node *) agg->aggdirectargs,
+													context);
+			context->in_agg_direct_args = false;
+			return result;
+		}
+
+		/*
+		 * We can skip recursing into aggregates of higher levels altogether,
+		 * since they could not possibly contain exprs of concern to us (see
+		 * transformAggregateCall).  We do need to look at aggregates of lower
+		 * levels, however.
+		 */
+		if ((int) agg->agglevelsup > context->sublevels_up)
+			return false;
+	}
+
+	if (IsA(node, GroupingFunc))
+	{
+		GroupingFunc *grp = (GroupingFunc *) node;
+
+		/*
+		 * We only need to check GroupingFunc nodes at the exact level to which
+		 * they belong, since they cannot mix levels in arguments.
+		 */
+
+		if ((int) grp->agglevelsup == context->sublevels_up)
+		{
+			ListCell  *lc;
+			List	   *ref_list = NIL;
+
+			foreach(lc, grp->args)
+			{
+				Node   *expr = lfirst(lc);
+				Index	ref = 0;
+
+				if (context->root)
+					expr = flatten_join_alias_vars(context->root, expr);
+
+				/*
+				 * Each expression must match a grouping entry at the current
+				 * query level. Unlike the general expression case, we don't
+				 * allow functional dependencies or outer references.
+				 */
+
+				if (IsA(expr, Var))
+				{
+					Var *var = (Var *) expr;
+
+					if (var->varlevelsup == context->sublevels_up)
+					{
+						foreach(gl, context->groupClauses)
+						{
+							TargetEntry *tle = lfirst(gl);
+							Var		   *gvar = (Var *) tle->expr;
+
+							if (IsA(gvar, Var) &&
+								gvar->varno == var->varno &&
+								gvar->varattno == var->varattno &&
+								gvar->varlevelsup == 0)
+							{
+								ref = tle->ressortgroupref;
+								break;
+							}
+						}
+					}
+				}
+				else if (context->have_non_var_grouping
+						 && context->sublevels_up == 0)
+				{
+					foreach(gl, context->groupClauses)
+					{
+						TargetEntry *tle = lfirst(gl);
+
+						if (equal(expr, tle->expr))
+						{
+							ref = tle->ressortgroupref;
+							break;
+						}
+					}
+				}
+
+				if (ref == 0)
+					ereport(ERROR,
+							(errcode(ERRCODE_GROUPING_ERROR),
+							 errmsg("arguments to GROUPING must be grouping expressions of the associated query level"),
+							 parser_errposition(context->pstate,
+												exprLocation(expr))));
+
+				ref_list = lappend_int(ref_list, ref);
+			}
+
+			grp->refs = ref_list;
+		}
+
+		if ((int) grp->agglevelsup > context->sublevels_up)
+			return false;
+	}
+
+	if (IsA(node, Query))
+	{
+		/* Recurse into subselects */
+		bool		result;
+
+		context->sublevels_up++;
+		result = query_tree_walker((Query *) node,
+								   finalize_grouping_exprs_walker,
+								   (void *) context,
+								   0);
+		context->sublevels_up--;
+		return result;
+	}
+	return expression_tree_walker(node, finalize_grouping_exprs_walker,
+								  (void *) context);
+}
+
+
+/*
+ * Given a GroupingSet node, expand it and return a list of lists.
+ *
+ * For EMPTY nodes, return a list of one empty list.
+ *
+ * For SIMPLE nodes, return a list of one list, which is the node content.
+ *
+ * For CUBE and ROLLUP nodes, return a list of the expansions.
+ *
+ * For SET nodes, recursively expand contained CUBE and ROLLUP.
+ */
+static List *
+expand_groupingset_node(GroupingSet *gs)
+{
+	List	   *result = NIL;
+
+	switch (gs->kind)
+	{
+		case GROUPING_SET_EMPTY:
+			result = list_make1(NIL);
+			break;
+
+		case GROUPING_SET_SIMPLE:
+			result = list_make1(gs->content);
+			break;
+
+		case GROUPING_SET_ROLLUP:
+			{
+				List	   *rollup_val = gs->content;
+				ListCell   *lc;
+				int			curgroup_size = list_length(gs->content);
+
+				while (curgroup_size > 0)
+				{
+					List   *current_result = NIL;
+					int		i = curgroup_size;
+
+					foreach(lc, rollup_val)
+					{
+						GroupingSet *gs_current = (GroupingSet *) lfirst(lc);
+
+						Assert(gs_current->kind == GROUPING_SET_SIMPLE);
+
+						current_result
+							= list_concat(current_result,
+										  list_copy(gs_current->content));
+
+						/* If we are done with making the current group, break */
+						if (--i == 0)
+							break;
+					}
+
+					result = lappend(result, current_result);
+					--curgroup_size;
+				}
+
+				result = lappend(result, NIL);
+			}
+			break;
+
+		case GROUPING_SET_CUBE:
+			{
+				List   *cube_list = gs->content;
+				int		number_bits = list_length(cube_list);
+				uint32	num_sets;
+				uint32	i;
+
+				/* parser should cap this much lower */
+				Assert(number_bits < 31);
+
+				num_sets = (1U << number_bits);
+
+				for (i = 0; i < num_sets; i++)
+				{
+					List *current_result = NIL;
+					ListCell *lc;
+					uint32 mask = 1U;
+
+					foreach(lc, cube_list)
+					{
+						GroupingSet *gs_current = (GroupingSet *) lfirst(lc);
+
+						Assert(gs_current->kind == GROUPING_SET_SIMPLE);
+
+						if (mask & i)
+						{
+							current_result
+								= list_concat(current_result,
+											  list_copy(gs_current->content));
+						}
+
+						mask <<= 1;
+					}
+
+					result = lappend(result, current_result);
+				}
+			}
+			break;
+
+		case GROUPING_SET_SETS:
+			{
+				ListCell   *lc;
+
+				foreach(lc, gs->content)
+				{
+					List *current_result = expand_groupingset_node(lfirst(lc));
+
+					result = list_concat(result, current_result);
+				}
+			}
+			break;
+	}
+
+	return result;
+}
+
+static int
+cmp_list_len_asc(const void *a, const void *b)
+{
+	int			la = list_length(*(List *const *) a);
+	int			lb = list_length(*(List *const *) b);
+	return (la > lb) ? 1 : (la == lb) ? 0 : -1;
+}
+
+/*
+ * Expand a groupingSets clause to a flat list of grouping sets.
+ * The returned list is sorted by length, shortest sets first.
+ *
+ * This is mainly for the planner, but we use it here too to do
+ * some consistency checks.
+ */
+
+List *
+expand_grouping_sets(List *groupingSets, int limit)
+{
+	List	   *expanded_groups = NIL;
+	List       *result = NIL;
+	double		numsets = 1;
+	ListCell   *lc;
+
+	if (groupingSets == NIL)
+		return NIL;
+
+	foreach(lc, groupingSets)
+	{
+		List *current_result = NIL;
+		GroupingSet *gs = lfirst(lc);
+
+		current_result = expand_groupingset_node(gs);
+
+		Assert(current_result != NIL);
+
+		numsets *= list_length(current_result);
+
+		if (limit >= 0 && numsets > limit)
+			return NIL;
+
+		expanded_groups = lappend(expanded_groups, current_result);
+	}
+
+	/*
+	 * Do cartesian product between sublists of expanded_groups.
+	 * While at it, remove any duplicate elements from individual
+	 * grouping sets (we must NOT change the number of sets though)
+	 */
+
+	foreach(lc, (List *) linitial(expanded_groups))
+	{
+		result = lappend(result, list_union_int(NIL, (List *) lfirst(lc)));
+	}
+
+	for_each_cell(lc, lnext(list_head(expanded_groups)))
+	{
+		List	   *p = lfirst(lc);
+		List	   *new_result = NIL;
+		ListCell   *lc2;
+
+		foreach(lc2, result)
+		{
+			List	   *q = lfirst(lc2);
+			ListCell   *lc3;
+
+			foreach(lc3, p)
+			{
+				new_result = lappend(new_result,
+									 list_union_int(q, (List *) lfirst(lc3)));
+			}
+		}
+		result = new_result;
+	}
+
+	if (list_length(result) > 1)
+	{
+		int		result_len = list_length(result);
+		List  **buf = palloc(sizeof(List *) * result_len);
+		List  **ptr = buf;
+
+		foreach(lc, result)
+		{
+			*ptr++ = lfirst(lc);
+		}
+
+		qsort(buf, result_len, sizeof(List *), cmp_list_len_asc);
+
+		result = NIL;
+		ptr = buf;
+
+		while (result_len-- > 0)
+			result = lappend(result, *ptr++);
+
+		pfree(buf);
+	}
+
+	return result;
+}
+
+/*
  * get_aggregate_argtypes
  *	Identify the specific datatypes passed to an aggregate call.
  *
diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c
index 654dce6..126699a 100644
--- a/src/backend/parser/parse_clause.c
+++ b/src/backend/parser/parse_clause.c
@@ -36,6 +36,7 @@
 #include "utils/guc.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
+#include "miscadmin.h"
 
 
 /* Convenience macro for the most common makeNamespaceItem() case */
@@ -1663,40 +1664,182 @@ findTargetlistEntrySQL99(ParseState *pstate, Node *node, List **tlist,
 	return target_result;
 }
 
+
 /*
- * transformGroupClause -
- *	  transform a GROUP BY clause
+ * Flatten out parenthesized sublists in grouping lists, and some cases
+ * of nested grouping sets.
  *
- * GROUP BY items will be added to the targetlist (as resjunk columns)
- * if not already present, so the targetlist must be passed by reference.
+ * Inside a grouping set (ROLLUP, CUBE, or GROUPING SETS), we expect the
+ * content to be nested no more than 2 deep: i.e. ROLLUP((a,b),(c,d)) is
+ * ok, but ROLLUP((a,(b,c)),d) is flattened to ((a,b,c),d), which we then
+ * normalize to ((a,b,c),(d)).
  *
- * This is also used for window PARTITION BY clauses (which act almost the
- * same, but are always interpreted per SQL99 rules).
+ * CUBE or ROLLUP can be nested inside GROUPING SETS (but not the reverse),
+ * and we leave that alone if we find it. But if we see GROUPING SETS inside
+ * GROUPING SETS, we can flatten and normalize as follows:
+ *   GROUPING SETS (a, (b,c), GROUPING SETS ((c,d),(e)), (f,g))
+ * becomes
+ *   GROUPING SETS ((a), (b,c), (c,d), (e), (f,g))
+ *
+ * This is per the spec's syntax transformations, but these are the only such
+ * transformations we do in parse analysis, so that queries retain the
+ * originally specified grouping set syntax for CUBE and ROLLUP as much as
+ * possible when deparsed. (Full expansion of the result into a list of
+ * grouping sets is left to the planner.)
+ *
+ * When we're done, the resulting list should contain only these possible
+ * elements:
+ *   - an expression
+ *   - a CUBE or ROLLUP with a list of expressions nested 2 deep
+ *   - a GROUPING SET containing any of:
+ *      - expression lists
+ *      - empty grouping sets
+ *      - CUBE or ROLLUP nodes with lists nested 2 deep
+ * The returned list is newly built, but it references the old nodes rather
+ * than deep-copying them, except that GroupingSet nodes are rebuilt.
+ *
+ * As a side effect, flag whether the list has any GroupingSet nodes.
  */
-List *
-transformGroupClause(ParseState *pstate, List *grouplist,
-					 List **targetlist, List *sortClause,
-					 ParseExprKind exprKind, bool useSQL99)
+
+static Node *
+flatten_grouping_sets(Node *expr, bool toplevel, bool *hasGroupingSets)
 {
-	List	   *result = NIL;
-	ListCell   *gl;
+	/* just in case of pathological input */
+	check_stack_depth();
+
+	if (expr == (Node *) NIL)
+		return (Node *) NIL;
 
-	foreach(gl, grouplist)
+	switch (expr->type)
 	{
-		Node	   *gexpr = (Node *) lfirst(gl);
-		TargetEntry *tle;
-		bool		found = false;
+		case T_RowExpr:
+			{
+				RowExpr *r = (RowExpr *) expr;
+				if (r->row_format == COERCE_IMPLICIT_CAST)
+					return flatten_grouping_sets((Node *) r->args,
+												 false, NULL);
+			}
+			break;
+		case T_GroupingSet:
+			{
+				GroupingSet *gset = (GroupingSet *) expr;
+				ListCell   *l2;
+				List	   *result_set = NIL;
 
-		if (useSQL99)
-			tle = findTargetlistEntrySQL99(pstate, gexpr,
-										   targetlist, exprKind);
-		else
-			tle = findTargetlistEntrySQL92(pstate, gexpr,
-										   targetlist, exprKind);
+				if (hasGroupingSets)
+					*hasGroupingSets = true;
 
-		/* Eliminate duplicates (GROUP BY x, x) */
-		if (targetIsInSortList(tle, InvalidOid, result))
-			continue;
+				/*
+				 * At the top level, we skip over all empty grouping sets; the
+				 * caller can supply the canonical GROUP BY () if nothing is left.
+				 */
+
+				if (toplevel && gset->kind == GROUPING_SET_EMPTY)
+					return (Node *) NIL;
+
+				foreach(l2, gset->content)
+				{
+					Node   *n2 = flatten_grouping_sets(lfirst(l2), false, NULL);
+
+					result_set = lappend(result_set, n2);
+				}
+
+				/*
+				 * At the top level, keep the GroupingSet node; but if we're inside
+				 * a nested GROUPING SETS, return the bare content list so that the
+				 * caller can concat it into the outer list.
+				 */
+
+				if (toplevel || (gset->kind != GROUPING_SET_SETS))
+				{
+					return (Node *) makeGroupingSet(gset->kind, result_set, gset->location);
+				}
+				else
+					return (Node *) result_set;
+			}
+		case T_List:
+			{
+				List	   *result = NIL;
+				ListCell   *l;
+
+				foreach(l, (List *) expr)
+				{
+					Node   *n = flatten_grouping_sets(lfirst(l), toplevel, hasGroupingSets);
+					if (n != (Node *) NIL)
+					{
+						if (IsA(n, List))
+							result = list_concat(result, (List *) n);
+						else
+							result = lappend(result, n);
+					}
+				}
+
+				return (Node *) result;
+			}
+		default:
+			break;
+	}
+
+	return expr;
+}
+
+/*
+ * Transform a single expression within a GROUP BY clause or grouping set.
+ *
+ * The expression is added to the targetlist if not already present, and to the
+ * flatresult list (which will become the groupClause) if not already present
+ * there.  The sortClause is consulted for operator and sort order hints.
+ *
+ * Returns the ressortgroupref of the expression.
+ *
+ * flatresult	reference to flat list of SortGroupClause nodes
+ * seen_local	bitmapset of sortgrouprefs already seen at the local level
+ * pstate		ParseState
+ * gexpr		node to transform
+ * targetlist	reference to TargetEntry list
+ * sortClause	ORDER BY clause (SortGroupClause nodes)
+ * exprKind		expression kind
+ * useSQL99		SQL99 rather than SQL92 syntax
+ * toplevel		false if within any grouping set
+ */
+static Index
+transformGroupClauseExpr(List **flatresult, Bitmapset *seen_local,
+						 ParseState *pstate, Node *gexpr,
+						 List **targetlist, List *sortClause,
+						 ParseExprKind exprKind, bool useSQL99, bool toplevel)
+{
+	TargetEntry *tle;
+	bool		found = false;
+
+	if (useSQL99)
+		tle = findTargetlistEntrySQL99(pstate, gexpr,
+									   targetlist, exprKind);
+	else
+		tle = findTargetlistEntrySQL92(pstate, gexpr,
+									   targetlist, exprKind);
+
+	if (tle->ressortgroupref > 0)
+	{
+		ListCell   *sl;
+
+		/*
+		 * Eliminate duplicates (GROUP BY x, x) but only at local level.
+		 * (Duplicates in grouping sets can affect the number of returned
+		 * rows, so can't be dropped indiscriminately.)
+		 *
+		 * Since we don't care about anything except the sortgroupref,
+		 * we can use a bitmapset rather than scanning lists.
+		 */
+		if (bms_is_member(tle->ressortgroupref, seen_local))
+			return 0;
+
+		/*
+		 * If we're already in the flat clause list, we don't need
+		 * to consider adding ourselves again.
+		 */
+		found = targetIsInSortList(tle, InvalidOid, *flatresult);
+		if (found)
+			return tle->ressortgroupref;
 
 		/*
 		 * If the GROUP BY tlist entry also appears in ORDER BY, copy operator
@@ -1708,35 +1851,308 @@ transformGroupClause(ParseState *pstate, List *grouplist,
 		 * sort step, and it allows the user to choose the equality semantics
 		 * used by GROUP BY, should she be working with a datatype that has
 		 * more than one equality operator.
+		 *
+		 * If we're in a grouping set, though, we force our requested ordering
+		 * to be NULLS LAST, because if we have any hope of using a sorted agg
+		 * for the job, we're going to be tacking on generated NULL values
+		 * after the corresponding groups. If the user demands nulls first,
+		 * another sort step is going to be inevitable, but that's the
+		 * planner's problem.
 		 */
-		if (tle->ressortgroupref > 0)
+
+		foreach(sl, sortClause)
 		{
-			ListCell   *sl;
+			SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
 
-			foreach(sl, sortClause)
+			if (sc->tleSortGroupRef == tle->ressortgroupref)
 			{
-				SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
+				SortGroupClause *grpc = copyObject(sc);
+				if (!toplevel)
+					grpc->nulls_first = false;
+				*flatresult = lappend(*flatresult, grpc);
+				found = true;
+				break;
+			}
+		}
+	}
 
-				if (sc->tleSortGroupRef == tle->ressortgroupref)
-				{
-					result = lappend(result, copyObject(sc));
-					found = true;
+	/*
+	 * If no match in ORDER BY, just add it to the result using default
+	 * sort/group semantics.
+	 */
+	if (!found)
+		*flatresult = addTargetToGroupList(pstate, tle,
+										   *flatresult, *targetlist,
+										   exprLocation(gexpr),
+										   true);
+
+	/*
+	 * _something_ must have assigned us a sortgroupref by now...
+	 */
+
+	return tle->ressortgroupref;
+}
+
+/*
+ * Transform a list of expressions within a GROUP BY clause or grouping set.
+ *
+ * The list of expressions belongs to a single clause within which duplicates
+ * can be safely eliminated.
+ *
+ * Returns an integer list of ressortgroupref values.
+ *
+ * flatresult	reference to flat list of SortGroupClause nodes
+ * pstate		ParseState
+ * list			nodes to transform
+ * targetlist	reference to TargetEntry list
+ * sortClause	ORDER BY clause (SortGroupClause nodes)
+ * exprKind		expression kind
+ * useSQL99		SQL99 rather than SQL92 syntax
+ * toplevel		false if within any grouping set
+ */
+static List *
+transformGroupClauseList(List **flatresult,
+						 ParseState *pstate, List *list,
+						 List **targetlist, List *sortClause,
+						 ParseExprKind exprKind, bool useSQL99, bool toplevel)
+{
+	Bitmapset  *seen_local = NULL;
+	List	   *result = NIL;
+	ListCell   *gl;
+
+	foreach(gl, list)
+	{
+		Node        *gexpr = (Node *) lfirst(gl);
+
+		Index ref = transformGroupClauseExpr(flatresult,
+											 seen_local,
+											 pstate,
+											 gexpr,
+											 targetlist,
+											 sortClause,
+											 exprKind,
+											 useSQL99,
+											 toplevel);
+		if (ref > 0)
+		{
+			seen_local = bms_add_member(seen_local, ref);
+			result = lappend_int(result, ref);
+		}
+	}
+
+	return result;
+}
+
+/*
+ * Transform a grouping set and (recursively) its content.
+ *
+ * The grouping set might be a GROUPING SETS node with other grouping sets
+ * inside it, but SETS within SETS have already been flattened out before
+ * reaching here.
+ *
+ * Returns the transformed node, which now contains SIMPLE nodes with lists
+ * of ressortgrouprefs rather than expressions.
+ *
+ * flatresult	reference to flat list of SortGroupClause nodes
+ * pstate		ParseState
+ * gset			grouping set to transform
+ * targetlist	reference to TargetEntry list
+ * sortClause	ORDER BY clause (SortGroupClause nodes)
+ * exprKind		expression kind
+ * useSQL99		SQL99 rather than SQL92 syntax
+ * toplevel		false if within any grouping set
+ */
+static Node *
+transformGroupingSet(List **flatresult,
+					 ParseState *pstate, GroupingSet *gset,
+					 List **targetlist, List *sortClause,
+					 ParseExprKind exprKind, bool useSQL99, bool toplevel)
+{
+	ListCell   *gl;
+	List	   *content = NIL;
+
+	Assert(toplevel || gset->kind != GROUPING_SET_SETS);
+
+	foreach(gl, gset->content)
+	{
+		Node   *n = lfirst(gl);
+
+		if (IsA(n, List))
+		{
+			List *l = transformGroupClauseList(flatresult,
+											   pstate, (List *) n,
+											   targetlist, sortClause,
+											   exprKind, useSQL99, false);
+
+			content = lappend(content, makeGroupingSet(GROUPING_SET_SIMPLE,
+													   l,
+													   exprLocation(n)));
+		}
+		else if (IsA(n, GroupingSet))
+		{
+			GroupingSet *gset2 = (GroupingSet *) lfirst(gl);
+
+			content = lappend(content, transformGroupingSet(flatresult,
+															pstate, gset2,
+															targetlist, sortClause,
+															exprKind, useSQL99, false));
+		}
+		else
+		{
+			Index ref = transformGroupClauseExpr(flatresult,
+												 NULL,
+												 pstate,
+												 n,
+												 targetlist,
+												 sortClause,
+												 exprKind,
+												 useSQL99,
+												 false);
+
+			content = lappend(content, makeGroupingSet(GROUPING_SET_SIMPLE,
+													   list_make1_int(ref),
+													   exprLocation(n)));
+		}
+	}
+
+	/* Arbitrarily cap the size of CUBE, which has exponential growth */
+	if (gset->kind == GROUPING_SET_CUBE)
+	{
+		if (list_length(content) > 12)
+			ereport(ERROR,
+					(errcode(ERRCODE_TOO_MANY_COLUMNS),
+					 errmsg("CUBE is limited to 12 elements"),
+					 parser_errposition(pstate, gset->location)));
+	}
+
+	return (Node *) makeGroupingSet(gset->kind, content, gset->location);
+}
+
+
+/*
+ * transformGroupClause -
+ *	  transform a GROUP BY clause
+ *
+ * GROUP BY items will be added to the targetlist (as resjunk columns)
+ * if not already present, so the targetlist must be passed by reference.
+ *
+ * This is also used for window PARTITION BY clauses (which act almost the
+ * same, but are always interpreted per SQL99 rules).
+ *
+ * Grouping sets make this a lot more complex than it was. Our goal here is
+ * twofold: we make a flat list of SortGroupClause nodes referencing each
+ * distinct expression used for grouping, with those expressions added to the
+ * targetlist if needed. At the same time, we build the groupingSets tree,
+ * which stores only ressortgrouprefs as integer lists inside GroupingSet nodes
+ * (possibly nested, but limited in depth: a GROUPING_SET_SETS node can contain
+ * nested SIMPLE, CUBE or ROLLUP nodes, but not further SETS nodes, since we
+ * flatten those out; CUBE and ROLLUP themselves can contain only SIMPLE nodes).
+ *
+ * We skip much of the hard work if there are no grouping sets.
+ *
+ * One subtlety is that the groupClause list can end up empty while the
+ * groupingSets list is not; this happens if there are only empty grouping
+ * sets, or an explicit GROUP BY (). This has the same effect as specifying
+ * aggregates or a HAVING clause with no GROUP BY; the output is one row per
+ * grouping set even if the input is empty.
+ *
+ * Returns the transformed (flat) groupClause.
+ *
+ * pstate		ParseState
+ * grouplist	clause to transform
+ * groupingSets	reference to list to contain the grouping set tree
+ * targetlist	reference to TargetEntry list
+ * sortClause	ORDER BY clause (SortGroupClause nodes)
+ * exprKind		expression kind
+ * useSQL99		SQL99 rather than SQL92 syntax
+ */
+List *
+transformGroupClause(ParseState *pstate, List *grouplist, List **groupingSets,
+					 List **targetlist, List *sortClause,
+					 ParseExprKind exprKind, bool useSQL99)
+{
+	List	   *result = NIL;
+	List	   *flat_grouplist;
+	List	   *gsets = NIL;
+	ListCell   *gl;
+	bool        hasGroupingSets = false;
+	Bitmapset  *seen_local = NULL;
+
+	/*
+	 * Recursively flatten implicit RowExprs. (Technically this is only
+	 * needed for GROUP BY, per the syntax rules for grouping sets, but
+	 * we do it anyway.)
+	 */
+	flat_grouplist = (List *) flatten_grouping_sets((Node *) grouplist,
+													true,
+													&hasGroupingSets);
+
+	/*
+	 * If the list is now empty, but hasGroupingSets is true, it's because
+	 * we elided redundant empty grouping sets. Restore a single empty
+	 * grouping set to leave a canonical form: GROUP BY ().
+	 */
+
+	if (flat_grouplist == NIL && hasGroupingSets)
+	{
+		flat_grouplist = list_make1(makeGroupingSet(GROUPING_SET_EMPTY,
+													NIL,
+													exprLocation((Node *) grouplist)));
+	}
+
+	foreach(gl, flat_grouplist)
+	{
+		Node        *gexpr = (Node *) lfirst(gl);
+
+		if (IsA(gexpr, GroupingSet))
+		{
+			GroupingSet *gset = (GroupingSet *) gexpr;
+
+			switch (gset->kind)
+			{
+				case GROUPING_SET_EMPTY:
+					gsets = lappend(gsets, gset);
+					break;
+				case GROUPING_SET_SIMPLE:
+					/* can't happen */
+					Assert(false);
+					break;
+				case GROUPING_SET_SETS:
+				case GROUPING_SET_CUBE:
+				case GROUPING_SET_ROLLUP:
+					gsets = lappend(gsets,
+									transformGroupingSet(&result,
+														 pstate, gset,
+														 targetlist, sortClause,
+														 exprKind, useSQL99, true));
 					break;
-				}
 			}
 		}
+		else
+		{
+			Index ref = transformGroupClauseExpr(&result, seen_local,
+												 pstate, gexpr,
+												 targetlist, sortClause,
+												 exprKind, useSQL99, true);
 
-		/*
-		 * If no match in ORDER BY, just add it to the result using default
-		 * sort/group semantics.
-		 */
-		if (!found)
-			result = addTargetToGroupList(pstate, tle,
-										  result, *targetlist,
-										  exprLocation(gexpr),
-										  true);
+			if (ref > 0)
+			{
+				seen_local = bms_add_member(seen_local, ref);
+				if (hasGroupingSets)
+					gsets = lappend(gsets,
+									makeGroupingSet(GROUPING_SET_SIMPLE,
+													list_make1_int(ref),
+													exprLocation(gexpr)));
+			}
+		}
 	}
 
+	/* parser should prevent this */
+	Assert(gsets == NIL || groupingSets != NULL);
+
+	if (groupingSets)
+		*groupingSets = gsets;
+
 	return result;
 }
 
@@ -1841,6 +2257,7 @@ transformWindowDefinitions(ParseState *pstate,
 										  true /* force SQL99 rules */ );
 		partitionClause = transformGroupClause(pstate,
 											   windef->partitionClause,
+											   NULL,
 											   targetlist,
 											   orderClause,
 											   EXPR_KIND_WINDOW_PARTITION,
diff --git a/src/backend/parser/parse_expr.c b/src/backend/parser/parse_expr.c
index f0f0488..dea74cc 100644
--- a/src/backend/parser/parse_expr.c
+++ b/src/backend/parser/parse_expr.c
@@ -32,6 +32,7 @@
 #include "parser/parse_relation.h"
 #include "parser/parse_target.h"
 #include "parser/parse_type.h"
+#include "parser/parse_agg.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
 #include "utils/xml.h"
@@ -261,6 +262,10 @@ transformExprRecurse(ParseState *pstate, Node *expr)
 			result = transformMultiAssignRef(pstate, (MultiAssignRef *) expr);
 			break;
 
+		case T_GroupingFunc:
+			result = transformGroupingFunc(pstate, (GroupingFunc *) expr);
+			break;
+
 		case T_NamedArgExpr:
 			{
 				NamedArgExpr *na = (NamedArgExpr *) expr;
diff --git a/src/backend/parser/parse_target.c b/src/backend/parser/parse_target.c
index 3724330..7125b76 100644
--- a/src/backend/parser/parse_target.c
+++ b/src/backend/parser/parse_target.c
@@ -1675,6 +1675,10 @@ FigureColnameInternal(Node *node, char **name)
 			break;
 		case T_CollateClause:
 			return FigureColnameInternal(((CollateClause *) node)->arg, name);
+		case T_GroupingFunc:
+			/* make GROUPING() act like a regular function */
+			*name = "grouping";
+			return 2;
 		case T_SubLink:
 			switch (((SubLink *) node)->subLinkType)
 			{
diff --git a/src/backend/rewrite/rewriteHandler.c b/src/backend/rewrite/rewriteHandler.c
index b8e6e7a..6e82a6b 100644
--- a/src/backend/rewrite/rewriteHandler.c
+++ b/src/backend/rewrite/rewriteHandler.c
@@ -2109,7 +2109,7 @@ view_query_is_auto_updatable(Query *viewquery, bool check_cols)
 	if (viewquery->distinctClause != NIL)
 		return gettext_noop("Views containing DISTINCT are not automatically updatable.");
 
-	if (viewquery->groupClause != NIL)
+	if (viewquery->groupClause != NIL || viewquery->groupingSets)
 		return gettext_noop("Views containing GROUP BY are not automatically updatable.");
 
 	if (viewquery->havingQual != NULL)
diff --git a/src/backend/rewrite/rewriteManip.c b/src/backend/rewrite/rewriteManip.c
index 75dd41e..3ab8f18 100644
--- a/src/backend/rewrite/rewriteManip.c
+++ b/src/backend/rewrite/rewriteManip.c
@@ -92,6 +92,12 @@ contain_aggs_of_level_walker(Node *node,
 			return true;		/* abort the tree traversal and return true */
 		/* else fall through to examine argument */
 	}
+	if (IsA(node, GroupingFunc))
+	{
+		if (((GroupingFunc *) node)->agglevelsup == context->sublevels_up)
+			return true;
+		/* else fall through to examine argument */
+	}
 	if (IsA(node, Query))
 	{
 		/* Recurse into subselects */
@@ -157,6 +163,15 @@ locate_agg_of_level_walker(Node *node,
 		}
 		/* else fall through to examine argument */
 	}
+	if (IsA(node, GroupingFunc))
+	{
+		if (((GroupingFunc *) node)->agglevelsup == context->sublevels_up &&
+			((GroupingFunc *) node)->location >= 0)
+		{
+			context->agg_location = ((GroupingFunc *) node)->location;
+			return true;		/* abort the tree traversal and return true */
+		}
+	}
 	if (IsA(node, Query))
 	{
 		/* Recurse into subselects */
@@ -703,6 +718,14 @@ IncrementVarSublevelsUp_walker(Node *node,
 			agg->agglevelsup += context->delta_sublevels_up;
 		/* fall through to recurse into argument */
 	}
+	if (IsA(node, GroupingFunc))
+	{
+		GroupingFunc   *grp = (GroupingFunc *) node;
+
+		if (grp->agglevelsup >= context->min_sublevels_up)
+			grp->agglevelsup += context->delta_sublevels_up;
+		/* fall through to recurse into argument */
+	}
 	if (IsA(node, PlaceHolderVar))
 	{
 		PlaceHolderVar *phv = (PlaceHolderVar *) node;
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index dd748ac..e16f947 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -42,6 +42,7 @@
 #include "nodes/nodeFuncs.h"
 #include "optimizer/tlist.h"
 #include "parser/keywords.h"
+#include "parser/parse_node.h"
 #include "parser/parse_agg.h"
 #include "parser/parse_func.h"
 #include "parser/parse_oper.h"
@@ -103,6 +104,8 @@ typedef struct
 	int			wrapColumn;		/* max line length, or -1 for no limit */
 	int			indentLevel;	/* current indent level for prettyprint */
 	bool		varprefix;		/* TRUE to print prefixes on Vars */
+	ParseExprKind special_exprkind;	/* set only for exprkinds needing
+									 * special handling */
 } deparse_context;
 
 /*
@@ -361,9 +364,11 @@ static void get_target_list(List *targetList, deparse_context *context,
 static void get_setop_query(Node *setOp, Query *query,
 				deparse_context *context,
 				TupleDesc resultDesc);
-static Node *get_rule_sortgroupclause(SortGroupClause *srt, List *tlist,
+static Node *get_rule_sortgroupclause(Index ref, List *tlist,
 						 bool force_colno,
 						 deparse_context *context);
+static void get_rule_groupingset(GroupingSet *gset, List *targetlist,
+								 bool omit_parens, deparse_context *context);
 static void get_rule_orderby(List *orderList, List *targetList,
 				 bool force_colno, deparse_context *context);
 static void get_rule_windowclause(Query *query, deparse_context *context);
@@ -411,8 +416,9 @@ static void printSubscripts(ArrayRef *aref, deparse_context *context);
 static char *get_relation_name(Oid relid);
 static char *generate_relation_name(Oid relid, List *namespaces);
 static char *generate_function_name(Oid funcid, int nargs,
-					   List *argnames, Oid *argtypes,
-					   bool has_variadic, bool *use_variadic_p);
+							List *argnames, Oid *argtypes,
+							bool has_variadic, bool *use_variadic_p,
+							ParseExprKind special_exprkind);
 static char *generate_operator_name(Oid operid, Oid arg1, Oid arg2);
 static text *string_to_text(char *str);
 static char *flatten_reloptions(Oid relid);
@@ -870,6 +876,7 @@ pg_get_triggerdef_worker(Oid trigid, bool pretty)
 		context.prettyFlags = pretty ? PRETTYFLAG_PAREN | PRETTYFLAG_INDENT : PRETTYFLAG_INDENT;
 		context.wrapColumn = WRAP_COLUMN_DEFAULT;
 		context.indentLevel = PRETTYINDENT_STD;
+		context.special_exprkind = EXPR_KIND_NONE;
 
 		get_rule_expr(qual, &context, false);
 
@@ -879,7 +886,7 @@ pg_get_triggerdef_worker(Oid trigid, bool pretty)
 	appendStringInfo(&buf, "EXECUTE PROCEDURE %s(",
 					 generate_function_name(trigrec->tgfoid, 0,
 											NIL, NULL,
-											false, NULL));
+											false, NULL, EXPR_KIND_NONE));
 
 	if (trigrec->tgnargs > 0)
 	{
@@ -2476,6 +2483,7 @@ deparse_expression_pretty(Node *expr, List *dpcontext,
 	context.prettyFlags = prettyFlags;
 	context.wrapColumn = WRAP_COLUMN_DEFAULT;
 	context.indentLevel = startIndent;
+	context.special_exprkind = EXPR_KIND_NONE;
 
 	get_rule_expr(expr, &context, showimplicit);
 
@@ -4046,6 +4054,7 @@ make_ruledef(StringInfo buf, HeapTuple ruletup, TupleDesc rulettc,
 		context.prettyFlags = prettyFlags;
 		context.wrapColumn = WRAP_COLUMN_DEFAULT;
 		context.indentLevel = PRETTYINDENT_STD;
+		context.special_exprkind = EXPR_KIND_NONE;
 
 		set_deparse_for_query(&dpns, query, NIL);
 
@@ -4197,6 +4206,7 @@ get_query_def(Query *query, StringInfo buf, List *parentnamespace,
 	context.prettyFlags = prettyFlags;
 	context.wrapColumn = wrapColumn;
 	context.indentLevel = startIndent;
+	context.special_exprkind = EXPR_KIND_NONE;
 
 	set_deparse_for_query(&dpns, query, parentnamespace);
 
@@ -4538,7 +4548,7 @@ get_basic_select_query(Query *query, deparse_context *context,
 				SortGroupClause *srt = (SortGroupClause *) lfirst(l);
 
 				appendStringInfoString(buf, sep);
-				get_rule_sortgroupclause(srt, query->targetList,
+				get_rule_sortgroupclause(srt->tleSortGroupRef, query->targetList,
 										 false, context);
 				sep = ", ";
 			}
@@ -4563,20 +4573,43 @@ get_basic_select_query(Query *query, deparse_context *context,
 	}
 
 	/* Add the GROUP BY clause if given */
-	if (query->groupClause != NULL)
+	if (query->groupClause != NULL || query->groupingSets != NULL)
 	{
+		ParseExprKind	save_exprkind;
+
 		appendContextKeyword(context, " GROUP BY ",
 							 -PRETTYINDENT_STD, PRETTYINDENT_STD, 1);
-		sep = "";
-		foreach(l, query->groupClause)
+
+		save_exprkind = context->special_exprkind;
+		context->special_exprkind = EXPR_KIND_GROUP_BY;
+
+		if (query->groupingSets == NIL)
 		{
-			SortGroupClause *grp = (SortGroupClause *) lfirst(l);
+			sep = "";
+			foreach(l, query->groupClause)
+			{
+				SortGroupClause *grp = (SortGroupClause *) lfirst(l);
 
-			appendStringInfoString(buf, sep);
-			get_rule_sortgroupclause(grp, query->targetList,
-									 false, context);
-			sep = ", ";
+				appendStringInfoString(buf, sep);
+				get_rule_sortgroupclause(grp->tleSortGroupRef, query->targetList,
+										 false, context);
+				sep = ", ";
+			}
 		}
+		else
+		{
+			sep = "";
+			foreach(l, query->groupingSets)
+			{
+				GroupingSet *grp = lfirst(l);
+
+				appendStringInfoString(buf, sep);
+				get_rule_groupingset(grp, query->targetList, true, context);
+				sep = ", ";
+			}
+		}
+
+		context->special_exprkind = save_exprkind;
 	}
 
 	/* Add the HAVING clause if given */
@@ -4643,7 +4676,7 @@ get_target_list(List *targetList, deparse_context *context,
 		 * different from a whole-row Var).  We need to call get_variable
 		 * directly so that we can tell it to do the right thing.
 		 */
-		if (tle->expr && IsA(tle->expr, Var))
+		if (tle->expr && (IsA(tle->expr, Var) || IsA(tle->expr, GroupedVar)))
 		{
 			attname = get_variable((Var *) tle->expr, 0, true, context);
 		}
@@ -4862,23 +4895,24 @@ get_setop_query(Node *setOp, Query *query, deparse_context *context,
  * Also returns the expression tree, so caller need not find it again.
  */
 static Node *
-get_rule_sortgroupclause(SortGroupClause *srt, List *tlist, bool force_colno,
+get_rule_sortgroupclause(Index ref, List *tlist, bool force_colno,
 						 deparse_context *context)
 {
 	StringInfo	buf = context->buf;
 	TargetEntry *tle;
 	Node	   *expr;
 
-	tle = get_sortgroupclause_tle(srt, tlist);
+	tle = get_sortgroupref_tle(ref, tlist);
 	expr = (Node *) tle->expr;
 
 	/*
-	 * Use column-number form if requested by caller.  Otherwise, if
-	 * expression is a constant, force it to be dumped with an explicit cast
-	 * as decoration --- this is because a simple integer constant is
-	 * ambiguous (and will be misinterpreted by findTargetlistEntry()) if we
-	 * dump it without any decoration.  Otherwise, just dump the expression
-	 * normally.
+	 * Use column-number form if requested by caller.  Otherwise, if expression
+	 * is a constant, force it to be dumped with an explicit cast as decoration
+	 * --- this is because a simple integer constant is ambiguous (and will be
+	 * misinterpreted by findTargetlistEntry()) if we dump it without any
+	 * decoration.  If it's anything more complex than a simple Var, then force
+	 * extra parens around it, to ensure it can't be misinterpreted as a cube()
+	 * or rollup() construct.
 	 */
 	if (force_colno)
 	{
@@ -4887,13 +4921,92 @@ get_rule_sortgroupclause(SortGroupClause *srt, List *tlist, bool force_colno,
 	}
 	else if (expr && IsA(expr, Const))
 		get_const_expr((Const *) expr, context, 1);
+	else if (!expr || IsA(expr, Var))
+		get_rule_expr(expr, context, true);
 	else
+	{
+		/*
+		 * We must force parens for function-like expressions even if
+		 * PRETTY_PAREN is off, since those are the ones in danger of
+		 * being misparsed as CUBE() or ROLLUP() constructs.  For other
+		 * expressions we need to force them only if PRETTY_PAREN is on,
+		 * since otherwise get_rule_expr emits them itself as needed.
+		 */
+		bool	need_paren = (PRETTY_PAREN(context)
+							  || IsA(expr, FuncExpr)
+							  || IsA(expr, Aggref)
+							  || IsA(expr, WindowFunc));
+		if (need_paren)
+			appendStringInfoString(context->buf, "(");
 		get_rule_expr(expr, context, true);
+		if (need_paren)
+			appendStringInfoString(context->buf, ")");
+	}
 
 	return expr;
 }
 
 /*
+ * Display a GroupingSet
+ */
+static void
+get_rule_groupingset(GroupingSet *gset, List *targetlist,
+					 bool omit_parens, deparse_context *context)
+{
+	ListCell   *l;
+	StringInfo	buf = context->buf;
+	bool		omit_child_parens = true;
+	char	   *sep = "";
+
+	switch (gset->kind)
+	{
+		case GROUPING_SET_EMPTY:
+			appendStringInfoString(buf, "()");
+			return;
+
+		case GROUPING_SET_SIMPLE:
+			{
+				if (!omit_parens || list_length(gset->content) != 1)
+					appendStringInfoString(buf, "(");
+
+				foreach(l, gset->content)
+				{
+					Index ref = lfirst_int(l);
+
+					appendStringInfoString(buf, sep);
+					get_rule_sortgroupclause(ref, targetlist,
+											 false, context);
+					sep = ", ";
+				}
+
+				if (!omit_parens || list_length(gset->content) != 1)
+					appendStringInfoString(buf, ")");
+			}
+			return;
+
+		case GROUPING_SET_ROLLUP:
+			appendStringInfoString(buf, "ROLLUP(");
+			break;
+		case GROUPING_SET_CUBE:
+			appendStringInfoString(buf, "CUBE(");
+			break;
+		case GROUPING_SET_SETS:
+			appendStringInfoString(buf, "GROUPING SETS (");
+			omit_child_parens = false;
+			break;
+	}
+
+	foreach(l, gset->content)
+	{
+		appendStringInfoString(buf, sep);
+		get_rule_groupingset(lfirst(l), targetlist, omit_child_parens, context);
+		sep = ", ";
+	}
+
+	appendStringInfoString(buf, ")");
+}
+
+/*
  * Display an ORDER BY list.
  */
 static void
@@ -4913,7 +5026,7 @@ get_rule_orderby(List *orderList, List *targetList,
 		TypeCacheEntry *typentry;
 
 		appendStringInfoString(buf, sep);
-		sortexpr = get_rule_sortgroupclause(srt, targetList,
+		sortexpr = get_rule_sortgroupclause(srt->tleSortGroupRef, targetList,
 											force_colno, context);
 		sortcoltype = exprType(sortexpr);
 		/* See whether operator is default < or > for datatype */
@@ -5013,7 +5126,7 @@ get_rule_windowspec(WindowClause *wc, List *targetList,
 			SortGroupClause *grp = (SortGroupClause *) lfirst(l);
 
 			appendStringInfoString(buf, sep);
-			get_rule_sortgroupclause(grp, targetList,
+			get_rule_sortgroupclause(grp->tleSortGroupRef, targetList,
 									 false, context);
 			sep = ", ";
 		}
@@ -5562,10 +5675,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		pop_child_plan(dpns, &save_dpns);
@@ -5587,10 +5700,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		pop_child_plan(dpns, &save_dpns);
@@ -5610,10 +5723,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		return NULL;
@@ -5653,10 +5766,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		pop_child_plan(dpns, &save_dpns);
@@ -6687,6 +6800,10 @@ get_rule_expr(Node *node, deparse_context *context,
 			(void) get_variable((Var *) node, 0, false, context);
 			break;
 
+		case T_GroupedVar:
+			(void) get_variable((Var *) node, 0, false, context);
+			break;
+
 		case T_Const:
 			get_const_expr((Const *) node, context, 0);
 			break;
@@ -6699,6 +6816,16 @@ get_rule_expr(Node *node, deparse_context *context,
 			get_agg_expr((Aggref *) node, context);
 			break;
 
+		case T_GroupingFunc:
+			{
+				GroupingFunc *gexpr = (GroupingFunc *) node;
+
+				appendStringInfoString(buf, "GROUPING(");
+				get_rule_expr((Node *) gexpr->args, context, true);
+				appendStringInfoChar(buf, ')');
+			}
+			break;
+
 		case T_WindowFunc:
 			get_windowfunc_expr((WindowFunc *) node, context);
 			break;
@@ -7737,7 +7864,8 @@ get_func_expr(FuncExpr *expr, deparse_context *context,
 					 generate_function_name(funcoid, nargs,
 											argnames, argtypes,
 											expr->funcvariadic,
-											&use_variadic));
+											&use_variadic,
+											context->special_exprkind));
 	nargs = 0;
 	foreach(l, expr->args)
 	{
@@ -7769,7 +7897,8 @@ get_agg_expr(Aggref *aggref, deparse_context *context)
 					 generate_function_name(aggref->aggfnoid, nargs,
 											NIL, argtypes,
 											aggref->aggvariadic,
-											&use_variadic),
+											&use_variadic,
+											context->special_exprkind),
 					 (aggref->aggdistinct != NIL) ? "DISTINCT " : "");
 
 	if (AGGKIND_IS_ORDERED_SET(aggref->aggkind))
@@ -7859,7 +7988,8 @@ get_windowfunc_expr(WindowFunc *wfunc, deparse_context *context)
 	appendStringInfo(buf, "%s(",
 					 generate_function_name(wfunc->winfnoid, nargs,
 											argnames, argtypes,
-											false, NULL));
+											false, NULL,
+											context->special_exprkind));
 	/* winstar can be set only in zero-argument aggregates */
 	if (wfunc->winstar)
 		appendStringInfoChar(buf, '*');
@@ -9089,7 +9219,8 @@ generate_relation_name(Oid relid, List *namespaces)
  */
 static char *
 generate_function_name(Oid funcid, int nargs, List *argnames, Oid *argtypes,
-					   bool has_variadic, bool *use_variadic_p)
+					   bool has_variadic, bool *use_variadic_p,
+					   ParseExprKind special_exprkind)
 {
 	char	   *result;
 	HeapTuple	proctup;
@@ -9104,6 +9235,7 @@ generate_function_name(Oid funcid, int nargs, List *argnames, Oid *argtypes,
 	int			p_nvargs;
 	Oid			p_vatype;
 	Oid		   *p_true_typeids;
+	bool		force_qualify = false;
 
 	proctup = SearchSysCache1(PROCOID, ObjectIdGetDatum(funcid));
 	if (!HeapTupleIsValid(proctup))
@@ -9112,6 +9244,17 @@ generate_function_name(Oid funcid, int nargs, List *argnames, Oid *argtypes,
 	proname = NameStr(procform->proname);
 
 	/*
+	 * Because of the parser hacks that avoid fully reserving CUBE and
+	 * ROLLUP, we must force schema qualification in some special cases.
+	 */
+
+	if (special_exprkind == EXPR_KIND_GROUP_BY)
+	{
+		if (strcmp(proname, "cube") == 0 || strcmp(proname, "rollup") == 0)
+			force_qualify = true;
+	}
+
+	/*
 	 * Determine whether VARIADIC should be printed.  We must do this first
 	 * since it affects the lookup rules in func_get_detail().
 	 *
@@ -9142,14 +9285,23 @@ generate_function_name(Oid funcid, int nargs, List *argnames, Oid *argtypes,
 	/*
 	 * The idea here is to schema-qualify only if the parser would fail to
 	 * resolve the correct function given the unqualified func name with the
-	 * specified argtypes and VARIADIC flag.
+	 * specified argtypes and VARIADIC flag.  But if we already decided to
+	 * force qualification, then we can skip the lookup and pretend we didn't
+	 * find it.
 	 */
-	p_result = func_get_detail(list_make1(makeString(proname)),
-							   NIL, argnames, nargs, argtypes,
-							   !use_variadic, true,
-							   &p_funcid, &p_rettype,
-							   &p_retset, &p_nvargs, &p_vatype,
-							   &p_true_typeids, NULL);
+	if (!force_qualify)
+		p_result = func_get_detail(list_make1(makeString(proname)),
+								   NIL, argnames, nargs, argtypes,
+								   !use_variadic, true,
+								   &p_funcid, &p_rettype,
+								   &p_retset, &p_nvargs, &p_vatype,
+								   &p_true_typeids, NULL);
+	else
+	{
+		p_result = FUNCDETAIL_NOTFOUND;
+		p_funcid = InvalidOid;
+	}
+
 	if ((p_result == FUNCDETAIL_NORMAL ||
 		 p_result == FUNCDETAIL_AGGREGATE ||
 		 p_result == FUNCDETAIL_WINDOWFUNC) &&
diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index 1ba103c..ceb7663 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -3158,6 +3158,8 @@ add_unique_group_var(PlannerInfo *root, List *varinfos,
  *	groupExprs - list of expressions being grouped by
  *	input_rows - number of rows estimated to arrive at the group/unique
  *		filter step
+ *	pgset - NULL, or a List** pointing to a grouping set to filter the
+ *		groupExprs against
  *
  * Given the lack of any cross-correlation statistics in the system, it's
  * impossible to do anything really trustworthy with GROUP BY conditions
@@ -3205,11 +3207,13 @@ add_unique_group_var(PlannerInfo *root, List *varinfos,
  * but we don't have the info to do better).
  */
 double
-estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
+estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows,
+					List **pgset)
 {
 	List	   *varinfos = NIL;
 	double		numdistinct;
 	ListCell   *l;
+	int			i;
 
 	/*
 	 * We don't ever want to return an estimate of zero groups, as that tends
@@ -3224,7 +3228,7 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 	 * for normal cases with GROUP BY or DISTINCT, but it is possible for
 	 * corner cases with set operations.)
 	 */
-	if (groupExprs == NIL)
+	if (groupExprs == NIL || (pgset && list_length(*pgset) < 1))
 		return 1.0;
 
 	/*
@@ -3236,6 +3240,7 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 	 */
 	numdistinct = 1.0;
 
+	i = 0;
 	foreach(l, groupExprs)
 	{
 		Node	   *groupexpr = (Node *) lfirst(l);
@@ -3243,6 +3248,10 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 		List	   *varshere;
 		ListCell   *l2;
 
+		/* is expression in this grouping set? */
+		if (pgset && !list_member_int(*pgset, i++))
+			continue;
+
 		/* Short-circuit for expressions returning boolean */
 		if (exprType(groupexpr) == BOOLOID)
 		{
diff --git a/src/include/commands/explain.h b/src/include/commands/explain.h
index 6e26950..11b1f77 100644
--- a/src/include/commands/explain.h
+++ b/src/include/commands/explain.h
@@ -82,6 +82,8 @@ extern void ExplainSeparatePlans(ExplainState *es);
 
 extern void ExplainPropertyList(const char *qlabel, List *data,
 					ExplainState *es);
+extern void ExplainPropertyListNested(const char *qlabel, List *data,
+					ExplainState *es);
 extern void ExplainPropertyText(const char *qlabel, const char *value,
 					ExplainState *es);
 extern void ExplainPropertyInteger(const char *qlabel, int value,
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index 41288ed..052ea0a 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -130,6 +130,8 @@ typedef struct ExprContext
 	Datum	   *ecxt_aggvalues; /* precomputed values for aggs/windowfuncs */
 	bool	   *ecxt_aggnulls;	/* null flags for aggs/windowfuncs */
 
+	Bitmapset  *grouped_cols;	/* which columns exist in current grouping set */
+
 	/* Value to substitute for CaseTestExpr nodes in expression */
 	Datum		caseValue_datum;
 	bool		caseValue_isNull;
@@ -407,6 +409,11 @@ typedef struct EState
 	HeapTuple  *es_epqTuple;	/* array of EPQ substitute tuples */
 	bool	   *es_epqTupleSet; /* true if EPQ tuple is provided */
 	bool	   *es_epqScanDone; /* true if EPQ tuple has been fetched */
+
+	/*
+	 * Head of the current chain of linked aggregate nodes, if any.
+	 */
+	struct AggState	   *agg_chain_head;
 } EState;
 
 
@@ -595,6 +602,21 @@ typedef struct AggrefExprState
 } AggrefExprState;
 
 /* ----------------
+ *		GroupingFuncExprState node
+ *
+ * The list of column numbers refers to the input tuples of the Agg node to
+ * which the GroupingFunc belongs, and may contain 0 for references to columns
+ * that are only present in grouping sets processed by different Agg nodes (and
+ * which are therefore always considered "grouping" here).
+ * ----------------
+ */
+typedef struct GroupingFuncExprState
+{
+	ExprState	xprstate;
+	List	   *clauses;		/* integer list of column numbers */
+} GroupingFuncExprState;
+
+/* ----------------
  *		WindowFuncExprState node
  * ----------------
  */
@@ -1742,19 +1764,27 @@ typedef struct GroupState
 /* these structs are private in nodeAgg.c: */
 typedef struct AggStatePerAggData *AggStatePerAgg;
 typedef struct AggStatePerGroupData *AggStatePerGroup;
+typedef struct AggStatePerGroupingSetData *AggStatePerGroupingSet;
 
 typedef struct AggState
 {
 	ScanState	ss;				/* its first field is NodeTag */
 	List	   *aggs;			/* all Aggref nodes in targetlist & quals */
 	int			numaggs;		/* length of list (could be zero!) */
+	int			numsets;		/* number of grouping sets (or 0) */
 	FmgrInfo   *eqfunctions;	/* per-grouping-field equality fns */
 	FmgrInfo   *hashfunctions;	/* per-grouping-field hash fns */
 	AggStatePerAgg peragg;		/* per-Aggref information */
-	MemoryContext aggcontext;	/* memory context for long-lived data */
+	ExprContext **aggcontexts;	/* econtexts for long-lived data (per GS) */
 	ExprContext *tmpcontext;	/* econtext for input expressions */
 	AggStatePerAgg curperagg;	/* identifies currently active aggregate */
+	bool		input_done;		/* indicates end of input */
 	bool		agg_done;		/* indicates completion of Agg scan */
+	bool		chain_done;		/* indicates completion of chained fetch */
+	int			projected_set;	/* the last projected grouping set */
+	int			current_set;	/* the grouping set currently being evaluated */
+	Bitmapset **grouped_cols;	/* column groupings for rollup */
+	int		   *gset_lengths;	/* lengths of grouping sets */
 	/* these fields are used in AGG_PLAIN and AGG_SORTED modes: */
 	AggStatePerGroup pergroup;	/* per-Aggref-per-group working state */
 	HeapTuple	grp_firstTuple; /* copy of first tuple of current group */
@@ -1764,6 +1794,12 @@ typedef struct AggState
 	List	   *hash_needed;	/* list of columns needed in hash table */
 	bool		table_filled;	/* hash table filled yet? */
 	TupleHashIterator hashiter; /* for iterating through hash table */
+	int			chain_depth;	/* number of chained child nodes */
+	int			chain_rescan;	/* rescan indicator */
+	int			chain_eflags;	/* saved eflags for rewind optimization */
+	bool		chain_top;		/* true for the "top" node in a chain */
+	struct AggState	*chain_head;
+	Tuplestorestate *chain_tuplestore;
 } AggState;
 
 /* ----------------
diff --git a/src/include/nodes/makefuncs.h b/src/include/nodes/makefuncs.h
index 4dff6a0..01d9fed 100644
--- a/src/include/nodes/makefuncs.h
+++ b/src/include/nodes/makefuncs.h
@@ -81,4 +81,6 @@ extern DefElem *makeDefElem(char *name, Node *arg);
 extern DefElem *makeDefElemExtended(char *nameSpace, char *name, Node *arg,
 					DefElemAction defaction);
 
+extern GroupingSet *makeGroupingSet(GroupingSetKind kind, List *content, int location);
+
 #endif   /* MAKEFUNC_H */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 97ef0fc..4d56f50 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -131,9 +131,11 @@ typedef enum NodeTag
 	T_RangeVar,
 	T_Expr,
 	T_Var,
+	T_GroupedVar,
 	T_Const,
 	T_Param,
 	T_Aggref,
+	T_GroupingFunc,
 	T_WindowFunc,
 	T_ArrayRef,
 	T_FuncExpr,
@@ -184,6 +186,7 @@ typedef enum NodeTag
 	T_GenericExprState,
 	T_WholeRowVarExprState,
 	T_AggrefExprState,
+	T_GroupingFuncExprState,
 	T_WindowFuncExprState,
 	T_ArrayRefExprState,
 	T_FuncExprState,
@@ -401,6 +404,7 @@ typedef enum NodeTag
 	T_RangeTblFunction,
 	T_WithCheckOption,
 	T_SortGroupClause,
+	T_GroupingSet,
 	T_WindowClause,
 	T_PrivGrantee,
 	T_FuncWithArgs,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index b1dfa85..815a786 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -136,6 +136,8 @@ typedef struct Query
 
 	List	   *groupClause;	/* a list of SortGroupClause's */
 
+	List	   *groupingSets;	/* a list of GroupingSet's if present */
+
 	Node	   *havingQual;		/* qualifications applied to groups */
 
 	List	   *windowClause;	/* a list of WindowClause's */
@@ -933,6 +935,73 @@ typedef struct SortGroupClause
 } SortGroupClause;
 
 /*
+ * GroupingSet -
+ *		representation of CUBE, ROLLUP and GROUPING SETS clauses
+ *
+ * In a Query with grouping sets, the groupClause contains a flat list of
+ * SortGroupClause nodes for each distinct expression used.  The actual
+ * structure of the GROUP BY clause is given by the groupingSets tree.
+ *
+ * In the raw parser output, GroupingSet nodes (of all types except SIMPLE
+ * which is not used) are potentially mixed in with the expressions in the
+ * groupClause of the SelectStmt.  (An expression can't contain a GroupingSet,
+ * but a list may mix GroupingSet and expression nodes.)  At this stage, the
+ * content of each node is a list of expressions, some of which may be RowExprs
+ * which represent sublists rather than actual row constructors, and nested
+ * GroupingSet nodes where legal in the grammar.  The structure directly
+ * reflects the query syntax.
+ *
+ * In parse analysis, the transformed expressions are used to build the tlist
+ * and groupClause list (of SortGroupClause nodes), and the groupingSets tree
+ * is eventually reduced to a fixed format:
+ *
+ * EMPTY nodes represent (), and obviously have no content
+ *
+ * SIMPLE nodes represent a list of one or more expressions to be treated as an
+ * atom by the enclosing structure; the content is an integer list of
+ * ressortgroupref values (see SortGroupClause)
+ *
+ * CUBE and ROLLUP nodes contain a list of one or more SIMPLE nodes.
+ *
+ * SETS nodes contain a list of EMPTY, SIMPLE, CUBE or ROLLUP nodes, but after
+ * parse analysis they cannot contain more SETS nodes; enough of the syntactic
+ * transforms of the spec have been applied that we no longer have arbitrarily
+ * deep nesting (though we still preserve the use of cube/rollup).
+ *
+ * Note that if the groupingSets tree contains no SIMPLE nodes (only EMPTY
+ * nodes at the leaves), then the groupClause will be empty, but this is still
+ * an aggregation query (similar to using aggs or HAVING without GROUP BY).
+ *
+ * As an example, the following clause:
+ *
+ * GROUP BY GROUPING SETS ((a,b), CUBE(c,(d,e)))
+ *
+ * looks like this after raw parsing:
+ *
+ * SETS( RowExpr(a,b) , CUBE( c, RowExpr(d,e) ) )
+ *
+ * and parse analysis converts it to:
+ *
+ * SETS( SIMPLE(1,2), CUBE( SIMPLE(3), SIMPLE(4,5) ) )
+ */
+typedef enum
+{
+	GROUPING_SET_EMPTY,
+	GROUPING_SET_SIMPLE,
+	GROUPING_SET_ROLLUP,
+	GROUPING_SET_CUBE,
+	GROUPING_SET_SETS
+} GroupingSetKind;
+
+typedef struct GroupingSet
+{
+	NodeTag		type;
+	GroupingSetKind kind;
+	List	   *content;
+	int			location;
+} GroupingSet;
+
+/*
  * WindowClause -
  *		transformed representation of WINDOW and OVER clauses
  *
diff --git a/src/include/nodes/pg_list.h b/src/include/nodes/pg_list.h
index a175000..729456d 100644
--- a/src/include/nodes/pg_list.h
+++ b/src/include/nodes/pg_list.h
@@ -229,8 +229,9 @@ extern List *list_union_int(const List *list1, const List *list2);
 extern List *list_union_oid(const List *list1, const List *list2);
 
 extern List *list_intersection(const List *list1, const List *list2);
+extern List *list_intersection_int(const List *list1, const List *list2);
 
-/* currently, there's no need for list_intersection_int etc */
+/* currently, there's no need for list_intersection_ptr etc */
 
 extern List *list_difference(const List *list1, const List *list2);
 extern List *list_difference_ptr(const List *list1, const List *list2);
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index 316c9ce..d44ca52 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -655,6 +655,7 @@ typedef enum AggStrategy
 {
 	AGG_PLAIN,					/* simple agg across all input rows */
 	AGG_SORTED,					/* grouped agg, input must be sorted */
+	AGG_CHAINED,				/* chained agg, input must be sorted */
 	AGG_HASHED					/* grouped agg, use internal hashtable */
 } AggStrategy;
 
@@ -662,10 +663,12 @@ typedef struct Agg
 {
 	Plan		plan;
 	AggStrategy aggstrategy;
+	int			chain_depth;	/* number of associated ChainAggs in tree */
 	int			numCols;		/* number of grouping columns */
 	AttrNumber *grpColIdx;		/* their indexes in the target list */
 	Oid		   *grpOperators;	/* equality operators to compare with */
 	long		numGroups;		/* estimated number of groups in input */
+	List	   *groupingSets;	/* grouping sets to use */
 } Agg;
 
 /* ----------------
diff --git a/src/include/nodes/primnodes.h b/src/include/nodes/primnodes.h
index 1d06f42..2425658 100644
--- a/src/include/nodes/primnodes.h
+++ b/src/include/nodes/primnodes.h
@@ -160,6 +160,22 @@ typedef struct Var
 } Var;
 
 /*
+ * GroupedVar - expression node representing a variable that might be
+ * involved in a grouping set.
+ *
+ * This is identical to a Var node except in execution; when evaluated, it
+ * is conditionally NULL depending on the active grouping set.  Vars are
+ * converted to GroupedVars (if needed) only late in planning.
+ *
+ * (Because they appear only late in planning, most code that handles Vars
+ * doesn't need to know about these, either because they don't exist yet or
+ * because optimizations specific to Vars are intentionally not applied to
+ * GroupedVars.)
+ */
+
+typedef Var GroupedVar;
+
+/*
  * Const
  */
 typedef struct Const
@@ -273,6 +289,41 @@ typedef struct Aggref
 } Aggref;
 
 /*
+ * GroupingFunc
+ *
+ * A GroupingFunc is a GROUPING(...) expression, which behaves in many ways
+ * like an aggregate function (e.g. it "belongs" to a specific query level,
+ * which might not be the one immediately containing it), but also differs in
+ * an important respect: it never evaluates its arguments; they merely
+ * designate expressions from the GROUP BY clause of the query level to which
+ * it belongs.
+ *
+ * The spec defines the evaluation of GROUPING() purely by syntactic
+ * replacement, but we make it a real expression for optimization purposes so
+ * that one Agg node can handle multiple grouping sets at once.  Evaluating the
+ * result only needs the column positions to check against the grouping set
+ * being projected.  However, for EXPLAIN to produce meaningful output, we have
+ * to keep the original expressions around, since expression deparse does not
+ * give us any feasible way to get at the GROUP BY clause.
+ *
+ * Also, we treat two GroupingFunc nodes as equal if they have equal argument
+ * lists and agglevelsup, without comparing the refs and cols annotations.
+ *
+ * In raw parse output we have only the args list; parse analysis fills in the
+ * refs list, and the planner fills in the cols list.
+ */
+typedef struct GroupingFunc
+{
+	Expr		xpr;
+	List	   *args;			/* arguments, not evaluated but kept for
+								 * benefit of EXPLAIN etc. */
+	List	   *refs;			/* ressortgrouprefs of arguments */
+	List	   *cols;			/* actual column positions set by planner */
+	Index		agglevelsup;	/* same as Aggref.agglevelsup */
+	int			location;		/* token location */
+} GroupingFunc;
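For reference, the value that GROUPING() ultimately yields can be sketched in a few lines of Python (illustrative only; the name `grouping_value` is invented here): each argument contributes one result bit, leftmost argument most significant, and a bit is 1 when the corresponding column is not part of the active grouping set:

```python
def grouping_value(arg_cols, active_set):
    # One bit per argument, leftmost argument = most significant bit;
    # a 1 bit means "this column is NOT grouped in the active set".
    value = 0
    for col in arg_cols:
        value = (value << 1) | (0 if col in active_set else 1)
    return value

# grouping(a,b) for the three kinds of set produced by ROLLUP(a,b):
print(grouping_value(("a", "b"), {"a", "b"}))  # 0
print(grouping_value(("a", "b"), {"a"}))       # 1
print(grouping_value(("a", "b"), set()))       # 3
```

These are the 0/1/3 values visible in the "grouping" column of the regression test output below.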
+
+/*
  * WindowFunc
  */
 typedef struct WindowFunc
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index 6845a40..ccfe66d 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -260,6 +260,11 @@ typedef struct PlannerInfo
 
 	/* optional private data for join_search_hook, e.g., GEQO */
 	void	   *join_search_private;
+
+	/* for GroupedVar fixup in setrefs */
+	AttrNumber *groupColIdx;
+	/* for GroupingFunc fixup in setrefs */
+	AttrNumber *grouping_map;
 } PlannerInfo;
 
 
diff --git a/src/include/optimizer/planmain.h b/src/include/optimizer/planmain.h
index 082f7d7..2ecda68 100644
--- a/src/include/optimizer/planmain.h
+++ b/src/include/optimizer/planmain.h
@@ -58,6 +58,8 @@ extern Sort *make_sort_from_groupcols(PlannerInfo *root, List *groupcls,
 extern Agg *make_agg(PlannerInfo *root, List *tlist, List *qual,
 		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
 		 int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators,
+		 List *groupingSets,
+		 int *chain_depth_p,
 		 long numGroups,
 		 Plan *lefttree);
 extern WindowAgg *make_windowagg(PlannerInfo *root, List *tlist,
diff --git a/src/include/optimizer/tlist.h b/src/include/optimizer/tlist.h
index 3dc8bab..b0f0f19 100644
--- a/src/include/optimizer/tlist.h
+++ b/src/include/optimizer/tlist.h
@@ -43,6 +43,9 @@ extern Node *get_sortgroupclause_expr(SortGroupClause *sgClause,
 extern List *get_sortgrouplist_exprs(List *sgClauses,
 						List *targetList);
 
+extern SortGroupClause *get_sortgroupref_clause(Index sortref,
+					 List *clauses);
+
 extern Oid *extract_grouping_ops(List *groupClause);
 extern AttrNumber *extract_grouping_cols(List *groupClause, List *tlist);
 extern bool grouping_is_sortable(List *groupClause);
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index 7c243ec..0e4b719 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -98,6 +98,7 @@ PG_KEYWORD("cost", COST, UNRESERVED_KEYWORD)
 PG_KEYWORD("create", CREATE, RESERVED_KEYWORD)
 PG_KEYWORD("cross", CROSS, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("csv", CSV, UNRESERVED_KEYWORD)
+PG_KEYWORD("cube", CUBE, UNRESERVED_KEYWORD)
 PG_KEYWORD("current", CURRENT_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("current_catalog", CURRENT_CATALOG, RESERVED_KEYWORD)
 PG_KEYWORD("current_date", CURRENT_DATE, RESERVED_KEYWORD)
@@ -173,6 +174,7 @@ PG_KEYWORD("grant", GRANT, RESERVED_KEYWORD)
 PG_KEYWORD("granted", GRANTED, UNRESERVED_KEYWORD)
 PG_KEYWORD("greatest", GREATEST, COL_NAME_KEYWORD)
 PG_KEYWORD("group", GROUP_P, RESERVED_KEYWORD)
+PG_KEYWORD("grouping", GROUPING, COL_NAME_KEYWORD)
 PG_KEYWORD("handler", HANDLER, UNRESERVED_KEYWORD)
 PG_KEYWORD("having", HAVING, RESERVED_KEYWORD)
 PG_KEYWORD("header", HEADER_P, UNRESERVED_KEYWORD)
@@ -324,6 +326,7 @@ PG_KEYWORD("revoke", REVOKE, UNRESERVED_KEYWORD)
 PG_KEYWORD("right", RIGHT, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("role", ROLE, UNRESERVED_KEYWORD)
 PG_KEYWORD("rollback", ROLLBACK, UNRESERVED_KEYWORD)
+PG_KEYWORD("rollup", ROLLUP, UNRESERVED_KEYWORD)
 PG_KEYWORD("row", ROW, COL_NAME_KEYWORD)
 PG_KEYWORD("rows", ROWS, UNRESERVED_KEYWORD)
 PG_KEYWORD("rule", RULE, UNRESERVED_KEYWORD)
@@ -342,6 +345,7 @@ PG_KEYWORD("session", SESSION, UNRESERVED_KEYWORD)
 PG_KEYWORD("session_user", SESSION_USER, RESERVED_KEYWORD)
 PG_KEYWORD("set", SET, UNRESERVED_KEYWORD)
 PG_KEYWORD("setof", SETOF, COL_NAME_KEYWORD)
+PG_KEYWORD("sets", SETS, UNRESERVED_KEYWORD)
 PG_KEYWORD("share", SHARE, UNRESERVED_KEYWORD)
 PG_KEYWORD("show", SHOW, UNRESERVED_KEYWORD)
 PG_KEYWORD("similar", SIMILAR, TYPE_FUNC_NAME_KEYWORD)
diff --git a/src/include/parser/parse_agg.h b/src/include/parser/parse_agg.h
index 91a0706..6a5f9bb 100644
--- a/src/include/parser/parse_agg.h
+++ b/src/include/parser/parse_agg.h
@@ -18,11 +18,16 @@
 extern void transformAggregateCall(ParseState *pstate, Aggref *agg,
 					   List *args, List *aggorder,
 					   bool agg_distinct);
+
+extern Node *transformGroupingFunc(ParseState *pstate, GroupingFunc *g);
+
 extern void transformWindowFuncCall(ParseState *pstate, WindowFunc *wfunc,
 						WindowDef *windef);
 
 extern void parseCheckAggregates(ParseState *pstate, Query *qry);
 
+extern List *expand_grouping_sets(List *groupingSets, int limit);
+
 extern int	get_aggregate_argtypes(Aggref *aggref, Oid *inputTypes);
 
 extern Oid resolve_aggregate_transtype(Oid aggfuncid,
diff --git a/src/include/parser/parse_clause.h b/src/include/parser/parse_clause.h
index 6a4438f..fdf6732 100644
--- a/src/include/parser/parse_clause.h
+++ b/src/include/parser/parse_clause.h
@@ -27,6 +27,7 @@ extern Node *transformWhereClause(ParseState *pstate, Node *clause,
 extern Node *transformLimitClause(ParseState *pstate, Node *clause,
 					 ParseExprKind exprKind, const char *constructName);
 extern List *transformGroupClause(ParseState *pstate, List *grouplist,
+								  List **groupingSets,
 					 List **targetlist, List *sortClause,
 					 ParseExprKind exprKind, bool useSQL99);
 extern List *transformSortClause(ParseState *pstate, List *orderlist,
diff --git a/src/include/utils/selfuncs.h b/src/include/utils/selfuncs.h
index bf69f2a..fdca713 100644
--- a/src/include/utils/selfuncs.h
+++ b/src/include/utils/selfuncs.h
@@ -185,7 +185,7 @@ extern void mergejoinscansel(PlannerInfo *root, Node *clause,
 				 Selectivity *rightstart, Selectivity *rightend);
 
 extern double estimate_num_groups(PlannerInfo *root, List *groupExprs,
-					double input_rows);
+								  double input_rows, List **pgset);
 
 extern Selectivity estimate_hash_bucketsize(PlannerInfo *root, Node *hashkey,
 						 double nbuckets);
diff --git a/src/test/regress/expected/groupingsets.out b/src/test/regress/expected/groupingsets.out
new file mode 100644
index 0000000..fbfb424
--- /dev/null
+++ b/src/test/regress/expected/groupingsets.out
@@ -0,0 +1,575 @@
+--
+-- grouping sets
+--
+-- test data sources
+create temp view gstest1(a,b,v)
+  as values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),
+            (2,3,15),
+            (3,3,16),(3,4,17),
+            (4,1,18),(4,1,19);
+create temp table gstest2 (a integer, b integer, c integer, d integer,
+                           e integer, f integer, g integer, h integer);
+copy gstest2 from stdin;
+create temp table gstest_empty (a integer, b integer, v integer);
+create function gstest_data(v integer, out a integer, out b integer)
+  returns setof record
+  as $f$
+    begin
+      return query select v, i from generate_series(1,3) i;
+    end;
+  $f$ language plpgsql;
+-- basic functionality
+-- simple rollup with multiple plain aggregates, with and without ordering
+-- (and with ordering differing from grouping)
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b);
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 | 1 |        0 |  21 |     2 |  11
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 1 |   |        1 |  60 |     5 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 2 |   |        1 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+ 3 |   |        1 |  33 |     2 |  17
+ 4 | 1 |        0 |  37 |     2 |  19
+ 4 |   |        1 |  37 |     2 |  19
+   |   |        3 | 145 |    10 |  19
+(12 rows)
+
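The expected output above can be reproduced by a single pass over sorted input, which is how this patch handles rollups inside the existing GroupAggregate node: keep one running aggregate per rollup level and emit a level's row whenever its key prefix changes, finest level first. A minimal Python cross-check (illustrative only, not part of the patch; `sorted_rollup` is an invented name, and SQL NULL is modeled as `None`):

```python
def sorted_rollup(rows, nkeys):
    # rows: (k1, .., kn, v) tuples already sorted by the key columns.
    # sums[i] is the running total for the length-i key prefix; sums[0]
    # is the grand total.  When key position d changes, every level
    # finer than d closes and emits its row (finest level first).
    out, cur, sums = [], None, [0] * (nkeys + 1)
    def flush(down_to):
        for lvl in range(nkeys, down_to, -1):
            out.append(cur[:lvl] + (None,) * (nkeys - lvl) + (sums[lvl],))
            sums[lvl] = 0
    for row in rows:
        keys, v = row[:nkeys], row[nkeys]
        if cur is not None and keys != cur:
            d = next(i for i in range(nkeys) if keys[i] != cur[i])
            flush(d)
        cur = keys
        for i in range(nkeys + 1):
            sums[i] += v
    if cur is not None:
        flush(0)
    out.append((None,) * nkeys + (sums[0],))
    return out

gstest1 = [(1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),
           (2,3,15),(3,3,16),(3,4,17),(4,1,18),(4,1,19)]
# Reproduces the a/b/sum columns of the rollup query above.
print(sorted_rollup(gstest1, 2))
```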
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by a,b;
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 | 1 |        0 |  21 |     2 |  11
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 1 |   |        1 |  60 |     5 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 2 |   |        1 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+ 3 |   |        1 |  33 |     2 |  17
+ 4 | 1 |        0 |  37 |     2 |  19
+ 4 |   |        1 |  37 |     2 |  19
+   |   |        3 | 145 |    10 |  19
+(12 rows)
+
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by b desc, a;
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 |   |        1 |  60 |     5 |  14
+ 2 |   |        1 |  15 |     1 |  15
+ 3 |   |        1 |  33 |     2 |  17
+ 4 |   |        1 |  37 |     2 |  19
+   |   |        3 | 145 |    10 |  19
+ 3 | 4 |        0 |  17 |     1 |  17
+ 1 | 3 |        0 |  14 |     1 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 1 |        0 |  21 |     2 |  11
+ 4 | 1 |        0 |  37 |     2 |  19
+(12 rows)
+
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by coalesce(a,0)+coalesce(b,0);
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+   |   |        3 | 145 |    10 |  19
+ 1 |   |        1 |  60 |     5 |  14
+ 1 | 1 |        0 |  21 |     2 |  11
+ 2 |   |        1 |  15 |     1 |  15
+ 3 |   |        1 |  33 |     2 |  17
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 4 |   |        1 |  37 |     2 |  19
+ 4 | 1 |        0 |  37 |     2 |  19
+ 2 | 3 |        0 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+(12 rows)
+
+-- various types of ordered aggs
+select a, b, grouping(a,b),
+       array_agg(v order by v),
+       string_agg(v::text, ':' order by v desc),
+       percentile_disc(0.5) within group (order by v),
+       rank(1,2,12) within group (order by a,b,v)
+  from gstest1 group by rollup (a,b) order by a,b;
+ a | b | grouping |            array_agg            |          string_agg           | percentile_disc | rank 
+---+---+----------+---------------------------------+-------------------------------+-----------------+------
+ 1 | 1 |        0 | {10,11}                         | 11:10                         |              10 |    3
+ 1 | 2 |        0 | {12,13}                         | 13:12                         |              12 |    1
+ 1 | 3 |        0 | {14}                            | 14                            |              14 |    1
+ 1 |   |        1 | {10,11,12,13,14}                | 14:13:12:11:10                |              12 |    3
+ 2 | 3 |        0 | {15}                            | 15                            |              15 |    1
+ 2 |   |        1 | {15}                            | 15                            |              15 |    1
+ 3 | 3 |        0 | {16}                            | 16                            |              16 |    1
+ 3 | 4 |        0 | {17}                            | 17                            |              17 |    1
+ 3 |   |        1 | {16,17}                         | 17:16                         |              16 |    1
+ 4 | 1 |        0 | {18,19}                         | 19:18                         |              18 |    1
+ 4 |   |        1 | {18,19}                         | 19:18                         |              18 |    1
+   |   |        3 | {10,11,12,13,14,15,16,17,18,19} | 19:18:17:16:15:14:13:12:11:10 |              14 |    3
+(12 rows)
+
+-- test usage of grouped columns in direct args of aggs
+select grouping(a), a, array_agg(b),
+       rank(a) within group (order by b nulls first),
+       rank(a) within group (order by b nulls last)
+  from (values (1,1),(1,4),(1,5),(3,1),(3,2)) v(a,b)
+ group by rollup (a) order by a;
+ grouping | a |  array_agg  | rank | rank 
+----------+---+-------------+------+------
+        0 | 1 | {1,4,5}     |    1 |    1
+        0 | 3 | {1,2}       |    3 |    3
+        1 |   | {1,4,5,1,2} |    1 |    6
+(3 rows)
+
+-- nesting with window functions
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by rollup (a,b) order by rsum, a, b;
+ a | b | sum | rsum 
+---+---+-----+------
+ 1 | 1 |   8 |    8
+ 1 | 2 |   2 |   10
+ 1 |   |  10 |   20
+ 2 | 2 |   2 |   22
+ 2 |   |   2 |   24
+   |   |  12 |   36
+(6 rows)
+
+-- empty input: first is 0 rows, second 1, third 3 etc.
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),a);
+ a | b | sum | count 
+---+---+-----+-------
+(0 rows)
+
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),());
+ a | b | sum | count 
+---+---+-----+-------
+   |   |     |     0
+(1 row)
+
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),(),(),());
+ a | b | sum | count 
+---+---+-----+-------
+   |   |     |     0
+   |   |     |     0
+   |   |     |     0
+(3 rows)
+
+select sum(v), count(*) from gstest_empty group by grouping sets ((),(),());
+ sum | count 
+-----+-------
+     |     0
+     |     0
+     |     0
+(3 rows)
+
+-- empty input with joins tests some important code paths
+select t1.a, t2.b, sum(t1.v), count(*) from gstest_empty t1, gstest_empty t2
+ group by grouping sets ((t1.a,t2.b),());
+ a | b | sum | count 
+---+---+-----+-------
+   |   |     |     0
+(1 row)
+
+-- simple joins, var resolution, GROUPING on join vars
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1, gstest2 t2
+ group by grouping sets ((t1.a, t2.b), ());
+ a | b | grouping | sum  | max 
+---+---+----------+------+-----
+ 1 | 1 |        0 |  420 |   1
+ 1 | 2 |        0 |  120 |   2
+ 2 | 1 |        0 |  105 |   1
+ 2 | 2 |        0 |   30 |   2
+ 3 | 1 |        0 |  231 |   1
+ 3 | 2 |        0 |   66 |   2
+ 4 | 1 |        0 |  259 |   1
+ 4 | 2 |        0 |   74 |   2
+   |   |        3 | 1305 |   2
+(9 rows)
+
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1 join gstest2 t2 on (t1.a=t2.a)
+ group by grouping sets ((t1.a, t2.b), ());
+ a | b | grouping | sum | max 
+---+---+----------+-----+-----
+ 1 | 1 |        0 | 420 |   1
+ 1 | 2 |        0 |  60 |   1
+ 2 | 2 |        0 |  15 |   2
+   |   |        3 | 495 |   2
+(4 rows)
+
+select a, b, grouping(a, b), sum(t1.v), max(t2.c)
+  from gstest1 t1 join gstest2 t2 using (a,b)
+ group by grouping sets ((a, b), ());
+ a | b | grouping | sum | max 
+---+---+----------+-----+-----
+ 1 | 1 |        0 | 147 |   2
+ 1 | 2 |        0 |  25 |   2
+   |   |        3 | 172 |   2
+(3 rows)
+
+-- simple rescan tests
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by rollup (a,b);
+ a | b | sum 
+---+---+-----
+ 1 | 1 |   1
+ 1 | 2 |   1
+ 1 | 3 |   1
+ 1 |   |   3
+ 2 | 1 |   2
+ 2 | 2 |   2
+ 2 | 3 |   2
+ 2 |   |   6
+   |   |   9
+(9 rows)
+
+select *
+  from (values (1),(2)) v(x),
+       lateral (select a, b, sum(v.x) from gstest_data(v.x) group by rollup (a,b)) s;
+ERROR:  aggregate functions are not allowed in FROM clause of their own query level
+LINE 3:        lateral (select a, b, sum(v.x) from gstest_data(v.x) ...
+                                     ^
+-- min max optimisation should still work with GROUP BY ()
+explain (costs off)
+  select min(unique1) from tenk1 GROUP BY ();
+                         QUERY PLAN                         
+------------------------------------------------------------
+ Result
+   InitPlan 1 (returns $0)
+     ->  Limit
+           ->  Index Only Scan using tenk1_unique1 on tenk1
+                 Index Cond: (unique1 IS NOT NULL)
+(5 rows)
+
+-- Views with GROUPING SET queries
+CREATE VIEW gstest_view AS select a, b, grouping(a,b), sum(c), count(*), max(c)
+  from gstest2 group by rollup ((a,b,c),(c,d));
+NOTICE:  view "gstest_view" will be a temporary view
+select pg_get_viewdef('gstest_view'::regclass, true);
+                                pg_get_viewdef                                 
+-------------------------------------------------------------------------------
+  SELECT gstest2.a,                                                           +
+     gstest2.b,                                                               +
+     GROUPING(gstest2.a, gstest2.b) AS "grouping",                            +
+     sum(gstest2.c) AS sum,                                                   +
+     count(*) AS count,                                                       +
+     max(gstest2.c) AS max                                                    +
+    FROM gstest2                                                              +
+   GROUP BY ROLLUP((gstest2.a, gstest2.b, gstest2.c), (gstest2.c, gstest2.d));
+(1 row)
+
+-- Nested queries with 3 or more levels of nesting
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+ grouping 
+----------
+        0
+        0
+        0
+(3 rows)
+
+select(select (select grouping(e,f) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+ grouping 
+----------
+        0
+        1
+        3
+(3 rows)
+
+select(select (select grouping(c) from (values (1)) v2(c) GROUP BY c) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+ grouping 
+----------
+        0
+        0
+        0
+(3 rows)
+
+-- Combinations of operations
+select a, b, c, d from gstest2 group by rollup(a,b),grouping sets(c,d);
+ a | b | c | d 
+---+---+---+---
+ 1 | 1 | 1 |  
+ 1 |   | 1 |  
+   |   | 1 |  
+ 1 | 1 | 2 |  
+ 1 | 2 | 2 |  
+ 1 |   | 2 |  
+ 2 | 2 | 2 |  
+ 2 |   | 2 |  
+   |   | 2 |  
+ 1 | 1 |   | 1
+ 1 |   |   | 1
+   |   |   | 1
+ 1 | 1 |   | 2
+ 1 | 2 |   | 2
+ 1 |   |   | 2
+ 2 | 2 |   | 2
+ 2 |   |   | 2
+   |   |   | 2
+(18 rows)
+
+select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
+ a | b 
+---+---
+ 1 | 2
+ 2 | 3
+(2 rows)
+
+-- Tests for chained aggregates
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by grouping sets ((a,b),(a+1,b+1),(a+2,b+2));
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 | 1 |        0 |  21 |     2 |  11
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+ 4 | 1 |        0 |  37 |     2 |  19
+   |   |        3 |  21 |     2 |  11
+   |   |        3 |  25 |     2 |  13
+   |   |        3 |  14 |     1 |  14
+   |   |        3 |  15 |     1 |  15
+   |   |        3 |  16 |     1 |  16
+   |   |        3 |  17 |     1 |  17
+   |   |        3 |  37 |     2 |  19
+   |   |        3 |  21 |     2 |  11
+   |   |        3 |  25 |     2 |  13
+   |   |        3 |  14 |     1 |  14
+   |   |        3 |  15 |     1 |  15
+   |   |        3 |  16 |     1 |  16
+   |   |        3 |  17 |     1 |  17
+   |   |        3 |  37 |     2 |  19
+(21 rows)
+
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP((e+1),(f+1));
+ grouping 
+----------
+        0
+        0
+        0
+(3 rows)
+
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY CUBE((e+1),(f+1)) ORDER BY (e+1),(f+1);
+ grouping 
+----------
+        0
+        0
+        0
+        0
+(4 rows)
+
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by cube (a,b) order by rsum, a, b;
+ a | b | sum | rsum 
+---+---+-----+------
+ 1 | 1 |   8 |    8
+ 1 | 2 |   2 |   10
+ 1 |   |  10 |   20
+ 2 | 2 |   2 |   22
+ 2 |   |   2 |   24
+   | 1 |   8 |   32
+   | 2 |   4 |   36
+   |   |  12 |   48
+(8 rows)
+
+select a, b, sum(c) from (values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),(2,3,15),(3,3,16),(3,4,17),(4,1,18),(4,1,19)) v(a,b,c) group by rollup (a,b);
+ a | b | sum 
+---+---+-----
+ 1 | 1 |  21
+ 1 | 2 |  25
+ 1 | 3 |  14
+ 1 |   |  60
+ 2 | 3 |  15
+ 2 |   |  15
+ 3 | 3 |  16
+ 3 | 4 |  17
+ 3 |   |  33
+ 4 | 1 |  37
+ 4 |   |  37
+   |   | 145
+(12 rows)
+
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by cube (a,b) order by a,b;
+ a | b | sum 
+---+---+-----
+ 1 | 1 |   1
+ 1 | 2 |   1
+ 1 | 3 |   1
+ 1 |   |   3
+ 2 | 1 |   2
+ 2 | 2 |   2
+ 2 | 3 |   2
+ 2 |   |   6
+   | 1 |   3
+   | 2 |   3
+   | 3 |   3
+   |   |   9
+(12 rows)
+
+-- Agg level check. This query should error out.
+select (select grouping(a,b) from gstest2) from gstest2 group by a,b;
+ERROR:  Arguments to GROUPING must be grouping expressions of the associated query level
+LINE 1: select (select grouping(a,b) from gstest2) from gstest2 grou...
+                                ^
+--Nested queries
+select a, b, sum(c), count(*) from gstest2 group by grouping sets (rollup(a,b),a);
+ a | b | sum | count 
+---+---+-----+-------
+ 1 | 1 |   8 |     7
+ 1 | 2 |   2 |     1
+ 1 |   |  10 |     8
+ 1 |   |  10 |     8
+ 2 | 2 |   2 |     1
+ 2 |   |   2 |     1
+ 2 |   |   2 |     1
+   |   |  12 |     9
+(8 rows)
+
+-- HAVING queries
+select ten, sum(distinct four) from onek a
+group by grouping sets((ten,four),(ten))
+having exists (select 1 from onek b where sum(distinct a.four) = b.four);
+ ten | sum 
+-----+-----
+   0 |   0
+   0 |   2
+   0 |   2
+   1 |   1
+   1 |   3
+   2 |   0
+   2 |   2
+   2 |   2
+   3 |   1
+   3 |   3
+   4 |   0
+   4 |   2
+   4 |   2
+   5 |   1
+   5 |   3
+   6 |   0
+   6 |   2
+   6 |   2
+   7 |   1
+   7 |   3
+   8 |   0
+   8 |   2
+   8 |   2
+   9 |   1
+   9 |   3
+(25 rows)
+
+-- FILTER queries
+select ten, sum(distinct four) filter (where four::text ~ '123') from onek a
+group by rollup(ten);
+ ten | sum 
+-----+-----
+   0 |    
+   1 |    
+   2 |    
+   3 |    
+   4 |    
+   5 |    
+   6 |    
+   7 |    
+   8 |    
+   9 |    
+     |    
+(11 rows)
+
+-- More rescan tests
+select * from (values (1),(2)) v(a) left join lateral (select v.a, four, ten, count(*) from onek group by cube(four,ten)) s on true order by v.a,four,ten;
+ a | a | four | ten | count 
+---+---+------+-----+-------
+ 1 | 1 |    0 |   0 |    50
+ 1 | 1 |    0 |   2 |    50
+ 1 | 1 |    0 |   4 |    50
+ 1 | 1 |    0 |   6 |    50
+ 1 | 1 |    0 |   8 |    50
+ 1 | 1 |    0 |     |   250
+ 1 | 1 |    1 |   1 |    50
+ 1 | 1 |    1 |   3 |    50
+ 1 | 1 |    1 |   5 |    50
+ 1 | 1 |    1 |   7 |    50
+ 1 | 1 |    1 |   9 |    50
+ 1 | 1 |    1 |     |   250
+ 1 | 1 |    2 |   0 |    50
+ 1 | 1 |    2 |   2 |    50
+ 1 | 1 |    2 |   4 |    50
+ 1 | 1 |    2 |   6 |    50
+ 1 | 1 |    2 |   8 |    50
+ 1 | 1 |    2 |     |   250
+ 1 | 1 |    3 |   1 |    50
+ 1 | 1 |    3 |   3 |    50
+ 1 | 1 |    3 |   5 |    50
+ 1 | 1 |    3 |   7 |    50
+ 1 | 1 |    3 |   9 |    50
+ 1 | 1 |    3 |     |   250
+ 1 | 1 |      |   0 |   100
+ 1 | 1 |      |   1 |   100
+ 1 | 1 |      |   2 |   100
+ 1 | 1 |      |   3 |   100
+ 1 | 1 |      |   4 |   100
+ 1 | 1 |      |   5 |   100
+ 1 | 1 |      |   6 |   100
+ 1 | 1 |      |   7 |   100
+ 1 | 1 |      |   8 |   100
+ 1 | 1 |      |   9 |   100
+ 1 | 1 |      |     |  1000
+ 2 | 2 |    0 |   0 |    50
+ 2 | 2 |    0 |   2 |    50
+ 2 | 2 |    0 |   4 |    50
+ 2 | 2 |    0 |   6 |    50
+ 2 | 2 |    0 |   8 |    50
+ 2 | 2 |    0 |     |   250
+ 2 | 2 |    1 |   1 |    50
+ 2 | 2 |    1 |   3 |    50
+ 2 | 2 |    1 |   5 |    50
+ 2 | 2 |    1 |   7 |    50
+ 2 | 2 |    1 |   9 |    50
+ 2 | 2 |    1 |     |   250
+ 2 | 2 |    2 |   0 |    50
+ 2 | 2 |    2 |   2 |    50
+ 2 | 2 |    2 |   4 |    50
+ 2 | 2 |    2 |   6 |    50
+ 2 | 2 |    2 |   8 |    50
+ 2 | 2 |    2 |     |   250
+ 2 | 2 |    3 |   1 |    50
+ 2 | 2 |    3 |   3 |    50
+ 2 | 2 |    3 |   5 |    50
+ 2 | 2 |    3 |   7 |    50
+ 2 | 2 |    3 |   9 |    50
+ 2 | 2 |    3 |     |   250
+ 2 | 2 |      |   0 |   100
+ 2 | 2 |      |   1 |   100
+ 2 | 2 |      |   2 |   100
+ 2 | 2 |      |   3 |   100
+ 2 | 2 |      |   4 |   100
+ 2 | 2 |      |   5 |   100
+ 2 | 2 |      |   6 |   100
+ 2 | 2 |      |   7 |   100
+ 2 | 2 |      |   8 |   100
+ 2 | 2 |      |   9 |   100
+ 2 | 2 |      |     |  1000
+(70 rows)
+
+select array(select row(v.a,s1.*) from (select two,four, count(*) from onek group by cube(two,four) order by two,four) s1) from (values (1),(2)) v(a);
+                                                                        array                                                                         
+------------------------------------------------------------------------------------------------------------------------------------------------------
+ {"(1,0,0,250)","(1,0,2,250)","(1,0,,500)","(1,1,1,250)","(1,1,3,250)","(1,1,,500)","(1,,0,250)","(1,,1,250)","(1,,2,250)","(1,,3,250)","(1,,,1000)"}
+ {"(2,0,0,250)","(2,0,2,250)","(2,0,,500)","(2,1,1,250)","(2,1,3,250)","(2,1,,500)","(2,,0,250)","(2,,1,250)","(2,,2,250)","(2,,3,250)","(2,,,1000)"}
+(2 rows)
+
+-- end
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index 62ef6ec..4d95250 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -83,7 +83,7 @@ test: select_into select_distinct select_distinct_on select_implicit select_havi
 # ----------
 # Another group of parallel tests
 # ----------
-test: brin gin gist spgist privileges security_label collate matview lock replica_identity object_address
+test: brin gin gist spgist privileges security_label collate matview lock replica_identity object_address groupingsets
 
 # rowsecurity creates an event trigger, so don't run it in parallel
 test: rowsecurity
diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule
index b491b97..d5b0498 100644
--- a/src/test/regress/serial_schedule
+++ b/src/test/regress/serial_schedule
@@ -84,6 +84,7 @@ test: union
 test: case
 test: join
 test: aggregates
+test: groupingsets
 test: transactions
 ignore: random
 test: random
diff --git a/src/test/regress/sql/groupingsets.sql b/src/test/regress/sql/groupingsets.sql
new file mode 100644
index 0000000..aebcbbb
--- /dev/null
+++ b/src/test/regress/sql/groupingsets.sql
@@ -0,0 +1,153 @@
+--
+-- grouping sets
+--
+
+-- test data sources
+
+create temp view gstest1(a,b,v)
+  as values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),
+            (2,3,15),
+            (3,3,16),(3,4,17),
+            (4,1,18),(4,1,19);
+
+create temp table gstest2 (a integer, b integer, c integer, d integer,
+                           e integer, f integer, g integer, h integer);
+copy gstest2 from stdin;
+1	1	1	1	1	1	1	1
+1	1	1	1	1	1	1	2
+1	1	1	1	1	1	2	2
+1	1	1	1	1	2	2	2
+1	1	1	1	2	2	2	2
+1	1	1	2	2	2	2	2
+1	1	2	2	2	2	2	2
+1	2	2	2	2	2	2	2
+2	2	2	2	2	2	2	2
+\.
+
+create temp table gstest_empty (a integer, b integer, v integer);
+
+create function gstest_data(v integer, out a integer, out b integer)
+  returns setof record
+  as $f$
+    begin
+      return query select v, i from generate_series(1,3) i;
+    end;
+  $f$ language plpgsql;
+
+-- basic functionality
+
+-- simple rollup with multiple plain aggregates, with and without ordering
+-- (and with ordering differing from grouping)
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b);
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by a,b;
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by b desc, a;
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by coalesce(a,0)+coalesce(b,0);
+
+-- various types of ordered aggs
+select a, b, grouping(a,b),
+       array_agg(v order by v),
+       string_agg(v::text, ':' order by v desc),
+       percentile_disc(0.5) within group (order by v),
+       rank(1,2,12) within group (order by a,b,v)
+  from gstest1 group by rollup (a,b) order by a,b;
+
+-- test usage of grouped columns in direct args of aggs
+select grouping(a), a, array_agg(b),
+       rank(a) within group (order by b nulls first),
+       rank(a) within group (order by b nulls last)
+  from (values (1,1),(1,4),(1,5),(3,1),(3,2)) v(a,b)
+ group by rollup (a) order by a;
+
+-- nesting with window functions
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by rollup (a,b) order by rsum, a, b;
+
+-- empty input: first is 0 rows, second 1, third 3 etc.
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),a);
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),());
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),(),(),());
+select sum(v), count(*) from gstest_empty group by grouping sets ((),(),());
+
+-- empty input with joins tests some important code paths
+select t1.a, t2.b, sum(t1.v), count(*) from gstest_empty t1, gstest_empty t2
+ group by grouping sets ((t1.a,t2.b),());
+
+-- simple joins, var resolution, GROUPING on join vars
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1, gstest2 t2
+ group by grouping sets ((t1.a, t2.b), ());
+
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1 join gstest2 t2 on (t1.a=t2.a)
+ group by grouping sets ((t1.a, t2.b), ());
+
+select a, b, grouping(a, b), sum(t1.v), max(t2.c)
+  from gstest1 t1 join gstest2 t2 using (a,b)
+ group by grouping sets ((a, b), ());
+
+-- simple rescan tests
+
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by rollup (a,b);
+
+select *
+  from (values (1),(2)) v(x),
+       lateral (select a, b, sum(v.x) from gstest_data(v.x) group by rollup (a,b)) s;
+
+-- min max optimisation should still work with GROUP BY ()
+explain (costs off)
+  select min(unique1) from tenk1 GROUP BY ();
+
+-- Views with GROUPING SET queries
+CREATE VIEW gstest_view AS select a, b, grouping(a,b), sum(c), count(*), max(c)
+  from gstest2 group by rollup ((a,b,c),(c,d));
+
+select pg_get_viewdef('gstest_view'::regclass, true);
+
+-- Nested queries with 3 or more levels of nesting
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+select(select (select grouping(e,f) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+select(select (select grouping(c) from (values (1)) v2(c) GROUP BY c) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+
+-- Combinations of operations
+select a, b, c, d from gstest2 group by rollup(a,b),grouping sets(c,d);
+select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
+
+-- Tests for chained aggregates
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by grouping sets ((a,b),(a+1,b+1),(a+2,b+2));
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP((e+1),(f+1));
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY CUBE((e+1),(f+1)) ORDER BY (e+1),(f+1);
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by cube (a,b) order by rsum, a, b;
+select a, b, sum(c) from (values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),(2,3,15),(3,3,16),(3,4,17),(4,1,18),(4,1,19)) v(a,b,c) group by rollup (a,b);
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by cube (a,b) order by a,b;
+
+
+-- Agg level check. This query should error out.
+select (select grouping(a,b) from gstest2) from gstest2 group by a,b;
+
+-- Nested queries
+select a, b, sum(c), count(*) from gstest2 group by grouping sets (rollup(a,b),a);
+
+-- HAVING queries
+select ten, sum(distinct four) from onek a
+group by grouping sets((ten,four),(ten))
+having exists (select 1 from onek b where sum(distinct a.four) = b.four);
+
+-- FILTER queries
+select ten, sum(distinct four) filter (where four::text ~ '123') from onek a
+group by rollup(ten);
+
+-- More rescan tests
+select * from (values (1),(2)) v(a) left join lateral (select v.a, four, ten, count(*) from onek group by cube(four,ten)) s on true order by v.a,four,ten;
+select array(select row(v.a,s1.*) from (select two,four, count(*) from onek group by cube(two,four) order by two,four) s1) from (values (1),(2)) v(a);
+
+-- end
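The expansions exercised by the tests above can be modeled outside SQL. This is an illustrative sketch, not part of the patch: the function names are mine, and it only models how ROLLUP (prefixes), CUBE (power set), and multiple grouping items in one GROUP BY (cross product) reduce to plain grouping sets.

```python
from itertools import combinations

def rollup(*elems):
    # ROLLUP(e1, ..., en): all prefixes of the list, longest first,
    # down to and including the empty grouping set.
    return [elems[:i] for i in range(len(elems), -1, -1)]

def cube(*elems):
    # CUBE(e1, ..., en): the power set of the elements.
    return [subset
            for k in range(len(elems), -1, -1)
            for subset in combinations(elems, k)]

def cross(*items):
    # Multiple grouping items in a single GROUP BY clause: the final
    # list of grouping sets is the cross product of the items, with
    # the columns of each combination concatenated.
    result = [()]
    for sets in items:
        result = [acc + s for acc in result for s in sets]
    return result

# GROUP BY ROLLUP(a, b)  ->  (a,b), (a), ()
assert rollup('a', 'b') == [('a', 'b'), ('a',), ()]
# GROUP BY a, CUBE(b)    ->  (a,b), (a)
assert cross([('a',)], cube('b')) == [('a', 'b'), ('a',)]
```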
#107 Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Andrew Gierth (#84)
1 attachment(s)
Re: Final Patch for GROUPING SETS

Updated patch (mostly just conflict resolution):

- fix explain code to track changes to deparse context handling

- tiny expansion of some comments (clarify in the nodeAgg header
comment that aggcontexts are now ExprContexts rather than just
memory contexts)

- declare support for features in sql_features.txt, which had been
previously overlooked

--
Andrew (irc:RhodiumToad)

Attachments:

gsp-all.patch (text/x-patch)
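The GROUPING() semantics implemented by ExecEvalGroupingFuncExpr in the patch below can be summarized in a few lines. This is an illustrative model only (the Python helper and its signature are mine): the rightmost argument maps to the least-significant bit, and a bit is 1 when the corresponding expression is not part of the current grouping set.

```python
def grouping(args, grouped_cols):
    # Model of GROUPING(args...): build the bitmask left to right,
    # so the rightmost argument ends up as the least-significant bit.
    result = 0
    for a in args:
        result = (result << 1) | (0 if a in grouped_cols else 1)
    return result

# For GROUP BY ROLLUP(make, model), as in the doc example:
assert grouping(['make', 'model'], {'make', 'model'}) == 0  # fully grouped row
assert grouping(['make', 'model'], {'make'}) == 1           # model rolled up
assert grouping(['make', 'model'], set()) == 3              # grand total, binary 11
```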
diff --git a/contrib/pg_stat_statements/pg_stat_statements.c b/contrib/pg_stat_statements/pg_stat_statements.c
index 2629bfc..543f3af 100644
--- a/contrib/pg_stat_statements/pg_stat_statements.c
+++ b/contrib/pg_stat_statements/pg_stat_statements.c
@@ -2200,6 +2200,7 @@ JumbleQuery(pgssJumbleState *jstate, Query *query)
 	JumbleExpr(jstate, (Node *) query->targetList);
 	JumbleExpr(jstate, (Node *) query->returningList);
 	JumbleExpr(jstate, (Node *) query->groupClause);
+	JumbleExpr(jstate, (Node *) query->groupingSets);
 	JumbleExpr(jstate, query->havingQual);
 	JumbleExpr(jstate, (Node *) query->windowClause);
 	JumbleExpr(jstate, (Node *) query->distinctClause);
@@ -2330,6 +2331,13 @@ JumbleExpr(pgssJumbleState *jstate, Node *node)
 				JumbleExpr(jstate, (Node *) expr->aggfilter);
 			}
 			break;
+		case T_GroupingFunc:
+			{
+				GroupingFunc *grpnode = (GroupingFunc *) node;
+
+				JumbleExpr(jstate, (Node *) grpnode->refs);
+			}
+			break;
 		case T_WindowFunc:
 			{
 				WindowFunc *expr = (WindowFunc *) node;
@@ -2607,6 +2615,12 @@ JumbleExpr(pgssJumbleState *jstate, Node *node)
 				JumbleExpr(jstate, (Node *) lfirst(temp));
 			}
 			break;
+		case T_IntList:
+			foreach(temp, (List *) node)
+			{
+				APP_JUMB(lfirst_int(temp));
+			}
+			break;
 		case T_SortGroupClause:
 			{
 				SortGroupClause *sgc = (SortGroupClause *) node;
@@ -2617,6 +2631,13 @@ JumbleExpr(pgssJumbleState *jstate, Node *node)
 				APP_JUMB(sgc->nulls_first);
 			}
 			break;
+		case T_GroupingSet:
+			{
+				GroupingSet *gsnode = (GroupingSet *) node;
+
+				JumbleExpr(jstate, (Node *) gsnode->content);
+			}
+			break;
 		case T_WindowClause:
 			{
 				WindowClause *wc = (WindowClause *) node;
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index d57243a..4fb1bb4 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -12063,7 +12063,9 @@ NULL baz</literallayout>(3 rows)</entry>
    <xref linkend="functions-aggregate-statistics-table">.
    The built-in ordered-set aggregate functions
    are listed in <xref linkend="functions-orderedset-table"> and
-   <xref linkend="functions-hypothetical-table">.
+   <xref linkend="functions-hypothetical-table">.  Grouping operations,
+   which are closely related to aggregate functions, are listed in
+   <xref linkend="functions-grouping-table">.
    The special syntax considerations for aggregate
    functions are explained in <xref linkend="syntax-aggregates">.
    Consult <xref linkend="tutorial-agg"> for additional introductory
@@ -13161,6 +13163,72 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab;
    to the rule specified in the <literal>ORDER BY</> clause.
   </para>
 
+  <table id="functions-grouping-table">
+   <title>Grouping Operations</title>
+
+   <tgroup cols="3">
+    <thead>
+     <row>
+      <entry>Function</entry>
+      <entry>Return Type</entry>
+      <entry>Description</entry>
+     </row>
+    </thead>
+
+    <tbody>
+
+     <row>
+      <entry>
+       <indexterm>
+        <primary>GROUPING</primary>
+       </indexterm>
+       <function>GROUPING(<replaceable class="parameter">args...</replaceable>)</function>
+      </entry>
+      <entry>
+       <type>integer</type>
+      </entry>
+      <entry>
+       Integer bitmask indicating which arguments are not being included in the current
+       grouping set
+      </entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+
+   <para>
+    Grouping operations are used in conjunction with grouping sets (see
+    <xref linkend="queries-grouping-sets">) to distinguish result rows.  The
+    arguments to the <literal>GROUPING</> operation are not actually evaluated,
+    but they must exactly match expressions given in the <literal>GROUP BY</>
+    clause of the current query level.  Bits are assigned with the rightmost
+    argument being the least-significant bit; each bit is 0 if the corresponding
+    expression is included in the grouping criteria of the grouping set generating
+    the result row, and 1 if it is not.  For example:
+<screen>
+<prompt>=&gt;</> <userinput>SELECT * FROM items_sold;</>
+ make  | model | sales
+-------+-------+-------
+ Foo   | GT    |  10
+ Foo   | Tour  |  20
+ Bar   | City  |  15
+ Bar   | Sport |  5
+(4 rows)
+
+<prompt>=&gt;</> <userinput>SELECT make, model, GROUPING(make,model), sum(sales) FROM items_sold GROUP BY ROLLUP(make,model);</>
+ make  | model | grouping | sum
+-------+-------+----------+-----
+ Foo   | GT    |        0 | 10
+ Foo   | Tour  |        0 | 20
+ Bar   | City  |        0 | 15
+ Bar   | Sport |        0 | 5
+ Foo   |       |        1 | 30
+ Bar   |       |        1 | 20
+       |       |        3 | 50
+(7 rows)
+</screen>
+   </para>
+
  </sect1>
 
  <sect1 id="functions-window">
diff --git a/doc/src/sgml/queries.sgml b/doc/src/sgml/queries.sgml
index 7dbad46..56419c7 100644
--- a/doc/src/sgml/queries.sgml
+++ b/doc/src/sgml/queries.sgml
@@ -1183,6 +1183,184 @@ SELECT product_id, p.name, (sum(s.units) * (p.price - p.cost)) AS profit
    </para>
   </sect2>
 
+  <sect2 id="queries-grouping-sets">
+   <title><literal>GROUPING SETS</>, <literal>CUBE</>, and <literal>ROLLUP</></title>
+
+   <indexterm zone="queries-grouping-sets">
+    <primary>GROUPING SETS</primary>
+   </indexterm>
+   <indexterm zone="queries-grouping-sets">
+    <primary>CUBE</primary>
+   </indexterm>
+   <indexterm zone="queries-grouping-sets">
+    <primary>ROLLUP</primary>
+   </indexterm>
+   <indexterm zone="queries-grouping-sets">
+    <primary>grouping sets</primary>
+   </indexterm>
+
+   <para>
+    More complex grouping operations than those described above are possible
+    using the concept of <firstterm>grouping sets</>.  The data selected by
+    the <literal>FROM</> and <literal>WHERE</> clauses is grouped separately
+    by each specified grouping set, aggregates computed for each group just as
+    for simple <literal>GROUP BY</> clauses, and then the results returned.
+    For example:
+<screen>
+<prompt>=&gt;</> <userinput>SELECT * FROM items_sold;</>
+ brand | size | sales
+-------+------+-------
+ Foo   | L    |  10
+ Foo   | M    |  20
+ Bar   | M    |  15
+ Bar   | L    |  5
+(4 rows)
+
+<prompt>=&gt;</> <userinput>SELECT brand, size, sum(sales) FROM items_sold GROUP BY GROUPING SETS ((brand), (size), ());</>
+ brand | size | sum
+-------+------+-----
+ Foo   |      |  30
+ Bar   |      |  20
+       | L    |  15
+       | M    |  35
+       |      |  50
+(5 rows)
+</screen>
+   </para>
+
+   <para>
+    Each sublist of <literal>GROUPING SETS</> may specify zero or more columns
+    or expressions and is interpreted the same way as though it were directly
+    in the <literal>GROUP BY</> clause.  An empty grouping set means that all
+    rows are aggregated down to a single group (which is output even if no
+    input rows were present), as described above for the case of aggregate
+    functions with no <literal>GROUP BY</> clause.
+   </para>
+
+   <para>
+    References to the grouping columns or expressions are replaced
+    by <literal>NULL</> values in result rows for grouping sets in which those
+    columns do not appear.  To distinguish which grouping a particular output
+    row resulted from, see <xref linkend="functions-grouping-table">.
+   </para>
+
+   <para>
+    A shorthand notation is provided for specifying two common types of grouping set.
+    A clause of the form
+<programlisting>
+ROLLUP ( <replaceable>e1</>, <replaceable>e2</>, <replaceable>e3</>, ... )
+</programlisting>
+    represents the given list of expressions and all prefixes of the list including
+    the empty list; thus it is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( <replaceable>e1</>, <replaceable>e2</>, <replaceable>e3</>, ... ),
+    ...
+    ( <replaceable>e1</>, <replaceable>e2</> ),
+    ( <replaceable>e1</> ),
+    ( )
+)
+</programlisting>
+    This is commonly used for analysis over hierarchical data; e.g. total
+    salary by department, division, and company-wide total.
+   </para>
+
+   <para>
+    A clause of the form
+<programlisting>
+CUBE ( <replaceable>e1</>, <replaceable>e2</>, ... )
+</programlisting>
+    represents the given list and all of its possible subsets (i.e. the power
+    set).  Thus
+<programlisting>
+CUBE ( a, b, c )
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( a, b, c ),
+    ( a, b    ),
+    ( a,    c ),
+    ( a       ),
+    (    b, c ),
+    (    b    ),
+    (       c ),
+    (         ),
+)
+</programlisting>
+   </para>
+
+   <para>
+    The individual elements of a <literal>CUBE</> or <literal>ROLLUP</>
+    clause may be either individual expressions, or sub-lists of elements in
+    parentheses.  In the latter case, the sub-lists are treated as single
+    units for the purposes of generating the individual grouping sets.
+    For example:
+<programlisting>
+CUBE ( (a,b), (c,d) )
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( a, b, c, d ),
+    ( a, b       ),
+    (       c, d ),
+    (            )
+)
+</programlisting>
+    and
+<programlisting>
+ROLLUP ( a, (b,c), d )
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( a, b, c, d ),
+    ( a, b, c    ),
+    ( a          ),
+    (            )
+)
+</programlisting>
+   </para>
+
+   <para>
+    The <literal>CUBE</> and <literal>ROLLUP</> constructs can be used either
+    directly in the <literal>GROUP BY</> clause, or nested inside a
+    <literal>GROUPING SETS</> clause.  If one <literal>GROUPING SETS</> clause
+    is nested inside another, the effect is the same as if all the elements of
+    the inner clause had been written directly in the outer clause.
+   </para>
+
+   <para>
+    If multiple grouping items are specified in a single <literal>GROUP BY</>
+    clause, then the final list of grouping sets is the cross product of the
+    individual items.  For example:
+<programlisting>
+GROUP BY a, CUBE(b,c), GROUPING SETS ((d), (e))
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUP BY GROUPING SETS (
+  (a,b,c,d), (a,b,c,e),
+  (a,b,d),   (a,b,e),
+  (a,c,d),   (a,c,e),
+  (a,d),     (a,e)
+)
+</programlisting>
+   </para>
+
+  <note>
+   <para>
+    The construct <literal>(a,b)</> is normally recognized in expressions as
+    a <link linkend="sql-syntax-row-constructors">row constructor</link>.
+    Within the <literal>GROUP BY</> clause, this does not apply at the top
+    levels of expressions, and <literal>(a,b)</> is parsed as a list of
+    expressions as described above.  If for some reason you <emphasis>need</>
+    a row constructor in a grouping expression, use <literal>ROW(a,b)</>.
+   </para>
+  </note>
+  </sect2>
+
   <sect2 id="queries-window">
    <title>Window Function Processing</title>
 
diff --git a/doc/src/sgml/ref/select.sgml b/doc/src/sgml/ref/select.sgml
index 01d24a5..d2df959 100644
--- a/doc/src/sgml/ref/select.sgml
+++ b/doc/src/sgml/ref/select.sgml
@@ -37,7 +37,7 @@ SELECT [ ALL | DISTINCT [ ON ( <replaceable class="parameter">expression</replac
     [ * | <replaceable class="parameter">expression</replaceable> [ [ AS ] <replaceable class="parameter">output_name</replaceable> ] [, ...] ]
     [ FROM <replaceable class="parameter">from_item</replaceable> [, ...] ]
     [ WHERE <replaceable class="parameter">condition</replaceable> ]
-    [ GROUP BY <replaceable class="parameter">expression</replaceable> [, ...] ]
+    [ GROUP BY <replaceable class="parameter">grouping_element</replaceable> [, ...] ]
     [ HAVING <replaceable class="parameter">condition</replaceable> [, ...] ]
     [ WINDOW <replaceable class="parameter">window_name</replaceable> AS ( <replaceable class="parameter">window_definition</replaceable> ) [, ...] ]
     [ { UNION | INTERSECT | EXCEPT } [ ALL | DISTINCT ] <replaceable class="parameter">select</replaceable> ]
@@ -60,6 +60,15 @@ SELECT [ ALL | DISTINCT [ ON ( <replaceable class="parameter">expression</replac
                 [ WITH ORDINALITY ] [ [ AS ] <replaceable class="parameter">alias</replaceable> [ ( <replaceable class="parameter">column_alias</replaceable> [, ...] ) ] ]
     <replaceable class="parameter">from_item</replaceable> [ NATURAL ] <replaceable class="parameter">join_type</replaceable> <replaceable class="parameter">from_item</replaceable> [ ON <replaceable class="parameter">join_condition</replaceable> | USING ( <replaceable class="parameter">join_column</replaceable> [, ...] ) ]
 
+<phrase>and <replaceable class="parameter">grouping_element</replaceable> can be one of:</phrase>
+
+    ( )
+    <replaceable class="parameter">expression</replaceable>
+    ( <replaceable class="parameter">expression</replaceable> [, ...] )
+    ROLLUP ( { <replaceable class="parameter">expression</replaceable> | ( <replaceable class="parameter">expression</replaceable> [, ...] ) } [, ...] )
+    CUBE ( { <replaceable class="parameter">expression</replaceable> | ( <replaceable class="parameter">expression</replaceable> [, ...] ) } [, ...] )
+    GROUPING SETS ( <replaceable class="parameter">grouping_element</replaceable> [, ...] )
+
 <phrase>and <replaceable class="parameter">with_query</replaceable> is:</phrase>
 
     <replaceable class="parameter">with_query_name</replaceable> [ ( <replaceable class="parameter">column_name</replaceable> [, ...] ) ] AS ( <replaceable class="parameter">select</replaceable> | <replaceable class="parameter">values</replaceable> | <replaceable class="parameter">insert</replaceable> | <replaceable class="parameter">update</replaceable> | <replaceable class="parameter">delete</replaceable> )
@@ -621,23 +630,35 @@ WHERE <replaceable class="parameter">condition</replaceable>
    <para>
     The optional <literal>GROUP BY</literal> clause has the general form
 <synopsis>
-GROUP BY <replaceable class="parameter">expression</replaceable> [, ...]
+GROUP BY <replaceable class="parameter">grouping_element</replaceable> [, ...]
 </synopsis>
    </para>
 
    <para>
     <literal>GROUP BY</literal> will condense into a single row all
     selected rows that share the same values for the grouped
-    expressions.  <replaceable
-    class="parameter">expression</replaceable> can be an input column
-    name, or the name or ordinal number of an output column
-    (<command>SELECT</command> list item), or an arbitrary
+    expressions.  An <replaceable
+    class="parameter">expression</replaceable> used inside a
+    <replaceable class="parameter">grouping_element</replaceable>
+    can be an input column name, or the name or ordinal number of an
+    output column (<command>SELECT</command> list item), or an arbitrary
     expression formed from input-column values.  In case of ambiguity,
     a <literal>GROUP BY</literal> name will be interpreted as an
     input-column name rather than an output column name.
    </para>
 
    <para>
+    If any of <literal>GROUPING SETS</>, <literal>ROLLUP</> or
+    <literal>CUBE</> are present as grouping elements, then the
+    <literal>GROUP BY</> clause as a whole defines some number of
+    independent <replaceable>grouping sets</>.  The effect of this is
+    equivalent to constructing a <literal>UNION ALL</> between
+    subqueries with the individual grouping sets as their
+    <literal>GROUP BY</> clauses.  For further details on the handling
+    of grouping sets see <xref linkend="queries-grouping-sets">.
+   </para>
+
+   <para>
     Aggregate functions, if any are used, are computed across all rows
     making up each group, producing a separate value for each group.
     (If there are aggregate functions but no <literal>GROUP BY</literal>
diff --git a/src/backend/catalog/sql_features.txt b/src/backend/catalog/sql_features.txt
index 3329264..db6a385 100644
--- a/src/backend/catalog/sql_features.txt
+++ b/src/backend/catalog/sql_features.txt
@@ -467,9 +467,9 @@ T331	Basic roles			YES
 T332	Extended roles			NO	mostly supported
 T341	Overloading of SQL-invoked functions and procedures			YES	
 T351	Bracketed SQL comments (/*...*/ comments)			YES	
-T431	Extended grouping capabilities			NO	
-T432	Nested and concatenated GROUPING SETS			NO	
-T433	Multiargument GROUPING function			NO	
+T431	Extended grouping capabilities			YES	
+T432	Nested and concatenated GROUPING SETS			YES	
+T433	Multiargument GROUPING function			YES	
 T434	GROUP BY DISTINCT			NO	
 T441	ABS and MOD functions			YES	
 T461	Symmetric BETWEEN predicate			YES	
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 7cfc9bb..30c55ae 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -82,6 +82,9 @@ static void show_merge_append_keys(MergeAppendState *mstate, List *ancestors,
 					   ExplainState *es);
 static void show_agg_keys(AggState *astate, List *ancestors,
 			  ExplainState *es);
+static void show_grouping_set_keys(PlanState *planstate, const char *qlabel,
+				int nkeys, AttrNumber *keycols, List *gsets,
+				List *ancestors, ExplainState *es);
 static void show_group_keys(GroupState *gstate, List *ancestors,
 				ExplainState *es);
 static void show_sort_group_keys(PlanState *planstate, const char *qlabel,
@@ -979,6 +982,10 @@ ExplainNode(PlanState *planstate, List *ancestors,
 					pname = "GroupAggregate";
 					strategy = "Sorted";
 					break;
+				case AGG_CHAINED:
+					pname = "ChainAggregate";
+					strategy = "Chained";
+					break;
 				case AGG_HASHED:
 					pname = "HashAggregate";
 					strategy = "Hashed";
@@ -1817,18 +1824,78 @@ show_agg_keys(AggState *astate, List *ancestors,
 {
 	Agg		   *plan = (Agg *) astate->ss.ps.plan;
 
-	if (plan->numCols > 0)
+	if (plan->numCols > 0 || plan->groupingSets)
 	{
 		/* The key columns refer to the tlist of the child plan */
 		ancestors = lcons(astate, ancestors);
-		show_sort_group_keys(outerPlanState(astate), "Group Key",
-							 plan->numCols, plan->grpColIdx,
-							 NULL, NULL, NULL,
-							 ancestors, es);
+
+		if (plan->groupingSets)
+			show_grouping_set_keys(outerPlanState(astate), "Grouping Sets",
+								   plan->numCols, plan->grpColIdx,
+								   plan->groupingSets,
+								   ancestors, es);
+		else
+			show_sort_group_keys(outerPlanState(astate), "Group Key",
+								 plan->numCols, plan->grpColIdx,
+								 NULL, NULL, NULL,
+								 ancestors, es);
+
 		ancestors = list_delete_first(ancestors);
 	}
 }
 
+static void
+show_grouping_set_keys(PlanState *planstate, const char *qlabel,
+					   int nkeys, AttrNumber *keycols, List *gsets,
+					   List *ancestors, ExplainState *es)
+{
+	Plan	   *plan = planstate->plan;
+	List	   *context;
+	bool		useprefix;
+	char	   *exprstr;
+	ListCell   *lc;
+
+	if (gsets == NIL)
+		return;
+
+	/* Set up deparsing context */
+	context = set_deparse_context_planstate(es->deparse_cxt,
+											(Node *) planstate,
+											ancestors);
+	useprefix = (list_length(es->rtable) > 1 || es->verbose);
+
+	ExplainOpenGroup("Grouping Sets", "Grouping Sets", false, es);
+
+	foreach(lc, gsets)
+	{
+		List	   *result = NIL;
+		ListCell   *lc2;
+
+		foreach(lc2, (List *) lfirst(lc))
+		{
+			Index		i = lfirst_int(lc2);
+			AttrNumber	keyresno = keycols[i];
+			TargetEntry *target = get_tle_by_resno(plan->targetlist,
+												   keyresno);
+
+			if (!target)
+				elog(ERROR, "no tlist entry for key %d", keyresno);
+			/* Deparse the expression, showing any top-level cast */
+			exprstr = deparse_expression((Node *) target->expr, context,
+										 useprefix, true);
+
+			result = lappend(result, exprstr);
+		}
+
+		if (!result && es->format == EXPLAIN_FORMAT_TEXT)
+			ExplainPropertyText("Group Key", "()", es);
+		else
+			ExplainPropertyListNested("Group Key", result, es);
+	}
+
+	ExplainCloseGroup("Grouping Sets", "Grouping Sets", false, es);
+}
+
 /*
  * Show the grouping keys for a Group node.
  */
@@ -2454,6 +2521,52 @@ ExplainPropertyList(const char *qlabel, List *data, ExplainState *es)
 }
 
 /*
+ * Explain a property that takes the form of a list of unlabeled items within
+ * another list.  "data" is a list of C strings.
+ */
+void
+ExplainPropertyListNested(const char *qlabel, List *data, ExplainState *es)
+{
+	ListCell   *lc;
+	bool		first = true;
+
+	switch (es->format)
+	{
+		case EXPLAIN_FORMAT_TEXT:
+		case EXPLAIN_FORMAT_XML:
+			ExplainPropertyList(qlabel, data, es);
+			return;
+
+		case EXPLAIN_FORMAT_JSON:
+			ExplainJSONLineEnding(es);
+			appendStringInfoSpaces(es->str, es->indent * 2);
+			appendStringInfoChar(es->str, '[');
+			foreach(lc, data)
+			{
+				if (!first)
+					appendStringInfoString(es->str, ", ");
+				escape_json(es->str, (const char *) lfirst(lc));
+				first = false;
+			}
+			appendStringInfoChar(es->str, ']');
+			break;
+
+		case EXPLAIN_FORMAT_YAML:
+			ExplainYAMLLineStarting(es);
+			appendStringInfoString(es->str, "- [");
+			foreach(lc, data)
+			{
+				if (!first)
+					appendStringInfoString(es->str, ", ");
+				escape_yaml(es->str, (const char *) lfirst(lc));
+				first = false;
+			}
+			appendStringInfoChar(es->str, ']');
+			break;
+	}
+}
+
+/*
  * Explain a simple property.
  *
  * If "numeric" is true, the value is a number (or other value that
diff --git a/src/backend/executor/execQual.c b/src/backend/executor/execQual.c
index 0e7400f..d4fb054 100644
--- a/src/backend/executor/execQual.c
+++ b/src/backend/executor/execQual.c
@@ -75,6 +75,8 @@ static Datum ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
 				  bool *isNull, ExprDoneCond *isDone);
 static Datum ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
 					  bool *isNull, ExprDoneCond *isDone);
+static Datum ExecEvalScalarGroupedVarFast(ExprState *exprstate, ExprContext *econtext,
+					  bool *isNull, ExprDoneCond *isDone);
 static Datum ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate,
 					ExprContext *econtext,
 					bool *isNull, ExprDoneCond *isDone);
@@ -182,6 +184,9 @@ static Datum ExecEvalArrayCoerceExpr(ArrayCoerceExprState *astate,
 						bool *isNull, ExprDoneCond *isDone);
 static Datum ExecEvalCurrentOfExpr(ExprState *exprstate, ExprContext *econtext,
 					  bool *isNull, ExprDoneCond *isDone);
+static Datum ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
+						ExprContext *econtext,
+						bool *isNull, ExprDoneCond *isDone);
 
 
 /* ----------------------------------------------------------------
@@ -569,6 +574,8 @@ ExecEvalWindowFunc(WindowFuncExprState *wfunc, ExprContext *econtext,
  * Note: ExecEvalScalarVar is executed only the first time through in a given
  * plan; it changes the ExprState's function pointer to pass control directly
  * to ExecEvalScalarVarFast after making one-time checks.
+ *
+ * We share this code with GroupedVar for simplicity.
  * ----------------------------------------------------------------
  */
 static Datum
@@ -646,8 +653,24 @@ ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
 		}
 	}
 
-	/* Skip the checking on future executions of node */
-	exprstate->evalfunc = ExecEvalScalarVarFast;
+	if (IsA(variable, GroupedVar))
+	{
+		Assert(variable->varno == OUTER_VAR);
+
+		/* Skip the checking on future executions of node */
+		exprstate->evalfunc = ExecEvalScalarGroupedVarFast;
+
+		if (!bms_is_member(attnum, econtext->grouped_cols))
+		{
+			*isNull = true;
+			return (Datum) 0;
+		}
+	}
+	else
+	{
+		/* Skip the checking on future executions of node */
+		exprstate->evalfunc = ExecEvalScalarVarFast;
+	}
 
 	/* Fetch the value from the slot */
 	return slot_getattr(slot, attnum, isNull);
@@ -695,6 +718,31 @@ ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
 	return slot_getattr(slot, attnum, isNull);
 }
 
+static Datum
+ExecEvalScalarGroupedVarFast(ExprState *exprstate, ExprContext *econtext,
+							 bool *isNull, ExprDoneCond *isDone)
+{
+	GroupedVar *variable = (GroupedVar *) exprstate->expr;
+	TupleTableSlot *slot;
+	AttrNumber	attnum;
+
+	if (isDone)
+		*isDone = ExprSingleResult;
+
+	slot = econtext->ecxt_outertuple;
+
+	attnum = variable->varattno;
+
+	if (!bms_is_member(attnum, econtext->grouped_cols))
+	{
+		*isNull = true;
+		return (Datum) 0;
+	}
+
+	/* Fetch the value from the slot */
+	return slot_getattr(slot, attnum, isNull);
+}
+
 /* ----------------------------------------------------------------
  *		ExecEvalWholeRowVar
  *
@@ -3024,6 +3072,44 @@ ExecEvalCaseTestExpr(ExprState *exprstate,
 	return econtext->caseValue_datum;
 }
 
+/*
+ * ExecEvalGroupingFuncExpr
+ *
+ * Return a bitmask with a bit for each (unevaluated) argument expression
+ * (rightmost arg is least significant bit).
+ *
+ * A bit is set if the corresponding expression is NOT part of the set of
+ * grouping expressions in the current grouping set.
+ */
+
+static Datum
+ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
+						 ExprContext *econtext,
+						 bool *isNull,
+						 ExprDoneCond *isDone)
+{
+	int result = 0;
+	int attnum = 0;
+	ListCell *lc;
+
+	if (isDone)
+		*isDone = ExprSingleResult;
+
+	*isNull = false;
+
+	foreach(lc, (gstate->clauses))
+	{
+		attnum = lfirst_int(lc);
+
+		result = result << 1;
+
+		if (!bms_is_member(attnum, econtext->grouped_cols))
+			result = result | 1;
+	}
+
+	return (Datum) result;
+}
+
 /* ----------------------------------------------------------------
  *		ExecEvalArray - ARRAY[] expressions
  * ----------------------------------------------------------------
@@ -4423,6 +4509,11 @@ ExecInitExpr(Expr *node, PlanState *parent)
 				state->evalfunc = ExecEvalScalarVar;
 			}
 			break;
+		case T_GroupedVar:
+			Assert(((Var *) node)->varattno != InvalidAttrNumber);
+			state = (ExprState *) makeNode(ExprState);
+			state->evalfunc = ExecEvalScalarVar;
+			break;
 		case T_Const:
 			state = (ExprState *) makeNode(ExprState);
 			state->evalfunc = ExecEvalConst;
@@ -4491,6 +4582,27 @@ ExecInitExpr(Expr *node, PlanState *parent)
 				state = (ExprState *) astate;
 			}
 			break;
+		case T_GroupingFunc:
+			{
+				GroupingFunc *grp_node = (GroupingFunc *) node;
+				GroupingFuncExprState *grp_state = makeNode(GroupingFuncExprState);
+				Agg		   *agg = NULL;
+
+				if (!parent
+					|| !IsA(parent->plan, Agg))
+					elog(ERROR, "Parent of GROUPING is not Agg node");
+
+				agg = (Agg *) (parent->plan);
+
+				if (agg->groupingSets)
+					grp_state->clauses = grp_node->cols;
+				else
+					grp_state->clauses = NIL;
+
+				state = (ExprState *) grp_state;
+				state->evalfunc = (ExprStateEvalFunc) ExecEvalGroupingFuncExpr;
+			}
+			break;
 		case T_WindowFunc:
 			{
 				WindowFunc *wfunc = (WindowFunc *) node;
diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c
index 32697dd..91fa568 100644
--- a/src/backend/executor/execUtils.c
+++ b/src/backend/executor/execUtils.c
@@ -151,6 +151,7 @@ CreateExecutorState(void)
 	estate->es_epqTupleSet = NULL;
 	estate->es_epqScanDone = NULL;
 
+	estate->agg_chain_head = NULL;
 	/*
 	 * Return the executor state structure
 	 */
@@ -651,9 +652,10 @@ get_last_attnums(Node *node, ProjectionInfo *projInfo)
 	/*
 	 * Don't examine the arguments or filters of Aggrefs or WindowFuncs,
 	 * because those do not represent expressions to be evaluated within the
-	 * overall targetlist's econtext.
+	 * overall targetlist's econtext.  GroupingFunc arguments are never
+	 * evaluated at all.
 	 */
-	if (IsA(node, Aggref))
+	if (IsA(node, Aggref) || IsA(node, GroupingFunc))
 		return false;
 	if (IsA(node, WindowFunc))
 		return false;
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index 8079d97..f00285d 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -45,15 +45,19 @@
  *	  needed to allow resolution of a polymorphic aggregate's result type.
  *
  *	  We compute aggregate input expressions and run the transition functions
- *	  in a temporary econtext (aggstate->tmpcontext).  This is reset at
- *	  least once per input tuple, so when the transvalue datatype is
+ *	  in a temporary econtext (aggstate->tmpcontext).  This is reset at least
+ *	  once per input tuple, so when the transvalue datatype is
  *	  pass-by-reference, we have to be careful to copy it into a longer-lived
- *	  memory context, and free the prior value to avoid memory leakage.
- *	  We store transvalues in the memory context aggstate->aggcontext,
- *	  which is also used for the hashtable structures in AGG_HASHED mode.
- *	  The node's regular econtext (aggstate->ss.ps.ps_ExprContext)
- *	  is used to run finalize functions and compute the output tuple;
- *	  this context can be reset once per output tuple.
+ *	  memory context, and free the prior value to avoid memory leakage.  We
+ *	  store transvalues in another set of econtexts, aggstate->aggcontexts (one
+ *	  per grouping set, see below), which are also used for the hashtable
+ *	  structures in AGG_HASHED mode.  These econtexts are rescanned, not just
+ *	  reset, at group boundaries so that aggregate transition functions can
+ *	  register shutdown callbacks via AggRegisterCallback.
+ *
+ *	  The node's regular econtext (aggstate->ss.ps.ps_ExprContext) is used to
+ *	  run finalize functions and compute the output tuple; this context can be
+ *	  reset once per output tuple.
  *
  *	  The executor's AggState node is passed as the fmgr "context" value in
  *	  all transfunc and finalfunc calls.  It is not recommended that the
@@ -84,6 +88,48 @@
  *	  need some fallback logic to use this, since there's no Aggref node
  *	  for a window function.)
  *
+ *	  Grouping sets:
+ *
+ *	  A list of grouping sets which is structurally equivalent to a ROLLUP
+ *	  clause (e.g. (a,b,c), (a,b), (a)) can be processed in a single pass over
+ *	  ordered data.  We do this by keeping a separate set of transition values
+ *	  for each grouping set being concurrently processed; for each input tuple
+ *	  we update them all, and on group boundaries we reset some initial subset
+ *	  of the states (the list of grouping sets is ordered from most specific to
+ *	  least specific).  One AGG_SORTED node thus handles any number of grouping
+ *	  sets as long as they share a sort order.
+ *
+ *	  To handle multiple grouping sets that _don't_ share a sort order, we use
+ *	  a different strategy.  An AGG_CHAINED node receives rows in sorted order
+ *	  and returns them unchanged, but computes transition values for its own
+ *	  list of grouping sets.  At group boundaries, rather than returning the
+ *	  aggregated row (which is incompatible with the input rows), it writes it
+ *	  to a side-channel in the form of a tuplestore.  Thus, a number of
+ *	  AGG_CHAINED nodes are associated with a single AGG_SORTED node (the
+ *	  "chain head"), which creates the side channel and, when it has returned
+ *	  all of its own data, returns the tuples from the tuplestore to its own
+ *	  caller.
+ *
+ *	  (Because the AGG_CHAINED node does not project aggregate values into the
+ *	  main executor path, its targetlist and qual are dummy, and it gets the
+ *	  real aggregate targetlist and qual from the chain head node.)
+ *
+ *	  In order to avoid excess memory consumption from a chain of alternating
+ *	  Sort and AGG_CHAINED nodes, we reset each child Sort node preemptively,
+ *	  allowing us to cap the memory usage for all the sorts in the chain at
+ *	  twice the usage for a single node.
+ *
+ *	  From the perspective of aggregate transition and final functions, the
+ *	  only issue regarding grouping sets is this: a single call site (flinfo)
+ *	  of an aggregate function may be used for updating several different
+ *	  transition values in turn. So the function must not cache in the flinfo
+ *	  anything which logically belongs as part of the transition value (most
+ *	  importantly, the memory context in which the transition value exists).
+ *	  The support API functions (AggCheckCallContext, AggRegisterCallback) are
+ *	  sensitive to the grouping set for which the aggregate function is
+ *	  currently being called.
+ *
+ *	  TODO: AGG_HASHED doesn't support multiple grouping sets yet.
  *
  * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
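The single-pass rollup scheme described in the comment above (one transition state per grouping set, with only a prefix of states reset at each group boundary) can be simulated in a few dozen lines; this is a standalone illustration with invented names, not the executor's implementation.

```c
/*
 * Standalone simulation (not executor code) of the single-pass rollup
 * described above: COUNT(*) over input sorted on (a,b) for the grouping
 * sets (a,b), (a), (), ordered most specific first.  trans[] holds one
 * transition state per set; at each group boundary only the prefix of
 * sets whose grouping columns changed is emitted and reset.
 */
#define N_SETS 3

static const int set_len[N_SETS] = {2, 1, 0};	/* columns per grouping set */

static long emitted[16];		/* results, in emission order */
static int	n_emitted = 0;

static void
rollup_count(const int rows[][2], int nrows)
{
	long	trans[N_SETS] = {0, 0, 0};
	int		i,
			setno;

	for (i = 0; i < nrows; i++)
	{
		if (i > 0)
		{
			int		same = 0;	/* length of matching column prefix */

			while (same < 2 && rows[i][same] == rows[i - 1][same])
				same++;

			/* emit and reset every set grouping on more than `same` cols */
			for (setno = 0; setno < N_SETS && set_len[setno] > same; setno++)
			{
				emitted[n_emitted++] = trans[setno];
				trans[setno] = 0;
			}
		}
		for (setno = 0; setno < N_SETS; setno++)
			trans[setno]++;		/* advance all transition states */
	}
	/* end of input: emit every set, most specific first */
	for (setno = 0; setno < N_SETS; setno++)
		emitted[n_emitted++] = trans[setno];
}

/* run a fixed sample once and return the k'th emitted count */
static long
rollup_demo(int k)
{
	static const int rows[4][2] = {{1, 1}, {1, 1}, {1, 2}, {2, 2}};

	if (n_emitted == 0)
		rollup_count(rows, 4);
	return emitted[k];
}
```

On the sample input (1,1), (1,1), (1,2), (2,2) this emits the (a,b) groups 2, 1, 1 interleaved with the a-level subtotals 3, 1 and the grand total 4, exactly the interleaving an AGG_SORTED node with these three sets would produce.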
@@ -241,9 +287,11 @@ typedef struct AggStatePerAggData
 	 * then at completion of the input tuple group, we scan the sorted values,
 	 * eliminate duplicates if needed, and run the transition function on the
 	 * rest.
+	 *
+	 * We need a separate tuplesort for each grouping set.
 	 */
 
-	Tuplesortstate *sortstate;	/* sort object, if DISTINCT or ORDER BY */
+	Tuplesortstate **sortstates;	/* sort objects, if DISTINCT or ORDER BY */
 
 	/*
 	 * This field is a pre-initialized FunctionCallInfo struct used for
@@ -304,7 +352,8 @@ typedef struct AggHashEntryData
 
 static void initialize_aggregates(AggState *aggstate,
 					  AggStatePerAgg peragg,
-					  AggStatePerGroup pergroup);
+					  AggStatePerGroup pergroup,
+					  int numReset);
 static void advance_transition_function(AggState *aggstate,
 							AggStatePerAgg peraggstate,
 							AggStatePerGroup pergroupstate);
@@ -325,6 +374,7 @@ static void build_hash_table(AggState *aggstate);
 static AggHashEntry lookup_hash_entry(AggState *aggstate,
 				  TupleTableSlot *inputslot);
 static TupleTableSlot *agg_retrieve_direct(AggState *aggstate);
+static TupleTableSlot *agg_retrieve_chained(AggState *aggstate);
 static void agg_fill_hash_table(AggState *aggstate);
 static TupleTableSlot *agg_retrieve_hash_table(AggState *aggstate);
 static Datum GetAggInitVal(Datum textInitVal, Oid transtype);
@@ -333,90 +383,109 @@ static Datum GetAggInitVal(Datum textInitVal, Oid transtype);
 /*
  * Initialize all aggregates for a new group of input values.
  *
+ * If there are multiple grouping sets, we initialize only the first numReset
+ * of them (the grouping sets are ordered so that the most specific one, which
+ * is reset most often, is first). As a convenience, if numReset is < 1, we
+ * reinitialize all sets.
+ *
  * When called, CurrentMemoryContext should be the per-query context.
  */
 static void
 initialize_aggregates(AggState *aggstate,
 					  AggStatePerAgg peragg,
-					  AggStatePerGroup pergroup)
+					  AggStatePerGroup pergroup,
+					  int numReset)
 {
 	int			aggno;
+	int         numGroupingSets = Max(aggstate->numsets, 1);
+	int         setno = 0;
+
+	if (numReset < 1)
+		numReset = numGroupingSets;
 
 	for (aggno = 0; aggno < aggstate->numaggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &peragg[aggno];
-		AggStatePerGroup pergroupstate = &pergroup[aggno];
 
 		/*
 		 * Start a fresh sort operation for each DISTINCT/ORDER BY aggregate.
 		 */
 		if (peraggstate->numSortCols > 0)
 		{
-			/*
-			 * In case of rescan, maybe there could be an uncompleted sort
-			 * operation?  Clean it up if so.
-			 */
-			if (peraggstate->sortstate)
-				tuplesort_end(peraggstate->sortstate);
+			for (setno = 0; setno < numReset; setno++)
+			{
+				/*
+				 * In case of rescan, maybe there could be an uncompleted sort
+				 * operation?  Clean it up if so.
+				 */
+				if (peraggstate->sortstates[setno])
+					tuplesort_end(peraggstate->sortstates[setno]);
 
-			/*
-			 * We use a plain Datum sorter when there's a single input column;
-			 * otherwise sort the full tuple.  (See comments for
-			 * process_ordered_aggregate_single.)
-			 *
-			 * In the future, we should consider forcing the
-			 * tuplesort_begin_heap() case when the abbreviated key
-			 * optimization can thereby be used, even when numInputs is 1.
-			 */
-			peraggstate->sortstate =
-				(peraggstate->numInputs == 1) ?
-				tuplesort_begin_datum(peraggstate->evaldesc->attrs[0]->atttypid,
-									  peraggstate->sortOperators[0],
-									  peraggstate->sortCollations[0],
-									  peraggstate->sortNullsFirst[0],
-									  work_mem, false) :
-				tuplesort_begin_heap(peraggstate->evaldesc,
-									 peraggstate->numSortCols,
-									 peraggstate->sortColIdx,
-									 peraggstate->sortOperators,
-									 peraggstate->sortCollations,
-									 peraggstate->sortNullsFirst,
-									 work_mem, false);
+				/*
+				 * We use a plain Datum sorter when there's a single input column;
+				 * otherwise sort the full tuple.  (See comments for
+				 * process_ordered_aggregate_single.)
+				 *
+				 * In the future, we should consider forcing the
+				 * tuplesort_begin_heap() case when the abbreviated key
+				 * optimization can thereby be used, even when numInputs is 1.
+				 */
+				peraggstate->sortstates[setno] =
+					(peraggstate->numInputs == 1) ?
+					tuplesort_begin_datum(peraggstate->evaldesc->attrs[0]->atttypid,
+										  peraggstate->sortOperators[0],
+										  peraggstate->sortCollations[0],
+										  peraggstate->sortNullsFirst[0],
+										  work_mem, false) :
+					tuplesort_begin_heap(peraggstate->evaldesc,
+										 peraggstate->numSortCols,
+										 peraggstate->sortColIdx,
+										 peraggstate->sortOperators,
+										 peraggstate->sortCollations,
+										 peraggstate->sortNullsFirst,
+										 work_mem, false);
+			}
 		}
 
-		/*
-		 * (Re)set transValue to the initial value.
-		 *
-		 * Note that when the initial value is pass-by-ref, we must copy it
-		 * (into the aggcontext) since we will pfree the transValue later.
-		 */
-		if (peraggstate->initValueIsNull)
-			pergroupstate->transValue = peraggstate->initValue;
-		else
+		for (setno = 0; setno < numReset; setno++)
 		{
-			MemoryContext oldContext;
+			AggStatePerGroup pergroupstate = &pergroup[aggno + (setno * (aggstate->numaggs))];
 
-			oldContext = MemoryContextSwitchTo(aggstate->aggcontext);
-			pergroupstate->transValue = datumCopy(peraggstate->initValue,
-												  peraggstate->transtypeByVal,
-												  peraggstate->transtypeLen);
-			MemoryContextSwitchTo(oldContext);
+			/*
+			 * (Re)set transValue to the initial value.
+			 *
+			 * Note that when the initial value is pass-by-ref, we must copy it
+			 * (into the aggcontext) since we will pfree the transValue later.
+			 */
+			if (peraggstate->initValueIsNull)
+				pergroupstate->transValue = peraggstate->initValue;
+			else
+			{
+				MemoryContext oldContext;
+
+				oldContext = MemoryContextSwitchTo(aggstate->aggcontexts[setno]->ecxt_per_tuple_memory);
+				pergroupstate->transValue = datumCopy(peraggstate->initValue,
+													  peraggstate->transtypeByVal,
+													  peraggstate->transtypeLen);
+				MemoryContextSwitchTo(oldContext);
+			}
+			pergroupstate->transValueIsNull = peraggstate->initValueIsNull;
+
+			/*
+			 * If the initial value for the transition state doesn't exist in the
+			 * pg_aggregate table then we will let the first non-NULL value
+			 * returned from the outer procNode become the initial value. (This is
+			 * useful for aggregates like max() and min().) The noTransValue flag
+			 * signals that we still need to do this.
+			 */
+			pergroupstate->noTransValue = peraggstate->initValueIsNull;
 		}
-		pergroupstate->transValueIsNull = peraggstate->initValueIsNull;
-
-		/*
-		 * If the initial value for the transition state doesn't exist in the
-		 * pg_aggregate table then we will let the first non-NULL value
-		 * returned from the outer procNode become the initial value. (This is
-		 * useful for aggregates like max() and min().) The noTransValue flag
-		 * signals that we still need to do this.
-		 */
-		pergroupstate->noTransValue = peraggstate->initValueIsNull;
 	}
 }
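The flattened per-group state layout and the numReset convention used in initialize_aggregates can be restated as two tiny helpers (hypothetical names, for illustration only; the index formula is the one used in the code above).

```c
/*
 * Hypothetical helpers (not patch code) restating two conventions from
 * initialize_aggregates: per-group transition states form one flat array
 * indexed set-major (all aggregates of set 0, then set 1, ...), and a
 * numReset of < 1 means "reinitialize every grouping set".
 */
static int
pergroup_index(int aggno, int setno, int numAggs)
{
	return aggno + (setno * numAggs);
}

static int
effective_reset_count(int numReset, int numGroupingSets)
{
	return (numReset < 1) ? numGroupingSets : numReset;
}
```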
 
 /*
- * Given new input value(s), advance the transition function of an aggregate.
+ * Given new input value(s), advance the transition function of one aggregate
+ * within one grouping set only (already set in aggstate->current_set).
  *
  * The new values (and null flags) have been preloaded into argument positions
  * 1 and up in peraggstate->transfn_fcinfo, so that we needn't copy them again
@@ -459,7 +528,7 @@ advance_transition_function(AggState *aggstate,
 			 * We must copy the datum into aggcontext if it is pass-by-ref. We
 			 * do not need to pfree the old transValue, since it's NULL.
 			 */
-			oldContext = MemoryContextSwitchTo(aggstate->aggcontext);
+			oldContext = MemoryContextSwitchTo(aggstate->aggcontexts[aggstate->current_set]->ecxt_per_tuple_memory);
 			pergroupstate->transValue = datumCopy(fcinfo->arg[1],
 												  peraggstate->transtypeByVal,
 												  peraggstate->transtypeLen);
@@ -507,7 +576,7 @@ advance_transition_function(AggState *aggstate,
 	{
 		if (!fcinfo->isnull)
 		{
-			MemoryContextSwitchTo(aggstate->aggcontext);
+			MemoryContextSwitchTo(aggstate->aggcontexts[aggstate->current_set]->ecxt_per_tuple_memory);
 			newVal = datumCopy(newVal,
 							   peraggstate->transtypeByVal,
 							   peraggstate->transtypeLen);
@@ -534,11 +603,13 @@ static void
 advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 {
 	int			aggno;
+	int         setno = 0;
+	int         numGroupingSets = Max(aggstate->numsets, 1);
+	int         numAggs = aggstate->numaggs;
 
-	for (aggno = 0; aggno < aggstate->numaggs; aggno++)
+	for (aggno = 0; aggno < numAggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &aggstate->peragg[aggno];
-		AggStatePerGroup pergroupstate = &pergroup[aggno];
 		ExprState  *filter = peraggstate->aggrefstate->aggfilter;
 		int			numTransInputs = peraggstate->numTransInputs;
 		int			i;
@@ -582,13 +653,16 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 					continue;
 			}
 
-			/* OK, put the tuple into the tuplesort object */
-			if (peraggstate->numInputs == 1)
-				tuplesort_putdatum(peraggstate->sortstate,
-								   slot->tts_values[0],
-								   slot->tts_isnull[0]);
-			else
-				tuplesort_puttupleslot(peraggstate->sortstate, slot);
+			for (setno = 0; setno < numGroupingSets; setno++)
+			{
+				/* OK, put the tuple into the tuplesort object */
+				if (peraggstate->numInputs == 1)
+					tuplesort_putdatum(peraggstate->sortstates[setno],
+									   slot->tts_values[0],
+									   slot->tts_isnull[0]);
+				else
+					tuplesort_puttupleslot(peraggstate->sortstates[setno], slot);
+			}
 		}
 		else
 		{
@@ -604,7 +678,14 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 				fcinfo->argnull[i + 1] = slot->tts_isnull[i];
 			}
 
-			advance_transition_function(aggstate, peraggstate, pergroupstate);
+			for (setno = 0; setno < numGroupingSets; setno++)
+			{
+				AggStatePerGroup pergroupstate = &pergroup[aggno + (setno * numAggs)];
+
+				aggstate->current_set = setno;
+
+				advance_transition_function(aggstate, peraggstate, pergroupstate);
+			}
 		}
 	}
 }
@@ -627,6 +708,9 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
  * is around 300% faster.  (The speedup for by-reference types is less
  * but still noticeable.)
  *
+ * This function handles only one grouping set (already set in
+ * aggstate->current_set).
+ *
  * When called, CurrentMemoryContext should be the per-query context.
  */
 static void
@@ -646,7 +730,7 @@ process_ordered_aggregate_single(AggState *aggstate,
 
 	Assert(peraggstate->numDistinctCols < 2);
 
-	tuplesort_performsort(peraggstate->sortstate);
+	tuplesort_performsort(peraggstate->sortstates[aggstate->current_set]);
 
 	/* Load the column into argument 1 (arg 0 will be transition value) */
 	newVal = fcinfo->arg + 1;
@@ -658,7 +742,7 @@ process_ordered_aggregate_single(AggState *aggstate,
 	 * pfree them when they are no longer needed.
 	 */
 
-	while (tuplesort_getdatum(peraggstate->sortstate, true,
+	while (tuplesort_getdatum(peraggstate->sortstates[aggstate->current_set], true,
 							  newVal, isNull))
 	{
 		/*
@@ -702,8 +786,8 @@ process_ordered_aggregate_single(AggState *aggstate,
 	if (!oldIsNull && !peraggstate->inputtypeByVal)
 		pfree(DatumGetPointer(oldVal));
 
-	tuplesort_end(peraggstate->sortstate);
-	peraggstate->sortstate = NULL;
+	tuplesort_end(peraggstate->sortstates[aggstate->current_set]);
+	peraggstate->sortstates[aggstate->current_set] = NULL;
 }
 
 /*
@@ -713,6 +797,9 @@ process_ordered_aggregate_single(AggState *aggstate,
  * sort, read out the values in sorted order, and run the transition
  * function on each value (applying DISTINCT if appropriate).
  *
+ * This function handles only one grouping set (already set in
+ * aggstate->current_set).
+ *
  * When called, CurrentMemoryContext should be the per-query context.
  */
 static void
@@ -729,13 +816,13 @@ process_ordered_aggregate_multi(AggState *aggstate,
 	bool		haveOldValue = false;
 	int			i;
 
-	tuplesort_performsort(peraggstate->sortstate);
+	tuplesort_performsort(peraggstate->sortstates[aggstate->current_set]);
 
 	ExecClearTuple(slot1);
 	if (slot2)
 		ExecClearTuple(slot2);
 
-	while (tuplesort_gettupleslot(peraggstate->sortstate, true, slot1))
+	while (tuplesort_gettupleslot(peraggstate->sortstates[aggstate->current_set], true, slot1))
 	{
 		/*
 		 * Extract the first numTransInputs columns as datums to pass to the
@@ -783,13 +870,16 @@ process_ordered_aggregate_multi(AggState *aggstate,
 	if (slot2)
 		ExecClearTuple(slot2);
 
-	tuplesort_end(peraggstate->sortstate);
-	peraggstate->sortstate = NULL;
+	tuplesort_end(peraggstate->sortstates[aggstate->current_set]);
+	peraggstate->sortstates[aggstate->current_set] = NULL;
 }
 
 /*
  * Compute the final value of one aggregate.
  *
+ * This function handles only one grouping set (already set in
+ * aggstate->current_set).
+ *
  * The finalfunction will be run, and the result delivered, in the
  * output-tuple context; caller's CurrentMemoryContext does not matter.
  */
@@ -836,7 +926,7 @@ finalize_aggregate(AggState *aggstate,
 		/* set up aggstate->curperagg for AggGetAggref() */
 		aggstate->curperagg = peraggstate;
 
-		InitFunctionCallInfoData(fcinfo, &(peraggstate->finalfn),
+		InitFunctionCallInfoData(fcinfo, &peraggstate->finalfn,
 								 numFinalArgs,
 								 peraggstate->aggCollation,
 								 (void *) aggstate, NULL);
@@ -920,7 +1010,8 @@ find_unaggregated_cols_walker(Node *node, Bitmapset **colnos)
 		*colnos = bms_add_member(*colnos, var->varattno);
 		return false;
 	}
-	if (IsA(node, Aggref))		/* do not descend into aggregate exprs */
+	/* do not descend into aggregate exprs */
+	if (IsA(node, Aggref) || IsA(node, GroupingFunc))
 		return false;
 	return expression_tree_walker(node, find_unaggregated_cols_walker,
 								  (void *) colnos);
@@ -950,7 +1041,7 @@ build_hash_table(AggState *aggstate)
 											  aggstate->hashfunctions,
 											  node->numGroups,
 											  entrysize,
-											  aggstate->aggcontext,
+											  aggstate->aggcontexts[0]->ecxt_per_tuple_memory,
 											  tmpmem);
 }
 
@@ -1061,7 +1152,7 @@ lookup_hash_entry(AggState *aggstate, TupleTableSlot *inputslot)
 	if (isnew)
 	{
 		/* initialize aggregates for new tuple group */
-		initialize_aggregates(aggstate, aggstate->peragg, entry->pergroup);
+		initialize_aggregates(aggstate, aggstate->peragg, entry->pergroup, 0);
 	}
 
 	return entry;
@@ -1083,6 +1174,8 @@ lookup_hash_entry(AggState *aggstate, TupleTableSlot *inputslot)
 TupleTableSlot *
 ExecAgg(AggState *node)
 {
+	TupleTableSlot *result;
+
 	/*
 	 * Check to see if we're still projecting out tuples from a previous agg
 	 * tuple (because there is a function-returning-set in the projection
@@ -1090,7 +1183,6 @@ ExecAgg(AggState *node)
 	 */
 	if (node->ss.ps.ps_TupFromTlist)
 	{
-		TupleTableSlot *result;
 		ExprDoneCond isDone;
 
 		result = ExecProject(node->ss.ps.ps_ProjInfo, &isDone);
@@ -1101,22 +1193,48 @@ ExecAgg(AggState *node)
 	}
 
 	/*
-	 * Exit if nothing left to do.  (We must do the ps_TupFromTlist check
-	 * first, because in some cases agg_done gets set before we emit the final
-	 * aggregate tuple, and we have to finish running SRFs for it.)
+	 * (We must do the ps_TupFromTlist check first, because in some cases
+	 * agg_done gets set before we emit the final aggregate tuple, and we have
+	 * to finish running SRFs for it.)
 	 */
-	if (node->agg_done)
-		return NULL;
+	if (!node->agg_done)
+	{
+		/* Dispatch based on strategy */
+		switch (((Agg *) node->ss.ps.plan)->aggstrategy)
+		{
+			case AGG_HASHED:
+				if (!node->table_filled)
+					agg_fill_hash_table(node);
+				result = agg_retrieve_hash_table(node);
+				break;
+			case AGG_CHAINED:
+				result = agg_retrieve_chained(node);
+				break;
+			default:
+				result = agg_retrieve_direct(node);
+				break;
+		}
+
+		if (!TupIsNull(result))
+			return result;
+	}
 
-	/* Dispatch based on strategy */
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+	/*
+	 * We've completed all locally computed projections; now we drain the
+	 * side channel of projections from chained nodes, if any.
+	 */
+	if (!node->chain_done)
 	{
-		if (!node->table_filled)
-			agg_fill_hash_table(node);
-		return agg_retrieve_hash_table(node);
+		Assert(node->chain_tuplestore);
+		result = node->ss.ps.ps_ResultTupleSlot;
+		ExecClearTuple(result);
+		if (tuplestore_gettupleslot(node->chain_tuplestore,
+									true, false, result))
+			return result;
+		node->chain_done = true;
 	}
-	else
-		return agg_retrieve_direct(node);
+
+	return NULL;
 }
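The pull-through/side-channel protocol that ExecAgg drains above can be sketched in miniature (all names invented; a plain array queue stands in for the tuplestore): the chained stage returns every input row unchanged while advancing its own aggregate, and the head returns its own results before draining the queue.

```c
/*
 * Standalone sketch (all names hypothetical) of the AGG_CHAINED side
 * channel: a chained stage passes each input row through unchanged while
 * advancing its own aggregate, and at end of input writes the result to a
 * side queue (standing in for the tuplestore).  The chain head returns its
 * own results first, then drains the side queue, mirroring chain_done.
 */
static const long src[] = {3, 1, 4, 1, 5};

static long side[8];			/* the "tuplestore" side channel */
static int	side_w = 0,
			side_r = 0;

static int	src_pos = 0;
static long chain_sum = 0;		/* chained stage's transition state */
static int	chain_emitted = 0;

static int
chained_next(long *out)
{
	if (src_pos >= (int) (sizeof(src) / sizeof(src[0])))
	{
		if (!chain_emitted)
		{
			side[side_w++] = chain_sum; /* emit to side channel, not caller */
			chain_emitted = 1;
		}
		return 0;
	}
	chain_sum += src[src_pos];
	*out = src[src_pos++];		/* pull-through: return input unchanged */
	return 1;
}

static int
head_next(long *out)
{
	static int	own_done = 0;

	if (!own_done)
	{
		long	v;
		long	count = 0;

		while (chained_next(&v))
			count++;			/* head's own aggregate: COUNT(*) */
		own_done = 1;
		*out = count;
		return 1;
	}
	if (side_r < side_w)
	{
		*out = side[side_r++];	/* drain side channel */
		return 1;
	}
	return 0;					/* chain_done */
}

static long res[4];

/* collect all rows the head produces; returns how many */
static int
run_chain_demo(void)
{
	static int	n = -1;
	long		v;

	if (n < 0)
	{
		n = 0;
		while (head_next(&v))
			res[n++] = v;
	}
	return n;
}
```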
 
 /*
@@ -1136,6 +1254,12 @@ agg_retrieve_direct(AggState *aggstate)
 	TupleTableSlot *outerslot;
 	TupleTableSlot *firstSlot;
 	int			aggno;
+	bool		hasGroupingSets = aggstate->numsets > 0;
+	int			numGroupingSets = Max(aggstate->numsets, 1);
+	int			currentSet = 0;
+	int			nextSetSize = 0;
+	int			numReset = 1;
+	int			i;
 
 	/*
 	 * get state info from node
@@ -1154,35 +1278,15 @@ agg_retrieve_direct(AggState *aggstate)
 	/*
 	 * We loop retrieving groups until we find one matching
 	 * aggstate->ss.ps.qual
+	 *
+	 * For grouping sets, we have the invariant that aggstate->projected_set is
+	 * either -1 (initial call) or the index (starting from 0) in gset_lengths
+	 * for the group we just completed (either by projecting a row or by
+	 * discarding it in the qual).
 	 */
 	while (!aggstate->agg_done)
 	{
 		/*
-		 * If we don't already have the first tuple of the new group, fetch it
-		 * from the outer plan.
-		 */
-		if (aggstate->grp_firstTuple == NULL)
-		{
-			outerslot = ExecProcNode(outerPlan);
-			if (!TupIsNull(outerslot))
-			{
-				/*
-				 * Make a copy of the first input tuple; we will use this for
-				 * comparisons (in group mode) and for projection.
-				 */
-				aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
-			}
-			else
-			{
-				/* outer plan produced no tuples at all */
-				aggstate->agg_done = true;
-				/* If we are grouping, we should produce no tuples too */
-				if (node->aggstrategy != AGG_PLAIN)
-					return NULL;
-			}
-		}
-
-		/*
 		 * Clear the per-output-tuple context for each group, as well as
 		 * aggcontext (which contains any pass-by-ref transvalues of the old
 		 * group).  We also clear any child contexts of the aggcontext; some
@@ -1195,90 +1299,223 @@ agg_retrieve_direct(AggState *aggstate)
 		 */
 		ReScanExprContext(econtext);
 
-		MemoryContextResetAndDeleteChildren(aggstate->aggcontext);
+		/*
+		 * Determine how many grouping sets need to be reset at this boundary.
+		 */
+		if (aggstate->projected_set >= 0 && aggstate->projected_set < numGroupingSets)
+			numReset = aggstate->projected_set + 1;
+		else
+			numReset = numGroupingSets;
+
+		for (i = 0; i < numReset; i++)
+		{
+			ReScanExprContext(aggstate->aggcontexts[i]);
+			MemoryContextDeleteChildren(aggstate->aggcontexts[i]->ecxt_per_tuple_memory);
+		}
+
+		/* Check if input is complete and there are no more groups to project. */
+		if (aggstate->input_done == true
+			&& aggstate->projected_set >= (numGroupingSets - 1))
+		{
+			aggstate->agg_done = true;
+			break;
+		}
 
 		/*
-		 * Initialize working state for a new input tuple group
+		 * Get the number of columns in the next grouping set after the last
+		 * projected one (if any). This is the number of columns to compare to
+		 * see if we reached the boundary of that set too.
 		 */
-		initialize_aggregates(aggstate, peragg, pergroup);
+		if (aggstate->projected_set >= 0 && aggstate->projected_set < (numGroupingSets - 1))
+			nextSetSize = aggstate->gset_lengths[aggstate->projected_set + 1];
+		else
+			nextSetSize = 0;
 
-		if (aggstate->grp_firstTuple != NULL)
+		/*-
+		 * If a subgroup for the current grouping set is present, project it.
+		 *
+		 * We have a new group if:
+		 *  - we're out of input but haven't projected all grouping sets
+		 *    (checked above)
+		 * OR
+		 *    - we already projected a row that wasn't from the last grouping
+		 *      set
+		 *    AND
+		 *    - the next grouping set has at least one grouping column (since
+		 *      empty grouping sets project only once input is exhausted)
+		 *    AND
+		 *    - the previous and pending rows differ on the grouping columns
+		 *      of the next grouping set
+		 */
+		if (aggstate->input_done
+			|| (node->aggstrategy == AGG_SORTED
+				&& aggstate->projected_set != -1
+				&& aggstate->projected_set < (numGroupingSets - 1)
+				&& nextSetSize > 0
+				&& !execTuplesMatch(econtext->ecxt_outertuple,
+									tmpcontext->ecxt_outertuple,
+									nextSetSize,
+									node->grpColIdx,
+									aggstate->eqfunctions,
+									tmpcontext->ecxt_per_tuple_memory)))
+		{
+			aggstate->projected_set += 1;
+
+			Assert(aggstate->projected_set < numGroupingSets);
+			Assert(nextSetSize > 0 || aggstate->input_done);
+		}
+		else
 		{
 			/*
-			 * Store the copied first input tuple in the tuple table slot
-			 * reserved for it.  The tuple will be deleted when it is cleared
-			 * from the slot.
+			 * We no longer care what group we just projected; the next
+			 * projection will always be the first (or only) grouping set
+			 * (unless the input proves to be empty).
 			 */
-			ExecStoreTuple(aggstate->grp_firstTuple,
-						   firstSlot,
-						   InvalidBuffer,
-						   true);
-			aggstate->grp_firstTuple = NULL;	/* don't keep two pointers */
-
-			/* set up for first advance_aggregates call */
-			tmpcontext->ecxt_outertuple = firstSlot;
+			aggstate->projected_set = 0;
 
 			/*
-			 * Process each outer-plan tuple, and then fetch the next one,
-			 * until we exhaust the outer plan or cross a group boundary.
+			 * If we don't already have the first tuple of the new group, fetch
+			 * it from the outer plan.
 			 */
-			for (;;)
+			if (aggstate->grp_firstTuple == NULL)
 			{
-				advance_aggregates(aggstate, pergroup);
-
-				/* Reset per-input-tuple context after each tuple */
-				ResetExprContext(tmpcontext);
-
 				outerslot = ExecProcNode(outerPlan);
-				if (TupIsNull(outerslot))
+				if (!TupIsNull(outerslot))
 				{
-					/* no more outer-plan tuples available */
-					aggstate->agg_done = true;
-					break;
+					/*
+					 * Make a copy of the first input tuple; we will use this for
+					 * comparisons (in group mode) and for projection.
+					 */
+					aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
 				}
-				/* set up for next advance_aggregates call */
-				tmpcontext->ecxt_outertuple = outerslot;
-
-				/*
-				 * If we are grouping, check whether we've crossed a group
-				 * boundary.
-				 */
-				if (node->aggstrategy == AGG_SORTED)
+				else
 				{
-					if (!execTuplesMatch(firstSlot,
-										 outerslot,
-										 node->numCols, node->grpColIdx,
-										 aggstate->eqfunctions,
-										 tmpcontext->ecxt_per_tuple_memory))
+					/* outer plan produced no tuples at all */
+					if (hasGroupingSets)
 					{
 						/*
-						 * Save the first input tuple of the next group.
+						 * If there was no input at all, we need to project
+						 * rows only if there are grouping sets of size 0.
+						 * Note that this implies that there can't be any
+						 * references to ungrouped Vars, which would otherwise
+						 * cause issues with the empty output slot.
 						 */
-						aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
-						break;
+						aggstate->input_done = true;
+
+						while (aggstate->gset_lengths[aggstate->projected_set] > 0)
+						{
+							aggstate->projected_set += 1;
+							if (aggstate->projected_set >= numGroupingSets)
+							{
+								aggstate->agg_done = true;
+								return NULL;
+							}
+						}
+					}
+					else
+					{
+						aggstate->agg_done = true;
+						/* If we are grouping, we should produce no tuples too */
+						if (node->aggstrategy != AGG_PLAIN)
+							return NULL;
+					}
+				}
+			}
+
+			/*
+			 * Initialize working state for a new input tuple group.
+			 */
+			initialize_aggregates(aggstate, peragg, pergroup, numReset);
+
+			if (aggstate->grp_firstTuple != NULL)
+			{
+				/*
+				 * Store the copied first input tuple in the tuple table slot
+				 * reserved for it.  The tuple will be deleted when it is cleared
+				 * from the slot.
+				 */
+				ExecStoreTuple(aggstate->grp_firstTuple,
+							   firstSlot,
+							   InvalidBuffer,
+							   true);
+				aggstate->grp_firstTuple = NULL;	/* don't keep two pointers */
+
+				/* set up for first advance_aggregates call */
+				tmpcontext->ecxt_outertuple = firstSlot;
+
+				/*
+				 * Process each outer-plan tuple, and then fetch the next one,
+				 * until we exhaust the outer plan or cross a group boundary.
+				 */
+				for (;;)
+				{
+					advance_aggregates(aggstate, pergroup);
+
+					/* Reset per-input-tuple context after each tuple */
+					ResetExprContext(tmpcontext);
+
+					outerslot = ExecProcNode(outerPlan);
+					if (TupIsNull(outerslot))
+					{
+						/* no more outer-plan tuples available */
+						if (hasGroupingSets)
+						{
+							aggstate->input_done = true;
+							break;
+						}
+						else
+						{
+							aggstate->agg_done = true;
+							break;
+						}
+					}
+					/* set up for next advance_aggregates call */
+					tmpcontext->ecxt_outertuple = outerslot;
+
+					/*
+					 * If we are grouping, check whether we've crossed a group
+					 * boundary.
+					 */
+					if (node->aggstrategy == AGG_SORTED)
+					{
+						if (!execTuplesMatch(firstSlot,
+											 outerslot,
+											 node->numCols,
+											 node->grpColIdx,
+											 aggstate->eqfunctions,
+											 tmpcontext->ecxt_per_tuple_memory))
+						{
+							aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
+							break;
+						}
 					}
 				}
 			}
+
+			/*
+			 * Use the representative input tuple for any references to
+			 * non-aggregated input columns in aggregate direct args, the node
+			 * qual, and the tlist.  (If we are not grouping, and there are no
+			 * input rows at all, we will come here with an empty firstSlot ...
+			 * but if not grouping, there can't be any references to
+			 * non-aggregated input columns, so no problem.)
+			 */
+			econtext->ecxt_outertuple = firstSlot;
 		}
 
-		/*
-		 * Use the representative input tuple for any references to
-		 * non-aggregated input columns in aggregate direct args, the node
-		 * qual, and the tlist.  (If we are not grouping, and there are no
-		 * input rows at all, we will come here with an empty firstSlot ...
-		 * but if not grouping, there can't be any references to
-		 * non-aggregated input columns, so no problem.)
-		 */
-		econtext->ecxt_outertuple = firstSlot;
+		Assert(aggstate->projected_set >= 0);
+
+		aggstate->current_set = currentSet = aggstate->projected_set;
+
+		if (hasGroupingSets)
+			econtext->grouped_cols = aggstate->grouped_cols[currentSet];
 
-		/*
-		 * Done scanning input tuple group. Finalize each aggregate
-		 * calculation, and stash results in the per-output-tuple context.
-		 */
 		for (aggno = 0; aggno < aggstate->numaggs; aggno++)
 		{
 			AggStatePerAgg peraggstate = &peragg[aggno];
-			AggStatePerGroup pergroupstate = &pergroup[aggno];
+			AggStatePerGroup pergroupstate;
+
+			pergroupstate = &pergroup[aggno + (currentSet * (aggstate->numaggs))];
 
 			if (peraggstate->numSortCols > 0)
 			{
@@ -1326,6 +1563,175 @@ agg_retrieve_direct(AggState *aggstate)
 	return NULL;
 }
 
+
+/*
+ * ExecAgg for chained case (pullthrough mode)
+ */
+static TupleTableSlot *
+agg_retrieve_chained(AggState *aggstate)
+{
+	Agg		   *node = (Agg *) aggstate->ss.ps.plan;
+	ExprContext *econtext = aggstate->ss.ps.ps_ExprContext;
+	ExprContext *tmpcontext = aggstate->tmpcontext;
+	Datum	   *aggvalues = econtext->ecxt_aggvalues;
+	bool	   *aggnulls = econtext->ecxt_aggnulls;
+	AggStatePerAgg peragg = aggstate->peragg;
+	AggStatePerGroup pergroup = aggstate->pergroup;
+	TupleTableSlot *outerslot;
+	TupleTableSlot *firstSlot = aggstate->ss.ss_ScanTupleSlot;
+	int			   aggno;
+	int            numGroupingSets = Max(aggstate->numsets, 1);
+	int            currentSet = 0;
+
+	/*
+	 * The invariants here are:
+	 *
+	 *  - when called, we've already projected every result that might have
+	 * been generated by previous rows, and if this is not the first row, then
+	 * firstSlot holds the representative input row.
+	 *
+	 *  - we must pull the outer plan exactly once and return that tuple. If
+	 * the outer plan ends, we project whatever needs projecting.
+	 */
+
+	outerslot = ExecProcNode(outerPlanState(aggstate));
+
+	/*
+	 * If no representative tuple has been stored yet and the input is
+	 * empty, there is nothing to do.
+	 */
+
+	if (TupIsNull(firstSlot) && TupIsNull(outerslot))
+	{
+		aggstate->agg_done = true;
+		return outerslot;
+	}
+
+	/*
+	 * See if we need to project anything. (We don't need to worry about
+	 * grouping sets of size 0; the planner doesn't give us those.)
+	 */
+
+	econtext->ecxt_outertuple = firstSlot;
+
+	while (!TupIsNull(firstSlot)
+		   && (TupIsNull(outerslot)
+			   || !execTuplesMatch(firstSlot,
+								   outerslot,
+								   aggstate->gset_lengths[currentSet],
+								   node->grpColIdx,
+								   aggstate->eqfunctions,
+								   tmpcontext->ecxt_per_tuple_memory)))
+	{
+		aggstate->current_set = aggstate->projected_set = currentSet;
+
+		econtext->grouped_cols = aggstate->grouped_cols[currentSet];
+
+		for (aggno = 0; aggno < aggstate->numaggs; aggno++)
+		{
+			AggStatePerAgg peraggstate = &peragg[aggno];
+			AggStatePerGroup pergroupstate;
+
+			pergroupstate = &pergroup[aggno + (currentSet * (aggstate->numaggs))];
+
+			if (peraggstate->numSortCols > 0)
+			{
+				if (peraggstate->numInputs == 1)
+					process_ordered_aggregate_single(aggstate,
+													 peraggstate,
+													 pergroupstate);
+				else
+					process_ordered_aggregate_multi(aggstate,
+													peraggstate,
+													pergroupstate);
+			}
+
+			finalize_aggregate(aggstate, peraggstate, pergroupstate,
+							   &aggvalues[aggno], &aggnulls[aggno]);
+		}
+
+		/*
+		 * Check the qual (HAVING clause); if the group does not match, ignore
+		 * it.
+		 */
+		if (ExecQual(aggstate->ss.ps.qual, econtext, false))
+		{
+			/*
+			 * Form a projection tuple using the aggregate results
+			 * and the representative input tuple.
+			 */
+			TupleTableSlot *result;
+			ExprDoneCond isDone;
+
+			do
+			{
+				result = ExecProject(aggstate->ss.ps.ps_ProjInfo, &isDone);
+
+				if (isDone != ExprEndResult)
+				{
+					tuplestore_puttupleslot(aggstate->chain_tuplestore,
+											result);
+				}
+			}
+			while (isDone == ExprMultipleResult);
+		}
+		else
+			InstrCountFiltered1(aggstate, 1);
+
+		ReScanExprContext(tmpcontext);
+		ReScanExprContext(econtext);
+		ReScanExprContext(aggstate->aggcontexts[currentSet]);
+		MemoryContextDeleteChildren(aggstate->aggcontexts[currentSet]->ecxt_per_tuple_memory);
+		if (++currentSet >= numGroupingSets)
+			break;
+	}
+
+	if (TupIsNull(outerslot))
+	{
+		aggstate->agg_done = true;
+
+		/*
+		 * We're out of input, so the calling node has all the data it needs
+		 * and (if it's a Sort) is about to sort it. We preemptively request a
+		 * rescan of our input plan here, so that Sort nodes containing data
+		 * that is no longer needed will free their memory.  The intent is to
+		 * bound the peak memory requirement for the whole chain to
+		 * 2*work_mem if REWIND was not requested, or 3*work_mem if REWIND was
+		 * requested and we had to supply a Sort node for the original data
+		 * source plan.
+		 */
+
+		ExecReScan(outerPlanState(aggstate));
+
+		return NULL;
+	}
+
+	/*
+	 * If this is the first tuple, store it and initialize everything.
+	 * Otherwise re-init any aggregates we projected above.
+	 */
+
+	if (TupIsNull(firstSlot))
+	{
+		ExecCopySlot(firstSlot, outerslot);
+		initialize_aggregates(aggstate, peragg, pergroup, numGroupingSets);
+	}
+	else if (currentSet > 0)
+	{
+		ExecCopySlot(firstSlot, outerslot);
+		initialize_aggregates(aggstate, peragg, pergroup, currentSet);
+	}
+
+	tmpcontext->ecxt_outertuple = outerslot;
+
+	/* Actually accumulate the current tuple. */
+	advance_aggregates(aggstate, pergroup);
+
+	/* Reset per-input-tuple context after each tuple */
+	ResetExprContext(tmpcontext);
+
+	return outerslot;
+}
+
 /*
  * ExecAgg for hashed case: phase 1, read input and build hash table
  */
@@ -1493,12 +1899,17 @@ AggState *
 ExecInitAgg(Agg *node, EState *estate, int eflags)
 {
 	AggState   *aggstate;
+	AggState   *save_chain_head = NULL;
 	AggStatePerAgg peragg;
 	Plan	   *outerPlan;
 	ExprContext *econtext;
 	int			numaggs,
 				aggno;
 	ListCell   *l;
+	int			numGroupingSets = 1;
+	int			currentsortno = 0;
+	int			i = 0;
+	int			j = 0;
 
 	/* check for unsupported flags */
 	Assert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK)));
@@ -1512,40 +1923,78 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 
 	aggstate->aggs = NIL;
 	aggstate->numaggs = 0;
+	aggstate->numsets = 0;
 	aggstate->eqfunctions = NULL;
 	aggstate->hashfunctions = NULL;
+	aggstate->projected_set = -1;
+	aggstate->current_set = 0;
 	aggstate->peragg = NULL;
 	aggstate->curperagg = NULL;
 	aggstate->agg_done = false;
+	aggstate->input_done = false;
+	aggstate->chain_done = true;
 	aggstate->pergroup = NULL;
 	aggstate->grp_firstTuple = NULL;
 	aggstate->hashtable = NULL;
+	aggstate->chain_depth = 0;
+	aggstate->chain_rescan = 0;
+	aggstate->chain_eflags = eflags & EXEC_FLAG_REWIND;
+	aggstate->chain_top = false;
+	aggstate->chain_head = NULL;
+	aggstate->chain_tuplestore = NULL;
+
+	if (node->groupingSets)
+	{
+		Assert(node->aggstrategy != AGG_HASHED);
+
+		numGroupingSets = list_length(node->groupingSets);
+		aggstate->numsets = numGroupingSets;
+		aggstate->gset_lengths = palloc(numGroupingSets * sizeof(int));
+		aggstate->grouped_cols = palloc(numGroupingSets * sizeof(Bitmapset *));
+
+		i = 0;
+		foreach(l, node->groupingSets)
+		{
+			int current_length = list_length(lfirst(l));
+			Bitmapset *cols = NULL;
+
+			/* planner forces this to be correct */
+			for (j = 0; j < current_length; ++j)
+				cols = bms_add_member(cols, node->grpColIdx[j]);
+
+			aggstate->grouped_cols[i] = cols;
+			aggstate->gset_lengths[i] = current_length;
+			++i;
+		}
+	}
+
+	aggstate->aggcontexts = (ExprContext **)
+		palloc0(sizeof(ExprContext *) * numGroupingSets);
 
 	/*
-	 * Create expression contexts.  We need two, one for per-input-tuple
-	 * processing and one for per-output-tuple processing.  We cheat a little
-	 * by using ExecAssignExprContext() to build both.
+	 * Create expression contexts.  We need three or more, one for
+	 * per-input-tuple processing, one for per-output-tuple processing, and one
+	 * for each grouping set.  The per-tuple memory context of the
+	 * per-grouping-set ExprContexts (aggcontexts) replaces the standalone
+	 * memory context formerly used to hold transition values.  We cheat a
+	 * little by using ExecAssignExprContext() to build all of them.
+	 *
+	 * NOTE: the details of what is stored in aggcontexts and what is stored in
+	 * the regular per-query memory context are driven by a simple decision: we
+	 * want to reset the aggcontext at group boundaries (if not hashing) and in
+	 * ExecReScanAgg to recover no-longer-wanted space.
 	 */
 	ExecAssignExprContext(estate, &aggstate->ss.ps);
 	aggstate->tmpcontext = aggstate->ss.ps.ps_ExprContext;
+
+	for (i = 0; i < numGroupingSets; ++i)
+	{
+		ExecAssignExprContext(estate, &aggstate->ss.ps);
+		aggstate->aggcontexts[i] = aggstate->ss.ps.ps_ExprContext;
+	}
+
 	ExecAssignExprContext(estate, &aggstate->ss.ps);
 
 	/*
-	 * We also need a long-lived memory context for holding hashtable data
-	 * structures and transition values.  NOTE: the details of what is stored
-	 * in aggcontext and what is stored in the regular per-query memory
-	 * context are driven by a simple decision: we want to reset the
-	 * aggcontext at group boundaries (if not hashing) and in ExecReScanAgg to
-	 * recover no-longer-wanted space.
-	 */
-	aggstate->aggcontext =
-		AllocSetContextCreate(CurrentMemoryContext,
-							  "AggContext",
-							  ALLOCSET_DEFAULT_MINSIZE,
-							  ALLOCSET_DEFAULT_INITSIZE,
-							  ALLOCSET_DEFAULT_MAXSIZE);
-
-	/*
 	 * tuple table initialization
 	 */
 	ExecInitScanTupleSlot(estate, &aggstate->ss);
@@ -1561,24 +2010,78 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	 * that is true, we don't need to worry about evaluating the aggs in any
 	 * particular order.
 	 */
-	aggstate->ss.ps.targetlist = (List *)
-		ExecInitExpr((Expr *) node->plan.targetlist,
-					 (PlanState *) aggstate);
-	aggstate->ss.ps.qual = (List *)
-		ExecInitExpr((Expr *) node->plan.qual,
-					 (PlanState *) aggstate);
+	if (node->aggstrategy == AGG_CHAINED)
+	{
+		AggState   *chain_head = estate->agg_chain_head;
+		Agg		   *chain_head_plan;
+
+		Assert(chain_head);
+
+		aggstate->chain_head = chain_head;
+		chain_head->chain_depth++;
+
+		chain_head_plan = (Agg *) chain_head->ss.ps.plan;
+
+		/*
+		 * If we reached the originally declared depth, we must be the "top"
+		 * (furthest from plan root) node in the chain.
+		 */
+		if (chain_head_plan->chain_depth == chain_head->chain_depth)
+			aggstate->chain_top = true;
+
+		/*
+		 * Snarf the real targetlist and qual from the chain head node
+		 */
+		aggstate->ss.ps.targetlist = (List *)
+			ExecInitExpr((Expr *) chain_head_plan->plan.targetlist,
+						 (PlanState *) aggstate);
+		aggstate->ss.ps.qual = (List *)
+			ExecInitExpr((Expr *) chain_head_plan->plan.qual,
+						 (PlanState *) aggstate);
+	}
+	else
+	{
+		aggstate->ss.ps.targetlist = (List *)
+			ExecInitExpr((Expr *) node->plan.targetlist,
+						 (PlanState *) aggstate);
+		aggstate->ss.ps.qual = (List *)
+			ExecInitExpr((Expr *) node->plan.qual,
+						 (PlanState *) aggstate);
+	}
+
+	if (node->chain_depth > 0)
+	{
+		save_chain_head = estate->agg_chain_head;
+		estate->agg_chain_head = aggstate;
+		aggstate->chain_tuplestore = tuplestore_begin_heap(false, false, work_mem);
+		aggstate->chain_done = false;
+	}
 
 	/*
-	 * initialize child nodes
+	 * Initialize child nodes.
 	 *
 	 * If we are doing a hashed aggregation then the child plan does not need
 	 * to handle REWIND efficiently; see ExecReScanAgg.
+	 *
+	 * If we have more than one associated ChainAggregate node, then we turn
+	 * off REWIND and restore it in the chain top, so that the intermediate
+	 * Sort nodes will discard their data on rescan.  This lets us put an upper
+	 * bound on the memory usage, even when we have a long chain of sorts (at
+	 * the cost of having to re-sort on rewind, which is why we don't do it
+	 * for only one node where no memory would be saved).
 	 */
-	if (node->aggstrategy == AGG_HASHED)
+	if (aggstate->chain_top)
+		eflags |= aggstate->chain_head->chain_eflags;
+	else if (node->aggstrategy == AGG_HASHED || node->chain_depth > 1)
 		eflags &= ~EXEC_FLAG_REWIND;
 	outerPlan = outerPlan(node);
 	outerPlanState(aggstate) = ExecInitNode(outerPlan, estate, eflags);
 
+	if (node->chain_depth > 0)
+	{
+		estate->agg_chain_head = save_chain_head;
+	}
+
 	/*
 	 * initialize source tuple type.
 	 */
@@ -1587,8 +2090,35 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	/*
 	 * Initialize result tuple type and projection info.
 	 */
-	ExecAssignResultTypeFromTL(&aggstate->ss.ps);
-	ExecAssignProjectionInfo(&aggstate->ss.ps, NULL);
+	if (node->aggstrategy == AGG_CHAINED)
+	{
+		PlanState  *head_ps = &aggstate->chain_head->ss.ps;
+		bool		hasoid;
+
+		/*
+		 * We must calculate this the same way that the chain head does,
+		 * regardless of intermediate nodes, for consistency.
+		 */
+		if (!ExecContextForcesOids(head_ps, &hasoid))
+			hasoid = false;
+
+		ExecAssignResultType(&aggstate->ss.ps, ExecGetScanType(&aggstate->ss));
+		ExecSetSlotDescriptor(aggstate->hashslot,
+							  ExecTypeFromTL(head_ps->plan->targetlist, hasoid));
+		aggstate->ss.ps.ps_ProjInfo =
+			ExecBuildProjectionInfo(aggstate->ss.ps.targetlist,
+									aggstate->ss.ps.ps_ExprContext,
+									aggstate->hashslot,
+									NULL);
+
+		aggstate->chain_tuplestore = aggstate->chain_head->chain_tuplestore;
+		Assert(aggstate->chain_tuplestore);
+	}
+	else
+	{
+		ExecAssignResultTypeFromTL(&aggstate->ss.ps);
+		ExecAssignProjectionInfo(&aggstate->ss.ps, NULL);
+	}
 
 	aggstate->ss.ps.ps_TupFromTlist = false;
 
@@ -1649,7 +2179,10 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	{
 		AggStatePerGroup pergroup;
 
-		pergroup = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData) * numaggs);
+		pergroup = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData)
+											  * numaggs
+											  * numGroupingSets);
+
 		aggstate->pergroup = pergroup;
 	}
 
@@ -1712,7 +2245,10 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 		/* Begin filling in the peraggstate data */
 		peraggstate->aggrefstate = aggrefstate;
 		peraggstate->aggref = aggref;
-		peraggstate->sortstate = NULL;
+		peraggstate->sortstates = (Tuplesortstate **)
+			palloc0(sizeof(Tuplesortstate *) * numGroupingSets);
+
+		for (currentsortno = 0; currentsortno < numGroupingSets; currentsortno++)
+			peraggstate->sortstates[currentsortno] = NULL;
 
 		/* Fetch the pg_aggregate row */
 		aggTuple = SearchSysCache1(AGGFNOID,
@@ -2020,31 +2556,38 @@ ExecEndAgg(AggState *node)
 {
 	PlanState  *outerPlan;
 	int			aggno;
+	int			numGroupingSets = Max(node->numsets, 1);
+	int			setno;
 
 	/* Make sure we have closed any open tuplesorts */
 	for (aggno = 0; aggno < node->numaggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &node->peragg[aggno];
 
-		if (peraggstate->sortstate)
-			tuplesort_end(peraggstate->sortstate);
+		for (setno = 0; setno < numGroupingSets; setno++)
+		{
+			if (peraggstate->sortstates[setno])
+				tuplesort_end(peraggstate->sortstates[setno]);
+		}
 	}
 
 	/* And ensure any agg shutdown callbacks have been called */
-	ReScanExprContext(node->ss.ps.ps_ExprContext);
+	for (setno = 0; setno < numGroupingSets; setno++)
+		ReScanExprContext(node->aggcontexts[setno]);
+
+	if (node->chain_tuplestore && node->chain_depth > 0)
+		tuplestore_end(node->chain_tuplestore);
 
 	/*
-	 * Free both the expr contexts.
+	 * We don't actually free any ExprContexts here (see comment in
+	 * ExecFreeExprContext), just unlinking the output one from the plan node
+	 * suffices.
 	 */
 	ExecFreeExprContext(&node->ss.ps);
-	node->ss.ps.ps_ExprContext = node->tmpcontext;
-	ExecFreeExprContext(&node->ss.ps);
 
 	/* clean up tuple table */
 	ExecClearTuple(node->ss.ss_ScanTupleSlot);
 
-	MemoryContextDelete(node->aggcontext);
-
 	outerPlan = outerPlanState(node);
 	ExecEndNode(outerPlan);
 }
@@ -2053,13 +2596,16 @@ void
 ExecReScanAgg(AggState *node)
 {
 	ExprContext *econtext = node->ss.ps.ps_ExprContext;
+	Agg		   *aggnode = (Agg *) node->ss.ps.plan;
 	int			aggno;
+	int         numGroupingSets = Max(node->numsets, 1);
+	int         setno;
 
 	node->agg_done = false;
 
 	node->ss.ps.ps_TupFromTlist = false;
 
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+	if (aggnode->aggstrategy == AGG_HASHED)
 	{
 		/*
 		 * In the hashed case, if we haven't yet built the hash table then we
@@ -2085,14 +2631,35 @@ ExecReScanAgg(AggState *node)
 	/* Make sure we have closed any open tuplesorts */
 	for (aggno = 0; aggno < node->numaggs; aggno++)
 	{
-		AggStatePerAgg peraggstate = &node->peragg[aggno];
+		for (setno = 0; setno < numGroupingSets; setno++)
+		{
+			AggStatePerAgg peraggstate = &node->peragg[aggno];
 
-		if (peraggstate->sortstate)
-			tuplesort_end(peraggstate->sortstate);
-		peraggstate->sortstate = NULL;
+			if (peraggstate->sortstates[setno])
+			{
+				tuplesort_end(peraggstate->sortstates[setno]);
+				peraggstate->sortstates[setno] = NULL;
+			}
+		}
 	}
 
-	/* We don't need to ReScanExprContext here; ExecReScan already did it */
+	/*
+	 * We don't need to ReScanExprContext the output tuple context here;
+	 * ExecReScan already did it. But we do need to reset our per-grouping-set
+	 * contexts, which may have transvalues stored in them.
+	 *
+	 * Note that with AGG_HASHED, the hash table is allocated in a sub-context
+	 * of the aggcontext. We're going to rebuild the hash table from scratch,
+	 * so we need to use MemoryContextDeleteChildren() to avoid leaking the old
+	 * hash table's memory context header. (ReScanExprContext does the actual
+	 * reset, but it doesn't delete child contexts.)
+	 */
+
+	for (setno = 0; setno < numGroupingSets; setno++)
+	{
+		ReScanExprContext(node->aggcontexts[setno]);
+		MemoryContextDeleteChildren(node->aggcontexts[setno]->ecxt_per_tuple_memory);
+	}
 
 	/* Release first tuple of group, if we have made a copy */
 	if (node->grp_firstTuple != NULL)
@@ -2100,21 +2667,13 @@ ExecReScanAgg(AggState *node)
 		heap_freetuple(node->grp_firstTuple);
 		node->grp_firstTuple = NULL;
 	}
+	ExecClearTuple(node->ss.ss_ScanTupleSlot);
 
 	/* Forget current agg values */
 	MemSet(econtext->ecxt_aggvalues, 0, sizeof(Datum) * node->numaggs);
 	MemSet(econtext->ecxt_aggnulls, 0, sizeof(bool) * node->numaggs);
 
-	/*
-	 * Release all temp storage. Note that with AGG_HASHED, the hash table is
-	 * allocated in a sub-context of the aggcontext. We're going to rebuild
-	 * the hash table from scratch, so we need to use
-	 * MemoryContextResetAndDeleteChildren() to avoid leaking the old hash
-	 * table's memory context header.
-	 */
-	MemoryContextResetAndDeleteChildren(node->aggcontext);
-
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+	if (aggnode->aggstrategy == AGG_HASHED)
 	{
 		/* Rebuild an empty hash table */
 		build_hash_table(node);
@@ -2126,15 +2685,54 @@ ExecReScanAgg(AggState *node)
 		 * Reset the per-group state (in particular, mark transvalues null)
 		 */
 		MemSet(node->pergroup, 0,
-			   sizeof(AggStatePerGroupData) * node->numaggs);
+			   sizeof(AggStatePerGroupData) * node->numaggs * numGroupingSets);
+
+		node->input_done = false;
 	}
 
 	/*
-	 * if chgParam of subnode is not null then plan will be re-scanned by
-	 * first ExecProcNode.
+	 * If we're in a chain, let the chain head know whether we
+	 * rescanned.  (The count is meaningless if the rescan was triggered by
+	 * chgParam, but the chain head only consults it when rescanning
+	 * explicitly, i.e. when chgParam is empty.)
+	 */
+
+	if (aggnode->aggstrategy == AGG_CHAINED)
+		node->chain_head->chain_rescan++;
+
+	/*
+	 * If we're a chain head, we reset the tuplestore if parameters changed,
+	 * and let subplans repopulate it.
+	 *
+	 * If we're a chain head and the subplan parameters did NOT change, then
+	 * whether we need to reset the tuplestore depends on whether anything
+	 * (specifically the Sort nodes) protects the child ChainAggs from rescan.
+	 * Since this is hard to know in advance, we have the ChainAggs signal us
+	 * as to whether the reset is needed.  Since we're preempting the rescan
+	 * in some cases, we only check whether any ChainAgg node was reached in
+	 * the rescan; the others may have already been reset.
 	 */
-	if (node->ss.ps.lefttree->chgParam == NULL)
+	if (aggnode->chain_depth > 0)
+	{
+		if (node->ss.ps.lefttree->chgParam)
+			tuplestore_clear(node->chain_tuplestore);
+		else
+		{
+			node->chain_rescan = 0;
+
+			ExecReScan(node->ss.ps.lefttree);
+
+			if (node->chain_rescan > 0)
+				tuplestore_clear(node->chain_tuplestore);
+			else
+				tuplestore_rescan(node->chain_tuplestore);
+		}
+		node->chain_done = false;
+	}
+	else if (node->ss.ps.lefttree->chgParam == NULL)
+	{
 		ExecReScan(node->ss.ps.lefttree);
+	}
 }
 
 
@@ -2154,8 +2752,11 @@ ExecReScanAgg(AggState *node)
  * values could conceivably appear in future.)
  *
  * If aggcontext isn't NULL, the function also stores at *aggcontext the
- * identity of the memory context that aggregate transition values are
- * being stored in.
+ * identity of the memory context that aggregate transition values are being
+ * stored in.  Note that the same aggregate call site (flinfo) may be called
+ * interleaved on different transition values in different contexts, so it's
+ * not kosher to cache aggcontext under fn_extra.  It is, however, kosher to
+ * cache it in the transvalue itself (for internal-type transvalues).
  */
 int
 AggCheckCallContext(FunctionCallInfo fcinfo, MemoryContext *aggcontext)
@@ -2163,7 +2764,11 @@ AggCheckCallContext(FunctionCallInfo fcinfo, MemoryContext *aggcontext)
 	if (fcinfo->context && IsA(fcinfo->context, AggState))
 	{
 		if (aggcontext)
-			*aggcontext = ((AggState *) fcinfo->context)->aggcontext;
+		{
+			AggState    *aggstate = ((AggState *) fcinfo->context);
+			ExprContext *cxt  = aggstate->aggcontexts[aggstate->current_set];
+			*aggcontext = cxt->ecxt_per_tuple_memory;
+		}
 		return AGG_CONTEXT_AGGREGATE;
 	}
 	if (fcinfo->context && IsA(fcinfo->context, WindowAggState))
@@ -2247,8 +2852,9 @@ AggRegisterCallback(FunctionCallInfo fcinfo,
 	if (fcinfo->context && IsA(fcinfo->context, AggState))
 	{
 		AggState   *aggstate = (AggState *) fcinfo->context;
+		ExprContext *cxt  = aggstate->aggcontexts[aggstate->current_set];
 
-		RegisterExprContextCallback(aggstate->ss.ps.ps_ExprContext, func, arg);
+		RegisterExprContextCallback(cxt, func, arg);
 
 		return;
 	}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index f1a24f5..a9c679d 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -802,6 +802,7 @@ _copyAgg(const Agg *from)
 	CopyPlanFields((const Plan *) from, (Plan *) newnode);
 
 	COPY_SCALAR_FIELD(aggstrategy);
+	COPY_SCALAR_FIELD(chain_depth);
 	COPY_SCALAR_FIELD(numCols);
 	if (from->numCols > 0)
 	{
@@ -809,6 +810,7 @@ _copyAgg(const Agg *from)
 		COPY_POINTER_FIELD(grpOperators, from->numCols * sizeof(Oid));
 	}
 	COPY_SCALAR_FIELD(numGroups);
+	COPY_NODE_FIELD(groupingSets);
 
 	return newnode;
 }
@@ -1095,6 +1097,27 @@ _copyVar(const Var *from)
 }
 
 /*
+ * _copyGroupedVar
+ */
+static GroupedVar *
+_copyGroupedVar(const GroupedVar *from)
+{
+	GroupedVar		   *newnode = makeNode(GroupedVar);
+
+	COPY_SCALAR_FIELD(varno);
+	COPY_SCALAR_FIELD(varattno);
+	COPY_SCALAR_FIELD(vartype);
+	COPY_SCALAR_FIELD(vartypmod);
+	COPY_SCALAR_FIELD(varcollid);
+	COPY_SCALAR_FIELD(varlevelsup);
+	COPY_SCALAR_FIELD(varnoold);
+	COPY_SCALAR_FIELD(varoattno);
+	COPY_LOCATION_FIELD(location);
+
+	return newnode;
+}
+
+/*
  * _copyConst
  */
 static Const *
@@ -1177,6 +1200,23 @@ _copyAggref(const Aggref *from)
 }
 
 /*
+ * _copyGroupingFunc
+ */
+static GroupingFunc *
+_copyGroupingFunc(const GroupingFunc *from)
+{
+	GroupingFunc	   *newnode = makeNode(GroupingFunc);
+
+	COPY_NODE_FIELD(args);
+	COPY_NODE_FIELD(refs);
+	COPY_NODE_FIELD(cols);
+	COPY_SCALAR_FIELD(agglevelsup);
+	COPY_LOCATION_FIELD(location);
+
+	return newnode;
+}
+
+/*
  * _copyWindowFunc
  */
 static WindowFunc *
@@ -2076,6 +2116,18 @@ _copySortGroupClause(const SortGroupClause *from)
 	return newnode;
 }
 
+static GroupingSet *
+_copyGroupingSet(const GroupingSet *from)
+{
+	GroupingSet		   *newnode = makeNode(GroupingSet);
+
+	COPY_SCALAR_FIELD(kind);
+	COPY_NODE_FIELD(content);
+	COPY_LOCATION_FIELD(location);
+
+	return newnode;
+}
+
 static WindowClause *
 _copyWindowClause(const WindowClause *from)
 {
@@ -2526,6 +2578,7 @@ _copyQuery(const Query *from)
 	COPY_NODE_FIELD(withCheckOptions);
 	COPY_NODE_FIELD(returningList);
 	COPY_NODE_FIELD(groupClause);
+	COPY_NODE_FIELD(groupingSets);
 	COPY_NODE_FIELD(havingQual);
 	COPY_NODE_FIELD(windowClause);
 	COPY_NODE_FIELD(distinctClause);
@@ -4142,6 +4195,9 @@ copyObject(const void *from)
 		case T_Var:
 			retval = _copyVar(from);
 			break;
+		case T_GroupedVar:
+			retval = _copyGroupedVar(from);
+			break;
 		case T_Const:
 			retval = _copyConst(from);
 			break;
@@ -4151,6 +4207,9 @@ copyObject(const void *from)
 		case T_Aggref:
 			retval = _copyAggref(from);
 			break;
+		case T_GroupingFunc:
+			retval = _copyGroupingFunc(from);
+			break;
 		case T_WindowFunc:
 			retval = _copyWindowFunc(from);
 			break;
@@ -4711,6 +4770,9 @@ copyObject(const void *from)
 		case T_SortGroupClause:
 			retval = _copySortGroupClause(from);
 			break;
+		case T_GroupingSet:
+			retval = _copyGroupingSet(from);
+			break;
 		case T_WindowClause:
 			retval = _copyWindowClause(from);
 			break;
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 6e8b308..1eb35d2 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -153,6 +153,22 @@ _equalVar(const Var *a, const Var *b)
 }
 
 static bool
+_equalGroupedVar(const GroupedVar *a, const GroupedVar *b)
+{
+	COMPARE_SCALAR_FIELD(varno);
+	COMPARE_SCALAR_FIELD(varattno);
+	COMPARE_SCALAR_FIELD(vartype);
+	COMPARE_SCALAR_FIELD(vartypmod);
+	COMPARE_SCALAR_FIELD(varcollid);
+	COMPARE_SCALAR_FIELD(varlevelsup);
+	COMPARE_SCALAR_FIELD(varnoold);
+	COMPARE_SCALAR_FIELD(varoattno);
+	COMPARE_LOCATION_FIELD(location);
+
+	return true;
+}
+
+static bool
 _equalConst(const Const *a, const Const *b)
 {
 	COMPARE_SCALAR_FIELD(consttype);
@@ -208,6 +224,21 @@ _equalAggref(const Aggref *a, const Aggref *b)
 }
 
 static bool
+_equalGroupingFunc(const GroupingFunc *a, const GroupingFunc *b)
+{
+	COMPARE_NODE_FIELD(args);
+
+	/*
+	 * We must not compare the refs or cols fields.
+	 */
+
+	COMPARE_SCALAR_FIELD(agglevelsup);
+	COMPARE_LOCATION_FIELD(location);
+
+	return true;
+}
+
+static bool
 _equalWindowFunc(const WindowFunc *a, const WindowFunc *b)
 {
 	COMPARE_SCALAR_FIELD(winfnoid);
@@ -865,6 +896,7 @@ _equalQuery(const Query *a, const Query *b)
 	COMPARE_NODE_FIELD(withCheckOptions);
 	COMPARE_NODE_FIELD(returningList);
 	COMPARE_NODE_FIELD(groupClause);
+	COMPARE_NODE_FIELD(groupingSets);
 	COMPARE_NODE_FIELD(havingQual);
 	COMPARE_NODE_FIELD(windowClause);
 	COMPARE_NODE_FIELD(distinctClause);
@@ -2388,6 +2420,16 @@ _equalSortGroupClause(const SortGroupClause *a, const SortGroupClause *b)
 }
 
 static bool
+_equalGroupingSet(const GroupingSet *a, const GroupingSet *b)
+{
+	COMPARE_SCALAR_FIELD(kind);
+	COMPARE_NODE_FIELD(content);
+	COMPARE_LOCATION_FIELD(location);
+
+	return true;
+}
+
+static bool
 _equalWindowClause(const WindowClause *a, const WindowClause *b)
 {
 	COMPARE_STRING_FIELD(name);
@@ -2582,6 +2624,9 @@ equal(const void *a, const void *b)
 		case T_Var:
 			retval = _equalVar(a, b);
 			break;
+		case T_GroupedVar:
+			retval = _equalGroupedVar(a, b);
+			break;
 		case T_Const:
 			retval = _equalConst(a, b);
 			break;
@@ -2591,6 +2636,9 @@ equal(const void *a, const void *b)
 		case T_Aggref:
 			retval = _equalAggref(a, b);
 			break;
+		case T_GroupingFunc:
+			retval = _equalGroupingFunc(a, b);
+			break;
 		case T_WindowFunc:
 			retval = _equalWindowFunc(a, b);
 			break;
@@ -3138,6 +3186,9 @@ equal(const void *a, const void *b)
 		case T_SortGroupClause:
 			retval = _equalSortGroupClause(a, b);
 			break;
+		case T_GroupingSet:
+			retval = _equalGroupingSet(a, b);
+			break;
 		case T_WindowClause:
 			retval = _equalWindowClause(a, b);
 			break;
diff --git a/src/backend/nodes/list.c b/src/backend/nodes/list.c
index 94cab47..a6737514 100644
--- a/src/backend/nodes/list.c
+++ b/src/backend/nodes/list.c
@@ -823,6 +823,32 @@ list_intersection(const List *list1, const List *list2)
 }
 
 /*
+ * As list_intersection but operates on lists of integers.
+ */
+List *
+list_intersection_int(const List *list1, const List *list2)
+{
+	List	   *result;
+	const ListCell *cell;
+
+	if (list1 == NIL || list2 == NIL)
+		return NIL;
+
+	Assert(IsIntegerList(list1));
+	Assert(IsIntegerList(list2));
+
+	result = NIL;
+	foreach(cell, list1)
+	{
+		if (list_member_int(list2, lfirst_int(cell)))
+			result = lappend_int(result, lfirst_int(cell));
+	}
+
+	check_list_invariants(result);
+	return result;
+}
+
+/*
  * Return a list that contains all the cells in list1 that are not in
  * list2. The returned list is freshly allocated via palloc(), but the
  * cells themselves point to the same objects as the cells of the
diff --git a/src/backend/nodes/makefuncs.c b/src/backend/nodes/makefuncs.c
index 6fdf44d..a9b58eb 100644
--- a/src/backend/nodes/makefuncs.c
+++ b/src/backend/nodes/makefuncs.c
@@ -554,3 +554,18 @@ makeFuncCall(List *name, List *args, int location)
 	n->location = location;
 	return n;
 }
+
+/*
+ * makeGroupingSet
+ *	  create a GroupingSet node with the given kind, content and location
+ */
+GroupingSet *
+makeGroupingSet(GroupingSetKind kind, List *content, int location)
+{
+	GroupingSet	   *n = makeNode(GroupingSet);
+
+	n->kind = kind;
+	n->content = content;
+	n->location = location;
+	return n;
+}
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index 21dfda7..0084eb0 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -45,6 +45,9 @@ exprType(const Node *expr)
 		case T_Var:
 			type = ((const Var *) expr)->vartype;
 			break;
+		case T_GroupedVar:
+			type = ((const GroupedVar *) expr)->vartype;
+			break;
 		case T_Const:
 			type = ((const Const *) expr)->consttype;
 			break;
@@ -54,6 +57,9 @@ exprType(const Node *expr)
 		case T_Aggref:
 			type = ((const Aggref *) expr)->aggtype;
 			break;
+		case T_GroupingFunc:
+			type = INT4OID;
+			break;
 		case T_WindowFunc:
 			type = ((const WindowFunc *) expr)->wintype;
 			break;
@@ -261,6 +267,8 @@ exprTypmod(const Node *expr)
 	{
 		case T_Var:
 			return ((const Var *) expr)->vartypmod;
+		case T_GroupedVar:
+			return ((const GroupedVar *) expr)->vartypmod;
 		case T_Const:
 			return ((const Const *) expr)->consttypmod;
 		case T_Param:
@@ -734,6 +742,9 @@ exprCollation(const Node *expr)
 		case T_Var:
 			coll = ((const Var *) expr)->varcollid;
 			break;
+		case T_GroupedVar:
+			coll = ((const GroupedVar *) expr)->varcollid;
+			break;
 		case T_Const:
 			coll = ((const Const *) expr)->constcollid;
 			break;
@@ -743,6 +754,9 @@ exprCollation(const Node *expr)
 		case T_Aggref:
 			coll = ((const Aggref *) expr)->aggcollid;
 			break;
+		case T_GroupingFunc:
+			coll = InvalidOid;
+			break;
 		case T_WindowFunc:
 			coll = ((const WindowFunc *) expr)->wincollid;
 			break;
@@ -967,6 +981,9 @@ exprSetCollation(Node *expr, Oid collation)
 		case T_Var:
 			((Var *) expr)->varcollid = collation;
 			break;
+		case T_GroupedVar:
+			((GroupedVar *) expr)->varcollid = collation;
+			break;
 		case T_Const:
 			((Const *) expr)->constcollid = collation;
 			break;
@@ -976,6 +993,9 @@ exprSetCollation(Node *expr, Oid collation)
 		case T_Aggref:
 			((Aggref *) expr)->aggcollid = collation;
 			break;
+		case T_GroupingFunc:
+			Assert(!OidIsValid(collation));
+			break;
 		case T_WindowFunc:
 			((WindowFunc *) expr)->wincollid = collation;
 			break;
@@ -1182,6 +1202,9 @@ exprLocation(const Node *expr)
 		case T_Var:
 			loc = ((const Var *) expr)->location;
 			break;
+		case T_GroupedVar:
+			loc = ((const GroupedVar *) expr)->location;
+			break;
 		case T_Const:
 			loc = ((const Const *) expr)->location;
 			break;
@@ -1192,6 +1215,9 @@ exprLocation(const Node *expr)
 			/* function name should always be the first thing */
 			loc = ((const Aggref *) expr)->location;
 			break;
+		case T_GroupingFunc:
+			loc = ((const GroupingFunc *) expr)->location;
+			break;
 		case T_WindowFunc:
 			/* function name should always be the first thing */
 			loc = ((const WindowFunc *) expr)->location;
@@ -1471,6 +1497,9 @@ exprLocation(const Node *expr)
 			/* XMLSERIALIZE keyword should always be the first thing */
 			loc = ((const XmlSerialize *) expr)->location;
 			break;
+		case T_GroupingSet:
+			loc = ((const GroupingSet *) expr)->location;
+			break;
 		case T_WithClause:
 			loc = ((const WithClause *) expr)->location;
 			break;
@@ -1622,6 +1651,7 @@ expression_tree_walker(Node *node,
 	switch (nodeTag(node))
 	{
 		case T_Var:
+		case T_GroupedVar:
 		case T_Const:
 		case T_Param:
 		case T_CoerceToDomainValue:
@@ -1655,6 +1685,15 @@ expression_tree_walker(Node *node,
 					return true;
 			}
 			break;
+		case T_GroupingFunc:
+			{
+				GroupingFunc *grouping = (GroupingFunc *) node;
+
+				if (expression_tree_walker((Node *) grouping->args,
+										   walker, context))
+					return true;
+			}
+			break;
 		case T_WindowFunc:
 			{
 				WindowFunc *expr = (WindowFunc *) node;
@@ -2144,6 +2183,15 @@ expression_tree_mutator(Node *node,
 				return (Node *) newnode;
 			}
 			break;
+		case T_GroupedVar:
+			{
+				GroupedVar *groupedvar = (GroupedVar *) node;
+				GroupedVar *newnode;
+
+				FLATCOPY(newnode, groupedvar, GroupedVar);
+				return (Node *) newnode;
+			}
+			break;
 		case T_Const:
 			{
 				Const	   *oldnode = (Const *) node;
@@ -2185,6 +2233,29 @@ expression_tree_mutator(Node *node,
 				return (Node *) newnode;
 			}
 			break;
+		case T_GroupingFunc:
+			{
+				GroupingFunc   *grouping = (GroupingFunc *) node;
+				GroupingFunc   *newnode;
+
+				FLATCOPY(newnode, grouping, GroupingFunc);
+				MUTATE(newnode->args, grouping->args, List *);
+
+				/*
+				 * We assume here that mutating the arguments does not change
+				 * the semantics, i.e. that the arguments are not mutated in a
+				 * way that makes them semantically different from their
+				 * previously matching expressions in the GROUP BY clause.
+				 *
+				 * If a mutator somehow wanted to do this, it would have to
+				 * handle the refs and cols lists itself as appropriate.
+				 */
+				newnode->refs = list_copy(grouping->refs);
+				newnode->cols = list_copy(grouping->cols);
+
+				return (Node *) newnode;
+			}
+			break;
 		case T_WindowFunc:
 			{
 				WindowFunc *wfunc = (WindowFunc *) node;
@@ -2870,6 +2941,8 @@ raw_expression_tree_walker(Node *node,
 			break;
 		case T_RangeVar:
 			return walker(((RangeVar *) node)->alias, context);
+		case T_GroupingFunc:
+			return walker(((GroupingFunc *) node)->args, context);
 		case T_SubLink:
 			{
 				SubLink    *sublink = (SubLink *) node;
@@ -3193,6 +3266,8 @@ raw_expression_tree_walker(Node *node,
 				/* for now, constraints are ignored */
 			}
 			break;
+		case T_GroupingSet:
+			return walker(((GroupingSet *) node)->content, context);
 		case T_LockingClause:
 			return walker(((LockingClause *) node)->lockedRels, context);
 		case T_XmlSerialize:
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index dd1278b..c94c952 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -646,6 +646,7 @@ _outAgg(StringInfo str, const Agg *node)
 	_outPlanInfo(str, (const Plan *) node);
 
 	WRITE_ENUM_FIELD(aggstrategy, AggStrategy);
+	WRITE_INT_FIELD(chain_depth);
 	WRITE_INT_FIELD(numCols);
 
 	appendStringInfoString(str, " :grpColIdx");
@@ -657,6 +658,8 @@ _outAgg(StringInfo str, const Agg *node)
 		appendStringInfo(str, " %u", node->grpOperators[i]);
 
 	WRITE_LONG_FIELD(numGroups);
+
+	WRITE_NODE_FIELD(groupingSets);
 }
 
 static void
@@ -926,6 +929,22 @@ _outVar(StringInfo str, const Var *node)
 }
 
 static void
+_outGroupedVar(StringInfo str, const GroupedVar *node)
+{
+	WRITE_NODE_TYPE("GROUPEDVAR");
+
+	WRITE_UINT_FIELD(varno);
+	WRITE_INT_FIELD(varattno);
+	WRITE_OID_FIELD(vartype);
+	WRITE_INT_FIELD(vartypmod);
+	WRITE_OID_FIELD(varcollid);
+	WRITE_UINT_FIELD(varlevelsup);
+	WRITE_UINT_FIELD(varnoold);
+	WRITE_INT_FIELD(varoattno);
+	WRITE_LOCATION_FIELD(location);
+}
+
+static void
 _outConst(StringInfo str, const Const *node)
 {
 	WRITE_NODE_TYPE("CONST");
@@ -980,6 +999,18 @@ _outAggref(StringInfo str, const Aggref *node)
 }
 
 static void
+_outGroupingFunc(StringInfo str, const GroupingFunc *node)
+{
+	WRITE_NODE_TYPE("GROUPINGFUNC");
+
+	WRITE_NODE_FIELD(args);
+	WRITE_NODE_FIELD(refs);
+	WRITE_NODE_FIELD(cols);
+	WRITE_INT_FIELD(agglevelsup);
+	WRITE_LOCATION_FIELD(location);
+}
+
+static void
 _outWindowFunc(StringInfo str, const WindowFunc *node)
 {
 	WRITE_NODE_TYPE("WINDOWFUNC");
@@ -2303,6 +2334,7 @@ _outQuery(StringInfo str, const Query *node)
 	WRITE_NODE_FIELD(withCheckOptions);
 	WRITE_NODE_FIELD(returningList);
 	WRITE_NODE_FIELD(groupClause);
+	WRITE_NODE_FIELD(groupingSets);
 	WRITE_NODE_FIELD(havingQual);
 	WRITE_NODE_FIELD(windowClause);
 	WRITE_NODE_FIELD(distinctClause);
@@ -2337,6 +2369,16 @@ _outSortGroupClause(StringInfo str, const SortGroupClause *node)
 }
 
 static void
+_outGroupingSet(StringInfo str, const GroupingSet *node)
+{
+	WRITE_NODE_TYPE("GROUPINGSET");
+
+	WRITE_ENUM_FIELD(kind, GroupingSetKind);
+	WRITE_NODE_FIELD(content);
+	WRITE_LOCATION_FIELD(location);
+}
+
+static void
 _outWindowClause(StringInfo str, const WindowClause *node)
 {
 	WRITE_NODE_TYPE("WINDOWCLAUSE");
@@ -2950,6 +2992,9 @@ _outNode(StringInfo str, const void *obj)
 			case T_Var:
 				_outVar(str, obj);
 				break;
+			case T_GroupedVar:
+				_outGroupedVar(str, obj);
+				break;
 			case T_Const:
 				_outConst(str, obj);
 				break;
@@ -2959,6 +3004,9 @@ _outNode(StringInfo str, const void *obj)
 			case T_Aggref:
 				_outAggref(str, obj);
 				break;
+			case T_GroupingFunc:
+				_outGroupingFunc(str, obj);
+				break;
 			case T_WindowFunc:
 				_outWindowFunc(str, obj);
 				break;
@@ -3216,6 +3264,9 @@ _outNode(StringInfo str, const void *obj)
 			case T_SortGroupClause:
 				_outSortGroupClause(str, obj);
 				break;
+			case T_GroupingSet:
+				_outGroupingSet(str, obj);
+				break;
 			case T_WindowClause:
 				_outWindowClause(str, obj);
 				break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index ae24d05..4b9f29d 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -216,6 +216,7 @@ _readQuery(void)
 	READ_NODE_FIELD(withCheckOptions);
 	READ_NODE_FIELD(returningList);
 	READ_NODE_FIELD(groupClause);
+	READ_NODE_FIELD(groupingSets);
 	READ_NODE_FIELD(havingQual);
 	READ_NODE_FIELD(windowClause);
 	READ_NODE_FIELD(distinctClause);
@@ -291,6 +292,21 @@ _readSortGroupClause(void)
 }
 
 /*
+ * _readGroupingSet
+ */
+static GroupingSet *
+_readGroupingSet(void)
+{
+	READ_LOCALS(GroupingSet);
+
+	READ_ENUM_FIELD(kind, GroupingSetKind);
+	READ_NODE_FIELD(content);
+	READ_LOCATION_FIELD(location);
+
+	READ_DONE();
+}
+
+/*
  * _readWindowClause
  */
 static WindowClause *
@@ -441,6 +457,27 @@ _readVar(void)
 }
 
 /*
+ * _readGroupedVar
+ */
+static GroupedVar *
+_readGroupedVar(void)
+{
+	READ_LOCALS(GroupedVar);
+
+	READ_UINT_FIELD(varno);
+	READ_INT_FIELD(varattno);
+	READ_OID_FIELD(vartype);
+	READ_INT_FIELD(vartypmod);
+	READ_OID_FIELD(varcollid);
+	READ_UINT_FIELD(varlevelsup);
+	READ_UINT_FIELD(varnoold);
+	READ_INT_FIELD(varoattno);
+	READ_LOCATION_FIELD(location);
+
+	READ_DONE();
+}
+
+/*
  * _readConst
  */
 static Const *
@@ -510,6 +547,23 @@ _readAggref(void)
 }
 
 /*
+ * _readGroupingFunc
+ */
+static GroupingFunc *
+_readGroupingFunc(void)
+{
+	READ_LOCALS(GroupingFunc);
+
+	READ_NODE_FIELD(args);
+	READ_NODE_FIELD(refs);
+	READ_NODE_FIELD(cols);
+	READ_INT_FIELD(agglevelsup);
+	READ_LOCATION_FIELD(location);
+
+	READ_DONE();
+}
+
+/*
  * _readWindowFunc
  */
 static WindowFunc *
@@ -1305,6 +1359,8 @@ parseNodeString(void)
 		return_value = _readWithCheckOption();
 	else if (MATCH("SORTGROUPCLAUSE", 15))
 		return_value = _readSortGroupClause();
+	else if (MATCH("GROUPINGSET", 11))
+		return_value = _readGroupingSet();
 	else if (MATCH("WINDOWCLAUSE", 12))
 		return_value = _readWindowClause();
 	else if (MATCH("ROWMARKCLAUSE", 13))
@@ -1321,12 +1377,16 @@ parseNodeString(void)
 		return_value = _readIntoClause();
 	else if (MATCH("VAR", 3))
 		return_value = _readVar();
+	else if (MATCH("GROUPEDVAR", 10))
+		return_value = _readGroupedVar();
 	else if (MATCH("CONST", 5))
 		return_value = _readConst();
 	else if (MATCH("PARAM", 5))
 		return_value = _readParam();
 	else if (MATCH("AGGREF", 6))
 		return_value = _readAggref();
+	else if (MATCH("GROUPINGFUNC", 12))
+		return_value = _readGroupingFunc();
 	else if (MATCH("WINDOWFUNC", 10))
 		return_value = _readWindowFunc();
 	else if (MATCH("ARRAYREF", 8))
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index 58d78e6..2c05f71 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -1241,6 +1241,7 @@ set_subquery_pathlist(PlannerInfo *root, RelOptInfo *rel,
 	 */
 	if (parse->hasAggs ||
 		parse->groupClause ||
+		parse->groupingSets ||
 		parse->havingQual ||
 		parse->distinctClause ||
 		parse->sortClause ||
@@ -2099,7 +2100,7 @@ subquery_push_qual(Query *subquery, RangeTblEntry *rte, Index rti, Node *qual)
 		 * subquery uses grouping or aggregation, put it in HAVING (since the
 		 * qual really refers to the group-result rows).
 		 */
-		if (subquery->hasAggs || subquery->groupClause || subquery->havingQual)
+		if (subquery->hasAggs || subquery->groupClause || subquery->groupingSets || subquery->havingQual)
 			subquery->havingQual = make_and_qual(subquery->havingQual, qual);
 		else
 			subquery->jointree->quals =
diff --git a/src/backend/optimizer/plan/analyzejoins.c b/src/backend/optimizer/plan/analyzejoins.c
index 11d3933..fa1de6a 100644
--- a/src/backend/optimizer/plan/analyzejoins.c
+++ b/src/backend/optimizer/plan/analyzejoins.c
@@ -581,6 +581,7 @@ query_supports_distinctness(Query *query)
 {
 	if (query->distinctClause != NIL ||
 		query->groupClause != NIL ||
+		query->groupingSets != NIL ||
 		query->hasAggs ||
 		query->havingQual ||
 		query->setOperations)
@@ -649,10 +650,10 @@ query_is_distinct_for(Query *query, List *colnos, List *opids)
 	}
 
 	/*
-	 * Similarly, GROUP BY guarantees uniqueness if all the grouped columns
-	 * appear in colnos and operator semantics match.
+	 * Similarly, GROUP BY without GROUPING SETS guarantees uniqueness if all
+	 * the grouped columns appear in colnos and operator semantics match.
 	 */
-	if (query->groupClause)
+	if (query->groupClause && !query->groupingSets)
 	{
 		foreach(l, query->groupClause)
 		{
@@ -668,6 +669,27 @@ query_is_distinct_for(Query *query, List *colnos, List *opids)
 		if (l == NULL)			/* had matches for all? */
 			return true;
 	}
+	else if (query->groupingSets)
+	{
+		/*
+		 * If we have grouping sets with expressions, we probably
+		 * don't have uniqueness and analysis would be hard. Punt.
+		 */
+		if (query->groupClause)
+			return false;
+
+		/*
+		 * If we have no groupClause (therefore no grouping expressions),
+		 * we might have one or many empty grouping sets. If there's just
+		 * one, then we're returning only one row and are certainly unique.
+		 * But otherwise, we know we're certainly not unique.
+		 */
+		if (list_length(query->groupingSets) == 1
+			&& ((GroupingSet *)linitial(query->groupingSets))->kind == GROUPING_SET_EMPTY)
+			return true;
+		else
+			return false;
+	}
 	else
 	{
 		/*
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index 655be81..e5945f9 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -1029,6 +1029,8 @@ create_unique_plan(PlannerInfo *root, UniquePath *best_path)
 								 numGroupCols,
 								 groupColIdx,
 								 groupOperators,
+								 NIL,
+								 NULL,
 								 numGroups,
 								 subplan);
 	}
@@ -4357,6 +4359,7 @@ Agg *
 make_agg(PlannerInfo *root, List *tlist, List *qual,
 		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
 		 int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators,
+		 List *groupingSets, int *chain_depth_p,
 		 long numGroups,
 		 Plan *lefttree)
 {
@@ -4366,6 +4369,7 @@ make_agg(PlannerInfo *root, List *tlist, List *qual,
 	QualCost	qual_cost;
 
 	node->aggstrategy = aggstrategy;
+	node->chain_depth = chain_depth_p ? *chain_depth_p : 0;
 	node->numCols = numGroupCols;
 	node->grpColIdx = grpColIdx;
 	node->grpOperators = grpOperators;
@@ -4386,10 +4390,12 @@ make_agg(PlannerInfo *root, List *tlist, List *qual,
 	 * group otherwise.
 	 */
 	if (aggstrategy == AGG_PLAIN)
-		plan->plan_rows = 1;
+		plan->plan_rows = groupingSets ? list_length(groupingSets) : 1;
 	else
 		plan->plan_rows = numGroups;
 
+	node->groupingSets = groupingSets;
+
 	/*
 	 * We also need to account for the cost of evaluation of the qual (ie, the
 	 * HAVING clause) and the tlist.  Note that cost_qual_eval doesn't charge
@@ -4408,8 +4414,21 @@ make_agg(PlannerInfo *root, List *tlist, List *qual,
 	}
 	add_tlist_costs_to_plan(root, plan, tlist);
 
-	plan->qual = qual;
-	plan->targetlist = tlist;
+	if (aggstrategy == AGG_CHAINED)
+	{
+		Assert(!chain_depth_p);
+		plan->plan_rows = lefttree->plan_rows;
+		plan->plan_width = lefttree->plan_width;
+
+		/* supplied tlist is ignored, this is dummy */
+		plan->targetlist = lefttree->targetlist;
+		plan->qual = NULL;
+	}
+	else
+	{
+		plan->qual = qual;
+		plan->targetlist = tlist;
+	}
 	plan->lefttree = lefttree;
 	plan->righttree = NULL;
 
diff --git a/src/backend/optimizer/plan/planagg.c b/src/backend/optimizer/plan/planagg.c
index b90c2ef..7d1ea47 100644
--- a/src/backend/optimizer/plan/planagg.c
+++ b/src/backend/optimizer/plan/planagg.c
@@ -96,7 +96,7 @@ preprocess_minmax_aggregates(PlannerInfo *root, List *tlist)
 	 * performs assorted processing related to these features between calling
 	 * preprocess_minmax_aggregates and optimize_minmax_aggregates.)
 	 */
-	if (parse->groupClause || parse->hasWindowFuncs)
+	if (parse->groupClause || list_length(parse->groupingSets) > 1 || parse->hasWindowFuncs)
 		return;
 
 	/*
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 9cbbcfb..2e69fcb 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -16,12 +16,14 @@
 #include "postgres.h"
 
 #include <limits.h>
+#include <math.h>
 
 #include "access/htup_details.h"
 #include "executor/executor.h"
 #include "executor/nodeAgg.h"
 #include "miscadmin.h"
 #include "nodes/makefuncs.h"
+#include "nodes/nodeFuncs.h"
 #ifdef OPTIMIZER_DEBUG
 #include "nodes/print.h"
 #endif
@@ -37,6 +39,7 @@
 #include "optimizer/tlist.h"
 #include "parser/analyze.h"
 #include "parser/parsetree.h"
+#include "parser/parse_agg.h"
 #include "rewrite/rewriteManip.h"
 #include "utils/rel.h"
 #include "utils/selfuncs.h"
@@ -65,6 +68,7 @@ typedef struct
 {
 	List	   *tlist;			/* preprocessed query targetlist */
 	List	   *activeWindows;	/* active windows, if any */
+	List	   *groupClause;	/* overrides parse->groupClause */
 } standard_qp_extra;
 
 /* Local functions */
@@ -77,7 +81,9 @@ static double preprocess_limit(PlannerInfo *root,
 				 double tuple_fraction,
 				 int64 *offset_est, int64 *count_est);
 static bool limit_needed(Query *parse);
-static void preprocess_groupclause(PlannerInfo *root);
+static List *preprocess_groupclause(PlannerInfo *root, List *force);
+static List *extract_rollup_sets(List *groupingSets);
+static List *reorder_grouping_sets(List *groupingSets, List *sortclause);
 static void standard_qp_callback(PlannerInfo *root, void *extra);
 static bool choose_hashed_grouping(PlannerInfo *root,
 					   double tuple_fraction, double limit_tuples,
@@ -317,6 +323,8 @@ subquery_planner(PlannerGlobal *glob, Query *parse,
 	root->append_rel_list = NIL;
 	root->rowMarks = NIL;
 	root->hasInheritedTarget = false;
+	root->groupColIdx = NULL;
+	root->grouping_map = NULL;
 
 	root->hasRecursion = hasRecursion;
 	if (hasRecursion)
@@ -533,7 +541,8 @@ subquery_planner(PlannerGlobal *glob, Query *parse,
 
 		if (contain_agg_clause(havingclause) ||
 			contain_volatile_functions(havingclause) ||
-			contain_subplans(havingclause))
+			contain_subplans(havingclause) ||
+			parse->groupingSets)
 		{
 			/* keep it in HAVING */
 			newHaving = lappend(newHaving, havingclause);
@@ -1176,11 +1185,6 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		List	   *sub_tlist;
 		AttrNumber *groupColIdx = NULL;
 		bool		need_tlist_eval = true;
-		standard_qp_extra qp_extra;
-		RelOptInfo *final_rel;
-		Path	   *cheapest_path;
-		Path	   *sorted_path;
-		Path	   *best_path;
 		long		numGroups = 0;
 		AggClauseCosts agg_costs;
 		int			numGroupCols;
@@ -1189,15 +1193,90 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		bool		use_hashed_grouping = false;
 		WindowFuncLists *wflists = NULL;
 		List	   *activeWindows = NIL;
+		int			maxref = 0;
+		List	   *refmaps = NIL;
+		List	   *rollup_lists = NIL;
+		List	   *rollup_groupclauses = NIL;
+		standard_qp_extra qp_extra;
+		RelOptInfo *final_rel;
+		Path	   *cheapest_path;
+		Path	   *sorted_path;
+		Path	   *best_path;
 
 		MemSet(&agg_costs, 0, sizeof(AggClauseCosts));
 
 		/* A recursive query should always have setOperations */
 		Assert(!root->hasRecursion);
 
-		/* Preprocess GROUP BY clause, if any */
+		/* Preprocess grouping sets, if any */
+		if (parse->groupingSets)
+			parse->groupingSets = expand_grouping_sets(parse->groupingSets, -1);
+
 		if (parse->groupClause)
-			preprocess_groupclause(root);
+		{
+			ListCell   *lc;
+
+			foreach(lc, parse->groupClause)
+			{
+				SortGroupClause *gc = lfirst(lc);
+				if (gc->tleSortGroupRef > maxref)
+					maxref = gc->tleSortGroupRef;
+			}
+		}
+
+		if (parse->groupingSets)
+		{
+			ListCell   *lc;
+			ListCell   *lc2;
+			ListCell   *lc_set;
+			List	   *sets = extract_rollup_sets(parse->groupingSets);
+
+			foreach(lc_set, sets)
+			{
+				List   *current_sets = reorder_grouping_sets(lfirst(lc_set),
+													(list_length(sets) == 1
+													 ? parse->sortClause
+													 : NIL));
+				List   *groupclause = preprocess_groupclause(root, linitial(current_sets));
+				int		ref = 0;
+				int	   *refmap;
+
+				/*
+				 * Now that we've pinned down an order for the groupClause for this
+				 * list of grouping sets, remap the entries in the grouping sets
+				 * from sortgrouprefs to plain indices into the groupClause.
+				 */
+
+				refmap = palloc0(sizeof(int) * (maxref + 1));
+
+				foreach(lc, groupclause)
+				{
+					SortGroupClause *gc = lfirst(lc);
+					refmap[gc->tleSortGroupRef] = ++ref;
+				}
+
+				foreach(lc, current_sets)
+				{
+					foreach(lc2, (List *) lfirst(lc))
+					{
+						Assert(refmap[lfirst_int(lc2)] > 0);
+						lfirst_int(lc2) = refmap[lfirst_int(lc2)] - 1;
+					}
+				}
+
+				rollup_lists = lcons(current_sets, rollup_lists);
+				rollup_groupclauses = lcons(groupclause, rollup_groupclauses);
+				refmaps = lcons(refmap, refmaps);
+			}
+		}
+		else
+		{
+			/* Preprocess GROUP BY clause, if any */
+			if (parse->groupClause)
+				parse->groupClause = preprocess_groupclause(root, NIL);
+			rollup_groupclauses = list_make1(parse->groupClause);
+		}
+
 		numGroupCols = list_length(parse->groupClause);
 
 		/* Preprocess targetlist */
@@ -1270,6 +1349,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		 * grouping/aggregation operations.
 		 */
 		if (parse->groupClause ||
+			parse->groupingSets ||
 			parse->distinctClause ||
 			parse->hasAggs ||
 			parse->hasWindowFuncs ||
@@ -1281,6 +1361,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		/* Set up data needed by standard_qp_callback */
 		qp_extra.tlist = tlist;
 		qp_extra.activeWindows = activeWindows;
+		qp_extra.groupClause = linitial(rollup_groupclauses);
 
 		/*
 		 * Generate the best unsorted and presorted paths for this Query (but
@@ -1307,15 +1388,46 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		 * to describe the fraction of the underlying un-aggregated tuples
 		 * that will be fetched.
 		 */
+
 		dNumGroups = 1;			/* in case not grouping */
 
 		if (parse->groupClause)
 		{
 			List	   *groupExprs;
 
-			groupExprs = get_sortgrouplist_exprs(parse->groupClause,
-												 parse->targetList);
-			dNumGroups = estimate_num_groups(root, groupExprs, path_rows);
+			if (parse->groupingSets)
+			{
+				ListCell   *lc,
+						   *lc2;
+
+				dNumGroups = 0;
+
+				forboth(lc, rollup_groupclauses, lc2, rollup_lists)
+				{
+					ListCell   *lc3;
+
+					groupExprs = get_sortgrouplist_exprs(lfirst(lc),
+														 parse->targetList);
+
+					foreach(lc3, lfirst(lc2))
+					{
+						List   *gset = lfirst(lc3);
+
+						dNumGroups += estimate_num_groups(root,
+														  groupExprs,
+														  path_rows,
+														  &gset);
+					}
+				}
+			}
+			else
+			{
+				groupExprs = get_sortgrouplist_exprs(parse->groupClause,
+													 parse->targetList);
+
+				dNumGroups = estimate_num_groups(root, groupExprs, path_rows,
+												 NULL);
+			}
 
 			/*
 			 * In GROUP BY mode, an absolute LIMIT is relative to the number
@@ -1326,6 +1438,9 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 			if (tuple_fraction >= 1.0)
 				tuple_fraction /= dNumGroups;
 
+			if (list_length(rollup_lists) > 1)
+				tuple_fraction = 0.0;
+
 			/*
 			 * If both GROUP BY and ORDER BY are specified, we will need two
 			 * levels of sort --- and, therefore, certainly need to read all
@@ -1341,14 +1456,17 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 									   root->group_pathkeys))
 				tuple_fraction = 0.0;
 		}
-		else if (parse->hasAggs || root->hasHavingQual)
+		else if (parse->hasAggs || root->hasHavingQual || parse->groupingSets)
 		{
 			/*
 			 * Ungrouped aggregate will certainly want to read all the tuples,
-			 * and it will deliver a single result row (so leave dNumGroups
-			 * set to 1).
+			 * and it will deliver a single result row per grouping set (or 1
+			 * if no grouping sets were explicitly given, in which case leave
+			 * dNumGroups as-is).
 			 */
 			tuple_fraction = 0.0;
+			if (parse->groupingSets)
+				dNumGroups = list_length(parse->groupingSets);
 		}
 		else if (parse->distinctClause)
 		{
@@ -1363,7 +1481,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 
 			distinctExprs = get_sortgrouplist_exprs(parse->distinctClause,
 													parse->targetList);
-			dNumGroups = estimate_num_groups(root, distinctExprs, path_rows);
+			dNumGroups = estimate_num_groups(root, distinctExprs, path_rows, NULL);
 
 			/*
 			 * Adjust tuple_fraction the same way as for GROUP BY, too.
@@ -1446,13 +1564,24 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		{
 			/*
 			 * If grouping, decide whether to use sorted or hashed grouping.
+			 * If grouping sets are present, we can currently do only sorted
+			 * grouping.
 			 */
-			use_hashed_grouping =
-				choose_hashed_grouping(root,
-									   tuple_fraction, limit_tuples,
-									   path_rows, path_width,
-									   cheapest_path, sorted_path,
-									   dNumGroups, &agg_costs);
+
+			if (parse->groupingSets)
+			{
+				use_hashed_grouping = false;
+			}
+			else
+			{
+				use_hashed_grouping =
+					choose_hashed_grouping(root,
+										   tuple_fraction, limit_tuples,
+										   path_rows, path_width,
+										   cheapest_path, sorted_path,
+										   dNumGroups, &agg_costs);
+			}
+
 			/* Also convert # groups to long int --- but 'ware overflow! */
 			numGroups = (long) Min(dNumGroups, (double) LONG_MAX);
 		}
@@ -1518,7 +1647,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 
 			/* Detect if we'll need an explicit sort for grouping */
 			if (parse->groupClause && !use_hashed_grouping &&
-			  !pathkeys_contained_in(root->group_pathkeys, current_pathkeys))
+				!pathkeys_contained_in(root->group_pathkeys, current_pathkeys))
 			{
 				need_sort_for_grouping = true;
 
@@ -1593,52 +1722,118 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 												&agg_costs,
 												numGroupCols,
 												groupColIdx,
-									extract_grouping_ops(parse->groupClause),
+												extract_grouping_ops(parse->groupClause),
+												NIL,
+												NULL,
 												numGroups,
 												result_plan);
 				/* Hashed aggregation produces randomly-ordered results */
 				current_pathkeys = NIL;
 			}
-			else if (parse->hasAggs)
+			else if (parse->hasAggs || (parse->groupingSets && parse->groupClause))
 			{
-				/* Plain aggregate plan --- sort if needed */
-				AggStrategy aggstrategy;
+				int			chain_depth = 0;
 
-				if (parse->groupClause)
+				/*
+				 * If we need multiple grouping nodes, start stacking them up;
+				 * all except the last are chained.
+				 */
+
+				do
 				{
-					if (need_sort_for_grouping)
+					List	   *groupClause = linitial(rollup_groupclauses);
+					List	   *gsets = rollup_lists ? linitial(rollup_lists) : NIL;
+					int		   *refmap = refmaps ? linitial(refmaps) : NULL;
+					AttrNumber *new_grpColIdx = groupColIdx;
+					ListCell   *lc;
+					int			i;
+					AggStrategy aggstrategy = AGG_CHAINED;
+
+					if (groupClause)
+					{
+						if (gsets)
+						{
+							Assert(refmap);
+
+							/*
+							 * We need to remap groupColIdx, which has the column
+							 * indices for every clause in parse->groupClause,
+							 * indexed by list position, to a local version that
+							 * lists only the clauses included in this node's
+							 * groupClause, by position in that list.  The refmap
+							 * for this node (indexed by sortgroupref) contains 0
+							 * for clauses not present in this node's groupClause.
+							 */
+
+							new_grpColIdx = palloc0(sizeof(AttrNumber) * list_length(linitial(gsets)));
+
+							i = 0;
+							foreach(lc, parse->groupClause)
+							{
+								int j = refmap[((SortGroupClause *)lfirst(lc))->tleSortGroupRef];
+								if (j > 0)
+									new_grpColIdx[j - 1] = groupColIdx[i];
+								++i;
+							}
+						}
+
+						if (need_sort_for_grouping)
+						{
+							result_plan = (Plan *)
+								make_sort_from_groupcols(root,
+														 groupClause,
+														 new_grpColIdx,
+														 result_plan);
+						}
+						else
+							need_sort_for_grouping = true;
+
+						if (list_length(rollup_groupclauses) == 1)
+						{
+							aggstrategy = AGG_SORTED;
+
+							/*
+							 * If there aren't any other chained aggregates, then
+							 * we didn't disturb the originally required input
+							 * sort order.
+							 */
+							if (chain_depth == 0)
+								current_pathkeys = root->group_pathkeys;
+						}
+						else
+							current_pathkeys = NIL;
+					}
+					else
 					{
-						result_plan = (Plan *)
-							make_sort_from_groupcols(root,
-													 parse->groupClause,
-													 groupColIdx,
-													 result_plan);
-						current_pathkeys = root->group_pathkeys;
+						aggstrategy = AGG_PLAIN;
+						current_pathkeys = NIL;
 					}
-					aggstrategy = AGG_SORTED;
 
-					/*
-					 * The AGG node will not change the sort ordering of its
-					 * groups, so current_pathkeys describes the result too.
-					 */
-				}
-				else
-				{
-					aggstrategy = AGG_PLAIN;
-					/* Result will be only one row anyway; no sort order */
-					current_pathkeys = NIL;
-				}
+					result_plan = (Plan *) make_agg(root,
+													tlist,
+													(List *) parse->havingQual,
+													aggstrategy,
+													&agg_costs,
+													gsets ? list_length(linitial(gsets)) : numGroupCols,
+													new_grpColIdx,
+													extract_grouping_ops(groupClause),
+													gsets,
+													(aggstrategy != AGG_CHAINED) ? &chain_depth : NULL,
+													numGroups,
+													result_plan);
+
+					chain_depth += 1;
 
-				result_plan = (Plan *) make_agg(root,
-												tlist,
-												(List *) parse->havingQual,
-												aggstrategy,
-												&agg_costs,
-												numGroupCols,
-												groupColIdx,
-									extract_grouping_ops(parse->groupClause),
-												numGroups,
-												result_plan);
+					if (refmap)
+						pfree(refmap);
+					if (rollup_lists)
+						rollup_lists = list_delete_first(rollup_lists);
+					if (refmaps)
+						refmaps = list_delete_first(refmaps);
+
+					rollup_groupclauses = list_delete_first(rollup_groupclauses);
+				}
+				while (rollup_groupclauses);
 			}
 			else if (parse->groupClause)
 			{
@@ -1669,27 +1864,66 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 												  result_plan);
 				/* The Group node won't change sort ordering */
 			}
-			else if (root->hasHavingQual)
+			else if (root->hasHavingQual || parse->groupingSets)
 			{
+				int		nrows = list_length(parse->groupingSets);
+
 				/*
-				 * No aggregates, and no GROUP BY, but we have a HAVING qual.
+				 * No aggregates, and no GROUP BY, but we have a HAVING qual or
+				 * grouping sets (which by elimination of cases above must
+				 * consist solely of empty grouping sets, since otherwise
+				 * groupClause will be non-empty).
+				 *
 				 * This is a degenerate case in which we are supposed to emit
-				 * either 0 or 1 row depending on whether HAVING succeeds.
-				 * Furthermore, there cannot be any variables in either HAVING
-				 * or the targetlist, so we actually do not need the FROM
-				 * table at all!  We can just throw away the plan-so-far and
-				 * generate a Result node.  This is a sufficiently unusual
-				 * corner case that it's not worth contorting the structure of
-				 * this routine to avoid having to generate the plan in the
-				 * first place.
+				 * either 0 or 1 row for each grouping set depending on whether
+				 * HAVING succeeds.  Furthermore, there cannot be any variables
+				 * in either HAVING or the targetlist, so we actually do not
+				 * need the FROM table at all!  We can just throw away the
+				 * plan-so-far and generate a Result node.  This is a
+				 * sufficiently unusual corner case that it's not worth
+				 * contorting the structure of this routine to avoid having to
+				 * generate the plan in the first place.
 				 */
 				result_plan = (Plan *) make_result(root,
 												   tlist,
 												   parse->havingQual,
 												   NULL);
+
+				/*
+				 * Doesn't seem worthwhile writing code to cons up a
+				 * generate_series or a values scan to emit multiple rows.
+				 * Instead just clone the result in an Append.
+				 */
+				if (nrows > 1)
+				{
+					List   *plans = list_make1(result_plan);
+
+					while (--nrows > 0)
+						plans = lappend(plans, copyObject(result_plan));
+
+					result_plan = (Plan *) make_append(plans, tlist);
+				}
 			}
 		}						/* end of non-minmax-aggregate case */
 
+		/* Record grouping_map based on final groupColIdx, for setrefs */
+
+		if (parse->groupingSets)
+		{
+			AttrNumber *grouping_map = palloc0(sizeof(AttrNumber) * (maxref + 1));
+			ListCell   *lc;
+			int			i = 0;
+
+			foreach(lc, parse->groupClause)
+			{
+				SortGroupClause *gc = lfirst(lc);
+				grouping_map[gc->tleSortGroupRef] = groupColIdx[i++];
+			}
+
+			root->groupColIdx = groupColIdx;
+			root->grouping_map = grouping_map;
+		}
+
 		/*
 		 * Since each window function could require a different sort order, we
 		 * stack up a WindowAgg node for each window, with sort steps between
@@ -1852,7 +2086,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		 * result was already mostly unique).  If not, use the number of
 		 * distinct-groups calculated previously.
 		 */
-		if (parse->groupClause || root->hasHavingQual || parse->hasAggs)
+		if (parse->groupClause || parse->groupingSets || root->hasHavingQual || parse->hasAggs)
 			dNumDistinctRows = result_plan->plan_rows;
 		else
 			dNumDistinctRows = dNumGroups;
@@ -1893,6 +2127,8 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 								 extract_grouping_cols(parse->distinctClause,
 													result_plan->targetlist),
 								 extract_grouping_ops(parse->distinctClause),
+											NIL,
+											NULL,
 											numDistinctRows,
 											result_plan);
 			/* Hashed aggregation produces randomly-ordered results */
@@ -2526,19 +2762,38 @@ limit_needed(Query *parse)
  *
  * Note: we need no comparable processing of the distinctClause because
  * the parser already enforced that that matches ORDER BY.
+ *
+ * For grouping sets, the order of items is instead forced to agree with that
+ * of the grouping set (and items not in the grouping set are skipped). The
+ * work of sorting the order of grouping set elements to match the ORDER BY if
+ * possible is done elsewhere.
  */
-static void
-preprocess_groupclause(PlannerInfo *root)
+static List *
+preprocess_groupclause(PlannerInfo *root, List *force)
 {
 	Query	   *parse = root->parse;
-	List	   *new_groupclause;
+	List	   *new_groupclause = NIL;
 	bool		partial_match;
 	ListCell   *sl;
 	ListCell   *gl;
 
+	/* For grouping sets, we need to force the ordering */
+	if (force)
+	{
+		foreach(sl, force)
+		{
+			Index ref = lfirst_int(sl);
+			SortGroupClause *cl = get_sortgroupref_clause(ref, parse->groupClause);
+
+			new_groupclause = lappend(new_groupclause, cl);
+		}
+
+		return new_groupclause;
+	}
+
 	/* If no ORDER BY, nothing useful to do here */
 	if (parse->sortClause == NIL)
-		return;
+		return parse->groupClause;
 
 	/*
 	 * Scan the ORDER BY clause and construct a list of matching GROUP BY
@@ -2546,7 +2801,6 @@ preprocess_groupclause(PlannerInfo *root)
 	 *
 	 * This code assumes that the sortClause contains no duplicate items.
 	 */
-	new_groupclause = NIL;
 	foreach(sl, parse->sortClause)
 	{
 		SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
@@ -2570,7 +2824,7 @@ preprocess_groupclause(PlannerInfo *root)
 
 	/* If no match at all, no point in reordering GROUP BY */
 	if (new_groupclause == NIL)
-		return;
+		return parse->groupClause;
 
 	/*
 	 * Add any remaining GROUP BY items to the new list, but only if we were
@@ -2587,15 +2841,446 @@ preprocess_groupclause(PlannerInfo *root)
 		if (list_member_ptr(new_groupclause, gc))
 			continue;			/* it matched an ORDER BY item */
 		if (partial_match)
-			return;				/* give up, no common sort possible */
+			return parse->groupClause;	/* give up, no common sort possible */
 		if (!OidIsValid(gc->sortop))
-			return;				/* give up, GROUP BY can't be sorted */
+			return parse->groupClause;	/* give up, GROUP BY can't be sorted */
 		new_groupclause = lappend(new_groupclause, gc);
 	}
 
 	/* Success --- install the rearranged GROUP BY list */
 	Assert(list_length(parse->groupClause) == list_length(new_groupclause));
-	parse->groupClause = new_groupclause;
+	return new_groupclause;
+}
+
+
+/*
+ * We want to produce the absolute minimum possible number of lists here to
+ * avoid excess sorts. Fortunately, there is an algorithm for this; the problem
+ * of finding the minimal partition of a poset into chains (which is what we
+ * need, taking the list of grouping sets as a poset ordered by set inclusion)
+ * can be mapped to the problem of finding the maximum cardinality matching on
+ * a bipartite graph, which is solvable in polynomial time with a worst case of
+ * no worse than O(n^2.5) and usually much better. Since our N is at most 4096,
+ * we don't need to consider fallbacks to heuristic or approximate methods.
+ * (Planning time for a 12-d cube is under half a second on my modest system
+ * even with optimization off and assertions on.)
+ *
+ * We use the Hopcroft-Karp algorithm for the graph matching; it seems to work
+ * well enough for our purposes.  This implementation is based on pseudocode
+ * found at:
+ *
+ * http://en.wikipedia.org/w/index.php?title=Hopcroft%E2%80%93Karp_algorithm&oldid=593898016
+ *
+ * This implementation uses the same indices for elements of U and V (the two
+ * halves of the graph) because in our case they are always the same size, and
+ * we always know whether an index represents a u or a v. Index 0 is reserved
+ * for the NIL node.
+ */
+
+struct hk_state
+{
+	int			graph_size;		/* size of half the graph plus NIL node */
+	int			matching;
+	short	  **adjacency;		/* adjacency[u] = [n, v1,v2,v3,...,vn] */
+	short	   *pair_uv;		/* pair_uv[u] -> v */
+	short	   *pair_vu;		/* pair_vu[v] -> u */
+	float	   *distance;		/* distance[u], float so we can have +inf */
+	short	   *queue;			/* queue storage for breadth search */
+};
+
+static bool
+hk_breadth_search(struct hk_state *state)
+{
+	int			gsize = state->graph_size;
+	short	   *queue = state->queue;
+	float	   *distance = state->distance;
+	int			qhead = 0;		/* we never enqueue any node more than once */
+	int			qtail = 0;		/* so don't have to worry about wrapping */
+	int			u;
+
+	distance[0] = INFINITY;
+
+	for (u = 1; u < gsize; ++u)
+	{
+		if (state->pair_uv[u] == 0)
+		{
+			distance[u] = 0;
+			queue[qhead++] = u;
+		}
+		else
+			distance[u] = INFINITY;
+	}
+
+	while (qtail < qhead)
+	{
+		u = queue[qtail++];
+
+		if (distance[u] < distance[0])
+		{
+			short  *u_adj = state->adjacency[u];
+			int		i = u_adj ? u_adj[0] : 0;
+
+			for (; i > 0; --i)
+			{
+				int	u_next = state->pair_vu[u_adj[i]];
+
+				if (isinf(distance[u_next]))
+				{
+					distance[u_next] = 1 + distance[u];
+					queue[qhead++] = u_next;
+					Assert(qhead <= gsize+1);
+				}
+			}
+		}
+	}
+
+	return !isinf(distance[0]);
+}
+
+static bool
+hk_depth_search(struct hk_state *state, int u, int depth)
+{
+	float	   *distance = state->distance;
+	short	   *pair_uv = state->pair_uv;
+	short	   *pair_vu = state->pair_vu;
+	short	   *u_adj = state->adjacency[u];
+	int			i = u_adj ? u_adj[0] : 0;
+
+	if (u == 0)
+		return true;
+
+	if ((depth % 8) == 0)
+		check_stack_depth();
+
+	for (; i > 0; --i)
+	{
+		int		v = u_adj[i];
+
+		if (distance[pair_vu[v]] == distance[u] + 1)
+		{
+			if (hk_depth_search(state, pair_vu[v], depth+1))
+			{
+				pair_vu[v] = u;
+				pair_uv[u] = v;
+				return true;
+			}
+		}
+	}
+
+	distance[u] = INFINITY;
+	return false;
+}
+
+static struct hk_state *
+hk_match(int graph_size, short **adjacency)
+{
+	struct hk_state *state = palloc(sizeof(struct hk_state));
+
+	state->graph_size = graph_size;
+	state->matching = 0;
+	state->adjacency = adjacency;
+	state->pair_uv = palloc0(graph_size * sizeof(short));
+	state->pair_vu = palloc0(graph_size * sizeof(short));
+	state->distance = palloc(graph_size * sizeof(float));
+	state->queue = palloc((graph_size + 2) * sizeof(short));
+
+	while (hk_breadth_search(state))
+	{
+		int		u;
+
+		for (u = 1; u < graph_size; ++u)
+			if (state->pair_uv[u] == 0)
+				if (hk_depth_search(state, u, 1))
+					state->matching++;
+
+		CHECK_FOR_INTERRUPTS();		/* just in case */
+	}
+
+	return state;
+}
+
+static void
+hk_free(struct hk_state *state)
+{
+	/* adjacency matrix is treated as owned by the caller */
+	pfree(state->pair_uv);
+	pfree(state->pair_vu);
+	pfree(state->distance);
+	pfree(state->queue);
+	pfree(state);
+}
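For anyone who wants to poke at the matching code outside the backend, here is a minimal standalone sketch of the same Hopcroft-Karp search. It keeps the patch's adjacency convention (adjacency[u] = [n, v1, ..., vn], with index 0 reserved for the NIL node) but substitutes plain calloc/malloc for palloc and drops the stack-depth and interrupt checks; the driver function returning the matching size is illustrative, not part of the patch.

```c
#include <assert.h>
#include <math.h>
#include <stdbool.h>
#include <stdlib.h>

/* Same layout as the patch: adjacency[u] = [n, v1, ..., vn]; index 0 = NIL. */
struct hk_state
{
	int		graph_size;
	short **adjacency;
	short  *pair_uv;
	short  *pair_vu;
	float  *distance;		/* float so we can have +inf */
	short  *queue;
};

static bool
hk_breadth_search(struct hk_state *s)
{
	int		qhead = 0, qtail = 0, u;

	s->distance[0] = INFINITY;
	for (u = 1; u < s->graph_size; ++u)
	{
		if (s->pair_uv[u] == 0)
		{
			s->distance[u] = 0;
			s->queue[qhead++] = u;
		}
		else
			s->distance[u] = INFINITY;
	}

	while (qtail < qhead)
	{
		u = s->queue[qtail++];
		if (s->distance[u] < s->distance[0])
		{
			short  *adj = s->adjacency[u];
			int		i = adj ? adj[0] : 0;

			for (; i > 0; --i)
			{
				int		u_next = s->pair_vu[adj[i]];

				if (isinf(s->distance[u_next]))
				{
					s->distance[u_next] = 1 + s->distance[u];
					s->queue[qhead++] = u_next;
				}
			}
		}
	}
	return !isinf(s->distance[0]);
}

static bool
hk_depth_search(struct hk_state *s, int u)
{
	short  *adj;
	int		i;

	if (u == 0)
		return true;
	adj = s->adjacency[u];
	for (i = adj ? adj[0] : 0; i > 0; --i)
	{
		int		v = adj[i];

		if (s->distance[s->pair_vu[v]] == s->distance[u] + 1 &&
			hk_depth_search(s, s->pair_vu[v]))
		{
			s->pair_vu[v] = u;
			s->pair_uv[u] = v;
			return true;
		}
	}
	s->distance[u] = INFINITY;
	return false;
}

/* Returns the size of a maximum matching. */
int
hk_match(int graph_size, short **adjacency)
{
	struct hk_state s;
	int		matching = 0, u;

	s.graph_size = graph_size;
	s.adjacency = adjacency;
	s.pair_uv = calloc(graph_size, sizeof(short));
	s.pair_vu = calloc(graph_size, sizeof(short));
	s.distance = malloc(graph_size * sizeof(float));
	s.queue = malloc(graph_size * sizeof(short));

	while (hk_breadth_search(&s))
	{
		for (u = 1; u < graph_size; ++u)
			if (s.pair_uv[u] == 0 && hk_depth_search(&s, u))
				++matching;
	}

	free(s.pair_uv);
	free(s.pair_vu);
	free(s.distance);
	free(s.queue);
	return matching;
}
```

On the CUBE(a,b) poset, the non-empty sets (a), (b), (a,b) get indices 1..3; only (a,b) has subset edges, so the maximum matching has size 1 and the sets fall into num_sets - matching = 2 chains.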
+
+/*
+ * Extract lists of grouping sets that can be implemented using a single
+ * rollup-type aggregate pass each. Returns a list of lists of grouping sets.
+ *
+ * Input must be sorted with smallest sets first. Result has each sublist
+ * sorted with smallest sets first.
+ */
+
+static List *
+extract_rollup_sets(List *groupingSets)
+{
+	int			num_sets_raw = list_length(groupingSets);
+	int			num_empty = 0;
+	int			num_sets = 0;		/* distinct sets */
+	int			num_chains = 0;
+	List	   *result = NIL;
+	List	  **results;
+	List	  **orig_sets;
+	Bitmapset **set_masks;
+	int		   *chains;
+	short	  **adjacency;
+	short	   *adjacency_buf;
+	struct hk_state *state;
+	int			i;
+	int			j;
+	int			j_size;
+	ListCell   *lc1 = list_head(groupingSets);
+	ListCell   *lc;
+
+	/*
+	 * Start by stripping out empty sets.  The algorithm doesn't require this,
+	 * but the planner currently needs all empty sets to be returned in the
+	 * first list, so we strip them here and add them back after.
+	 */
+
+	while (lc1 && lfirst(lc1) == NIL)
+	{
+		++num_empty;
+		lc1 = lnext(lc1);
+	}
+
+	/* bail out now if it turns out that all we had were empty sets. */
+
+	if (!lc1)
+		return list_make1(groupingSets);
+
+	/*
+	 * We don't strictly need to remove duplicate sets here, but if we
+	 * don't, they tend to become scattered through the result, which is
+	 * a bit confusing (and irritating if we ever decide to optimize them
+	 * out). So we remove them here and add them back after.
+	 *
+	 * For each non-duplicate set, we fill in the following:
+	 *
+	 * orig_sets[i] = list of the original set lists
+	 * set_masks[i] = bitmapset for testing inclusion
+	 * adjacency[i] = array [n, v1, v2, ... vn] of adjacency indices
+	 *
+	 * chains[i] will be the result group this set is assigned to.
+	 *
+	 * We index all of these from 1 rather than 0 because it is convenient
+	 * to leave 0 free for the NIL node in the graph algorithm.
+	 */
+
+	orig_sets = palloc0((num_sets_raw + 1) * sizeof(List*));
+	set_masks = palloc0((num_sets_raw + 1) * sizeof(Bitmapset *));
+	adjacency = palloc0((num_sets_raw + 1) * sizeof(short *));
+	adjacency_buf = palloc((num_sets_raw + 1) * sizeof(short));
+
+	j_size = 0;
+	j = 0;
+	i = 1;
+
+	for_each_cell(lc, lc1)
+	{
+		List	   *candidate = lfirst(lc);
+		Bitmapset  *candidate_set = NULL;
+		ListCell   *lc2;
+		int			dup_of = 0;
+
+		foreach(lc2, candidate)
+		{
+			candidate_set = bms_add_member(candidate_set, lfirst_int(lc2));
+		}
+
+		/* we can only be a dup if we're the same length as a previous set */
+		if (j_size == list_length(candidate))
+		{
+			int		k;
+			for (k = j; k < i; ++k)
+			{
+				if (bms_equal(set_masks[k], candidate_set))
+				{
+					dup_of = k;
+					break;
+				}
+			}
+		}
+		else if (j_size < list_length(candidate))
+		{
+			j_size = list_length(candidate);
+			j = i;
+		}
+
+		if (dup_of > 0)
+		{
+			orig_sets[dup_of] = lappend(orig_sets[dup_of], candidate);
+			bms_free(candidate_set);
+		}
+		else
+		{
+			int		k;
+			int		n_adj = 0;
+
+			orig_sets[i] = list_make1(candidate);
+			set_masks[i] = candidate_set;
+
+			/* fill in adjacency list; no need to compare equal-size sets */
+
+			for (k = j - 1; k > 0; --k)
+			{
+				if (bms_is_subset(set_masks[k], candidate_set))
+					adjacency_buf[++n_adj] = k;
+			}
+
+			if (n_adj > 0)
+			{
+				adjacency_buf[0] = n_adj;
+				adjacency[i] = palloc((n_adj + 1) * sizeof(short));
+				memcpy(adjacency[i], adjacency_buf, (n_adj + 1) * sizeof(short));
+			}
+			else
+				adjacency[i] = NULL;
+
+			++i;
+		}
+	}
+
+	num_sets = i - 1;
+
+	/*
+	 * Apply the matching algorithm to do the work.
+	 */
+
+	state = hk_match(num_sets + 1, adjacency);
+
+	/*
+	 * Now, the state->pair* fields have the info we need to assign sets to
+	 * chains. Two sets (u,v) belong to the same chain if pair_uv[u] = v or
+	 * pair_vu[v] = u (both will be true, but we check both so that we can do
+ * it in one pass).
+	 */
+
+	chains = palloc0((num_sets + 1) * sizeof(int));
+
+	for (i = 1; i <= num_sets; ++i)
+	{
+		int u = state->pair_vu[i];
+		int v = state->pair_uv[i];
+
+		if (u > 0 && u < i)
+			chains[i] = chains[u];
+		else if (v > 0 && v < i)
+			chains[i] = chains[v];
+		else
+			chains[i] = ++num_chains;
+	}
+
+	/* build result lists. */
+
+	results = palloc0((num_chains + 1) * sizeof(List*));
+
+	for (i = 1; i <= num_sets; ++i)
+	{
+		int c = chains[i];
+
+		Assert(c > 0);
+
+		results[c] = list_concat(results[c], orig_sets[i]);
+	}
+
+	/* push any empty sets back on the first list. */
+
+	while (num_empty-- > 0)
+		results[1] = lcons(NIL, results[1]);
+
+	/* make result list */
+
+	for (i = 1; i <= num_chains; ++i)
+		result = lappend(result, results[i]);
+
+	/*
+	 * Free all the things.
+	 *
+	 * (This is over-fussy for small sets but for large sets we could have tied
+	 * up a nontrivial amount of memory.)
+	 */
+
+	hk_free(state);
+	pfree(results);
+	pfree(chains);
+	for (i = 1; i <= num_sets; ++i)
+		if (adjacency[i])
+			pfree(adjacency[i]);
+	pfree(adjacency);
+	pfree(adjacency_buf);
+	pfree(orig_sets);
+	for (i = 1; i <= num_sets; ++i)
+		bms_free(set_masks[i]);
+	pfree(set_masks);
+
+	return result;
+}
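As a concrete illustration of how the adjacency lists get built, here is a hedged standalone sketch that stands in a plain unsigned bitmask for Bitmapset and tests strict subset inclusion directly. (The patch instead skips equal-size comparisons via its j/j_size bookkeeping, which is equivalent once duplicates have been removed; the edge order here is ascending rather than the patch's descending scan, which does not affect the matching.)

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Build adjacency lists in the patch's format -- adjacency[i] points to
 * [n, v1, ..., vn] -- from grouping sets given as bitmasks of column refs,
 * sorted smallest-first with duplicates already removed.  An edge i -> k
 * exists when set k is a strict subset of set i.
 */
short **
build_adjacency(const unsigned *masks, int nsets)
{
	short **adjacency = calloc(nsets + 1, sizeof(short *));
	short  *buf = malloc((nsets + 1) * sizeof(short));
	int		i, k;

	for (i = 1; i <= nsets; ++i)
	{
		int		n_adj = 0;

		for (k = 1; k < i; ++k)
		{
			/* strict subset: all of k's bits in i, and the sets differ */
			if ((masks[k] & ~masks[i]) == 0 && masks[k] != masks[i])
				buf[++n_adj] = (short) k;
		}

		if (n_adj > 0)
		{
			buf[0] = (short) n_adj;
			adjacency[i] = malloc((n_adj + 1) * sizeof(short));
			memcpy(adjacency[i], buf, (n_adj + 1) * sizeof(short));
		}
	}

	free(buf);
	return adjacency;
}
```

For CUBE(a,b) with a as bit 0 and b as bit 1, the non-empty sets in smallest-first order are (a)=1, (b)=2, (a,b)=3; only index 3 ends up with an adjacency list, containing both smaller sets.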
+
+/*
+ * Reorder the elements of a list of grouping sets such that they have correct
+ * prefix relationships.
+ *
+ * The input must be ordered with smallest sets first; the result is returned
+ * with largest sets first.
+ *
+ * If we're passed in a sortclause, we follow its order of columns to the
+ * extent possible, to minimize the chance that we add unnecessary sorts.
+ * (We're trying here to ensure that GROUPING SETS ((a,b,c),(c)) ORDER BY c,b,a
+ * gets implemented in one pass.)
+ */
+static List *
+reorder_grouping_sets(List *groupingsets, List *sortclause)
+{
+	ListCell   *lc;
+	ListCell   *lc2;
+	List	   *previous = NIL;
+	List	   *result = NIL;
+
+	foreach(lc, groupingsets)
+	{
+		List   *candidate = lfirst(lc);
+		List   *new_elems = list_difference_int(candidate, previous);
+
+		if (list_length(new_elems) > 0)
+		{
+			while (list_length(sortclause) > list_length(previous) &&
+				   list_length(new_elems) > 0)
+			{
+				SortGroupClause *sc = list_nth(sortclause, list_length(previous));
+				int ref = sc->tleSortGroupRef;
+				if (list_member_int(new_elems, ref))
+				{
+					previous = lappend_int(previous, ref);
+					new_elems = list_delete_int(new_elems, ref);
+				}
+				else
+				{
+					/* diverged from the sortclause; give up on it */
+					sortclause = NIL;
+					break;
+				}
+			}
+
+			foreach(lc2, new_elems)
+			{
+				previous = lappend_int(previous, lfirst_int(lc2));
+			}
+		}
+
+		result = lcons(list_copy(previous), result);
+		list_free(new_elems);
+	}
+
+	list_free(previous);
+
+	return result;
 }
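The prefix-extension step above can be sketched standalone with plain int arrays in place of List cells; extend_prefix is an illustrative name, not part of the patch. It follows the sortclause only while the current set still has new elements to place, abandoning it only on a genuine divergence, which is what makes the function's ((a,b,c),(c)) ORDER BY c,b,a example come out as a single (c,b,a) ordering.

```c
#include <assert.h>
#include <stdbool.h>

#define MAXCOLS 8

/*
 * Append the new elements of set[] (nset refs; sets are handled smallest
 * first) onto prev[], following sortclause[] order while possible.
 * Returns the new length of prev[]; *nsort is zeroed if we diverge from
 * the sortclause.
 */
int
extend_prefix(int *prev, int nprev, const int *set, int nset,
			  const int *sortclause, int *nsort)
{
	int		new_elems[MAXCOLS];
	int		n_new = 0;
	int		i, j;

	/* new_elems = set minus prev, preserving the set's own order */
	for (i = 0; i < nset; ++i)
	{
		bool	found = false;

		for (j = 0; j < nprev; ++j)
			if (prev[j] == set[i])
				found = true;
		if (!found)
			new_elems[n_new++] = set[i];
	}

	/* follow the sortclause while its next ref is among the new elements */
	while (*nsort > nprev && n_new > 0)
	{
		int		ref = sortclause[nprev];
		int		k = -1;

		for (i = 0; i < n_new; ++i)
			if (new_elems[i] == ref)
				k = i;
		if (k < 0)
		{
			*nsort = 0;			/* diverged; give up on the sortclause */
			break;
		}
		prev[nprev++] = ref;
		for (i = k; i < n_new - 1; ++i)
			new_elems[i] = new_elems[i + 1];
		--n_new;
	}

	/* append whatever is left in candidate order */
	for (i = 0; i < n_new; ++i)
		prev[nprev++] = new_elems[i];
	return nprev;
}
```

With refs a=1, b=2, c=3 and sortclause (c,b,a), feeding the sets (c) then (a,b,c) yields the prefixes (c) and (c,b,a), so both grouping sets share one sort order.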
 
 /*
@@ -2614,11 +3299,11 @@ standard_qp_callback(PlannerInfo *root, void *extra)
 	 * sortClause is certainly sort-able, but GROUP BY and DISTINCT might not
 	 * be, in which case we just leave their pathkeys empty.
 	 */
-	if (parse->groupClause &&
-		grouping_is_sortable(parse->groupClause))
+	if (qp_extra->groupClause &&
+		grouping_is_sortable(qp_extra->groupClause))
 		root->group_pathkeys =
 			make_pathkeys_for_sortclauses(root,
-										  parse->groupClause,
+										  qp_extra->groupClause,
 										  tlist);
 	else
 		root->group_pathkeys = NIL;
@@ -3043,7 +3728,7 @@ make_subplanTargetList(PlannerInfo *root,
 	 * If we're not grouping or aggregating, there's nothing to do here;
 	 * query_planner should receive the unmodified target list.
 	 */
-	if (!parse->hasAggs && !parse->groupClause && !root->hasHavingQual &&
+	if (!parse->hasAggs && !parse->groupClause && !parse->groupingSets && !root->hasHavingQual &&
 		!parse->hasWindowFuncs)
 	{
 		*need_tlist_eval = true;
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index 7703946..6aa9fc1 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -68,6 +68,12 @@ typedef struct
 	int			rtoffset;
 } fix_upper_expr_context;
 
+typedef struct
+{
+	PlannerInfo *root;
+	Bitmapset   *groupedcols;
+} set_group_vars_context;
+
 /*
  * Check if a Const node is a regclass value.  We accept plain OID too,
  * since a regclass Const will get folded to that type if it's an argument
@@ -134,6 +140,8 @@ static List *set_returning_clause_references(PlannerInfo *root,
 static bool fix_opfuncids_walker(Node *node, void *context);
 static bool extract_query_dependencies_walker(Node *node,
 								  PlannerInfo *context);
+static void set_group_vars(PlannerInfo *root, Agg *agg);
+static Node *set_group_vars_mutator(Node *node, set_group_vars_context *context);
 
 
 /*****************************************************************************
@@ -661,6 +669,17 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
 			}
 			break;
 		case T_Agg:
+			if (((Agg *) plan)->aggstrategy == AGG_CHAINED)
+			{
+				/* chained agg does not evaluate tlist */
+				set_dummy_tlist_references(plan, rtoffset);
+			}
+			else
+			{
+				set_upper_references(root, plan, rtoffset);
+				set_group_vars(root, (Agg *) plan);
+			}
+			break;
 		case T_Group:
 			set_upper_references(root, plan, rtoffset);
 			break;
@@ -1073,6 +1092,7 @@ copyVar(Var *var)
  * We must look up operator opcode info for OpExpr and related nodes,
  * add OIDs from regclass Const nodes into root->glob->relationOids, and
  * add catalog TIDs for user-defined functions into root->glob->invalItems.
+ * We also fill in column index lists for GROUPING() expressions.
  *
  * We assume it's okay to update opcode info in-place.  So this could possibly
  * scribble on the planner's input data structures, but it's OK.
@@ -1136,6 +1156,31 @@ fix_expr_common(PlannerInfo *root, Node *node)
 				lappend_oid(root->glob->relationOids,
 							DatumGetObjectId(con->constvalue));
 	}
+	else if (IsA(node, GroupingFunc))
+	{
+		GroupingFunc *g = (GroupingFunc *) node;
+		AttrNumber *refmap = root->grouping_map;
+
+		/* If there are no grouping sets, we don't need this. */
+
+		Assert(refmap || g->cols == NIL);
+
+		if (refmap)
+		{
+			ListCell   *lc;
+			List	   *cols = NIL;
+
+			foreach(lc, g->refs)
+			{
+				cols = lappend_int(cols, refmap[lfirst_int(lc)]);
+			}
+
+			Assert(!g->cols || equal(cols, g->cols));
+
+			if (!g->cols)
+				g->cols = cols;
+		}
+	}
 }
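The refmap fill-in above is a pure index translation from parse-time sortgroup refs to plan-time grouping column positions. A hedged standalone sketch (fill_grouping_cols is an illustrative name, and plain short/int arrays stand in for AttrNumber arrays and List cells):

```c
#include <assert.h>

/*
 * Translate the sortgroup refs mentioned by a GROUPING() call into grouping
 * column indices using grouping_map, the way fix_expr_common fills g->cols
 * from g->refs.
 */
void
fill_grouping_cols(const short *grouping_map, const int *refs, int nrefs,
				   int *cols)
{
	int		i;

	for (i = 0; i < nrefs; ++i)
		cols[i] = grouping_map[refs[i]];
}
```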
 
 /*
@@ -1263,6 +1308,98 @@ fix_scan_expr_walker(Node *node, fix_scan_expr_context *context)
 								  (void *) context);
 }
 
+
+/*
+ * set_group_vars
+ *    Modify any Var references in the target list of a non-trivial
+ *    (i.e. contains grouping sets) Agg node to use GroupedVar instead,
+ *    which will conditionally replace them with nulls at runtime.
+ */
+static void
+set_group_vars(PlannerInfo *root, Agg *agg)
+{
+	set_group_vars_context context;
+	AttrNumber *groupColIdx = root->groupColIdx;
+	int			numCols = list_length(root->parse->groupClause);
+	int 		i;
+	Bitmapset  *cols = NULL;
+
+	if (!agg->groupingSets)
+		return;
+
+	if (!groupColIdx)
+	{
+		Assert(numCols == agg->numCols);
+		groupColIdx = agg->grpColIdx;
+	}
+
+	context.root = root;
+
+	for (i = 0; i < numCols; ++i)
+		cols = bms_add_member(cols, groupColIdx[i]);
+
+	context.groupedcols = cols;
+
+	agg->plan.targetlist = (List *) set_group_vars_mutator((Node *) agg->plan.targetlist,
+														   &context);
+	agg->plan.qual = (List *) set_group_vars_mutator((Node *) agg->plan.qual,
+													 &context);
+}
+
+static Node *
+set_group_vars_mutator(Node *node, set_group_vars_context *context)
+{
+	if (node == NULL)
+		return NULL;
+	if (IsA(node, Var))
+	{
+		Var *var = (Var *) node;
+
+		if (var->varno == OUTER_VAR
+			&& bms_is_member(var->varattno, context->groupedcols))
+		{
+			var = copyVar(var);
+			var->xpr.type = T_GroupedVar;
+		}
+
+		return (Node *) var;
+	}
+	else if (IsA(node, Aggref))
+	{
+		/*
+		 * don't recurse into the arguments or filter of Aggrefs, since they
+		 * see the values prior to grouping.  But do recurse into direct args
+		 * if any.
+		 */
+
+		if (((Aggref *)node)->aggdirectargs != NIL)
+		{
+			Aggref *newnode = palloc(sizeof(Aggref));
+
+			memcpy(newnode, node, sizeof(Aggref));
+
+			newnode->aggdirectargs
+				= (List *) expression_tree_mutator((Node *) newnode->aggdirectargs,
+												   set_group_vars_mutator,
+												   (void *) context);
+
+			return (Node *) newnode;
+		}
+
+		return node;
+	}
+	else if (IsA(node, GroupingFunc))
+	{
+		/*
+		 * GroupingFuncs don't see the values at all.
+		 */
+		return node;
+	}
+	return expression_tree_mutator(node, set_group_vars_mutator,
+								   (void *) context);
+}
+
+
 /*
  * set_join_references
  *	  Modify the target list and quals of a join node to reference its
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index 78fb6b1..690407c 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -79,7 +79,8 @@ static Node *process_sublinks_mutator(Node *node,
 static Bitmapset *finalize_plan(PlannerInfo *root,
 			  Plan *plan,
 			  Bitmapset *valid_params,
-			  Bitmapset *scan_params);
+			  Bitmapset *scan_params,
+			  Agg *agg_chain_head);
 static bool finalize_primnode(Node *node, finalize_primnode_context *context);
 
 
@@ -336,6 +337,48 @@ replace_outer_agg(PlannerInfo *root, Aggref *agg)
 }
 
 /*
+ * Generate a Param node to replace the given GroupingFunc expression which is
+ * expected to have agglevelsup > 0 (ie, it is not local).
+ */
+static Param *
+replace_outer_grouping(PlannerInfo *root, GroupingFunc *grp)
+{
+	Param	   *retval;
+	PlannerParamItem *pitem;
+	Index		levelsup;
+
+	Assert(grp->agglevelsup > 0 && grp->agglevelsup < root->query_level);
+
+	/* Find the query level the GroupingFunc belongs to */
+	for (levelsup = grp->agglevelsup; levelsup > 0; levelsup--)
+		root = root->parent_root;
+
+	/*
+	 * It does not seem worthwhile to try to match duplicate outer aggs. Just
+	 * make a new slot every time.
+	 */
+	grp = (GroupingFunc *) copyObject(grp);
+	IncrementVarSublevelsUp((Node *) grp, -((int) grp->agglevelsup), 0);
+	Assert(grp->agglevelsup == 0);
+
+	pitem = makeNode(PlannerParamItem);
+	pitem->item = (Node *) grp;
+	pitem->paramId = root->glob->nParamExec++;
+
+	root->plan_params = lappend(root->plan_params, pitem);
+
+	retval = makeNode(Param);
+	retval->paramkind = PARAM_EXEC;
+	retval->paramid = pitem->paramId;
+	retval->paramtype = exprType((Node *) grp);
+	retval->paramtypmod = -1;
+	retval->paramcollid = InvalidOid;
+	retval->location = grp->location;
+
+	return retval;
+}
+
+/*
  * Generate a new Param node that will not conflict with any other.
  *
  * This is used to create Params representing subplan outputs.
@@ -1490,14 +1533,16 @@ simplify_EXISTS_query(PlannerInfo *root, Query *query)
 {
 	/*
 	 * We don't try to simplify at all if the query uses set operations,
-	 * aggregates, modifying CTEs, HAVING, OFFSET, or FOR UPDATE/SHARE; none
-	 * of these seem likely in normal usage and their possible effects are
-	 * complex.  (Note: we could ignore an "OFFSET 0" clause, but that
-	 * traditionally is used as an optimization fence, so we don't.)
+	 * aggregates, grouping sets, modifying CTEs, HAVING, OFFSET, or FOR
+	 * UPDATE/SHARE; none of these seem likely in normal usage and their
+	 * possible effects are complex.  (Note: we could ignore an "OFFSET 0"
+	 * clause, but that traditionally is used as an optimization fence, so we
+	 * don't.)
 	 */
 	if (query->commandType != CMD_SELECT ||
 		query->setOperations ||
 		query->hasAggs ||
+		query->groupingSets ||
 		query->hasWindowFuncs ||
 		query->hasModifyingCTE ||
 		query->havingQual ||
@@ -1847,6 +1892,11 @@ replace_correlation_vars_mutator(Node *node, PlannerInfo *root)
 		if (((Aggref *) node)->agglevelsup > 0)
 			return (Node *) replace_outer_agg(root, (Aggref *) node);
 	}
+	if (IsA(node, GroupingFunc))
+	{
+		if (((GroupingFunc *) node)->agglevelsup > 0)
+			return (Node *) replace_outer_grouping(root, (GroupingFunc *) node);
+	}
 	return expression_tree_mutator(node,
 								   replace_correlation_vars_mutator,
 								   (void *) root);
@@ -2077,7 +2127,7 @@ SS_finalize_plan(PlannerInfo *root, Plan *plan, bool attach_initplans)
 	/*
 	 * Now recurse through plan tree.
 	 */
-	(void) finalize_plan(root, plan, valid_params, NULL);
+	(void) finalize_plan(root, plan, valid_params, NULL, NULL);
 
 	bms_free(valid_params);
 
@@ -2128,7 +2178,7 @@ SS_finalize_plan(PlannerInfo *root, Plan *plan, bool attach_initplans)
  */
 static Bitmapset *
 finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
-			  Bitmapset *scan_params)
+			  Bitmapset *scan_params, Agg *agg_chain_head)
 {
 	finalize_primnode_context context;
 	int			locally_added_param;
@@ -2343,7 +2393,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2359,7 +2410,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2375,7 +2427,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2391,7 +2444,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2407,7 +2461,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2474,8 +2529,30 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 							  &context);
 			break;
 
-		case T_Hash:
 		case T_Agg:
+			{
+				Agg	   *agg = (Agg *) plan;
+
+				if (agg->aggstrategy == AGG_CHAINED)
+				{
+					Assert(agg_chain_head);
+
+					/*
+					 * our real tlist and qual are the ones in the chain head,
+					 * not the local ones which are dummy for passthrough.
+					 * Fortunately we can call finalize_primnode more than
+					 * once.
+					 */
+
+					finalize_primnode((Node *) agg_chain_head->plan.targetlist, &context);
+					finalize_primnode((Node *) agg_chain_head->plan.qual, &context);
+				}
+				else if (agg->chain_depth > 0)
+					agg_chain_head = agg;
+			}
+			break;
+
+		case T_Hash:
 		case T_Material:
 		case T_Sort:
 		case T_Unique:
@@ -2492,7 +2569,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 	child_params = finalize_plan(root,
 								 plan->lefttree,
 								 valid_params,
-								 scan_params);
+								 scan_params,
+								 agg_chain_head);
 	context.paramids = bms_add_members(context.paramids, child_params);
 
 	if (nestloop_params)
@@ -2501,7 +2579,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 		child_params = finalize_plan(root,
 									 plan->righttree,
 									 bms_union(nestloop_params, valid_params),
-									 scan_params);
+									 scan_params,
+									 agg_chain_head);
 		/* ... and they don't count as parameters used at my level */
 		child_params = bms_difference(child_params, nestloop_params);
 		bms_free(nestloop_params);
@@ -2512,7 +2591,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 		child_params = finalize_plan(root,
 									 plan->righttree,
 									 valid_params,
-									 scan_params);
+									 scan_params,
+									 agg_chain_head);
 	}
 	context.paramids = bms_add_members(context.paramids, child_params);
 
diff --git a/src/backend/optimizer/prep/prepjointree.c b/src/backend/optimizer/prep/prepjointree.c
index 8a0199b..00ae12c 100644
--- a/src/backend/optimizer/prep/prepjointree.c
+++ b/src/backend/optimizer/prep/prepjointree.c
@@ -1297,6 +1297,7 @@ is_simple_subquery(Query *subquery, RangeTblEntry *rte,
 	if (subquery->hasAggs ||
 		subquery->hasWindowFuncs ||
 		subquery->groupClause ||
+		subquery->groupingSets ||
 		subquery->havingQual ||
 		subquery->sortClause ||
 		subquery->distinctClause ||
diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c
index 05f601e..01d1af7 100644
--- a/src/backend/optimizer/prep/prepunion.c
+++ b/src/backend/optimizer/prep/prepunion.c
@@ -268,13 +268,15 @@ recurse_set_operations(Node *setOp, PlannerInfo *root,
 		 */
 		if (pNumGroups)
 		{
-			if (subquery->groupClause || subquery->distinctClause ||
+			if (subquery->groupClause || subquery->groupingSets ||
+				subquery->distinctClause ||
 				subroot->hasHavingQual || subquery->hasAggs)
 				*pNumGroups = subplan->plan_rows;
 			else
 				*pNumGroups = estimate_num_groups(subroot,
 								get_tlist_exprs(subquery->targetList, false),
-												  subplan->plan_rows);
+												  subplan->plan_rows,
+												  NULL);
 		}
 
 		/*
@@ -771,6 +773,8 @@ make_union_unique(SetOperationStmt *op, Plan *plan,
 								 extract_grouping_cols(groupList,
 													   plan->targetlist),
 								 extract_grouping_ops(groupList),
+								 NIL,
+								 NULL,
 								 numGroups,
 								 plan);
 		/* Hashed aggregation produces randomly-ordered results */
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index b340b01..08f52c8 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -4304,6 +4304,7 @@ inline_function(Oid funcid, Oid result_type, Oid result_collid,
 		querytree->jointree->fromlist ||
 		querytree->jointree->quals ||
 		querytree->groupClause ||
+		querytree->groupingSets ||
 		querytree->havingQual ||
 		querytree->windowClause ||
 		querytree->distinctClause ||
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index 1395a21..e88f728 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1338,7 +1338,7 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
 	}
 
 	/* Estimate number of output rows */
-	pathnode->path.rows = estimate_num_groups(root, uniq_exprs, rel->rows);
+	pathnode->path.rows = estimate_num_groups(root, uniq_exprs, rel->rows, NULL);
 	numCols = list_length(uniq_exprs);
 
 	if (all_btree)
diff --git a/src/backend/optimizer/util/tlist.c b/src/backend/optimizer/util/tlist.c
index a1a504b..f702b8c 100644
--- a/src/backend/optimizer/util/tlist.c
+++ b/src/backend/optimizer/util/tlist.c
@@ -395,6 +395,28 @@ get_sortgrouplist_exprs(List *sgClauses, List *targetList)
  *****************************************************************************/
 
 /*
+ * get_sortgroupref_clause
+ *		Find the SortGroupClause matching the given SortGroupRef index,
+ *		and return it.
+ */
+SortGroupClause *
+get_sortgroupref_clause(Index sortref, List *clauses)
+{
+	ListCell   *l;
+
+	foreach(l, clauses)
+	{
+		SortGroupClause *cl = (SortGroupClause *) lfirst(l);
+
+		if (cl->tleSortGroupRef == sortref)
+			return cl;
+	}
+
+	elog(ERROR, "ORDER/GROUP BY expression not found in list");
+	return NULL;				/* keep compiler quiet */
+}
+
+/*
  * extract_grouping_ops - make an array of the equality operator OIDs
  *		for a SortGroupClause list
  */
diff --git a/src/backend/optimizer/util/var.c b/src/backend/optimizer/util/var.c
index 8f86432..0f25539 100644
--- a/src/backend/optimizer/util/var.c
+++ b/src/backend/optimizer/util/var.c
@@ -564,6 +564,30 @@ pull_var_clause_walker(Node *node, pull_var_clause_context *context)
 				break;
 		}
 	}
+	else if (IsA(node, GroupingFunc))
+	{
+		if (((GroupingFunc *) node)->agglevelsup != 0)
+			elog(ERROR, "Upper-level GROUPING found where not expected");
+		switch (context->aggbehavior)
+		{
+			case PVC_REJECT_AGGREGATES:
+				elog(ERROR, "GROUPING found where not expected");
+				break;
+			case PVC_INCLUDE_AGGREGATES:
+				context->varlist = lappend(context->varlist, node);
+				/* we do NOT descend into the contained expression */
+				return false;
+			case PVC_RECURSE_AGGREGATES:
+				/*
+				 * we do NOT descend into the contained expression,
+				 * even if the caller asked for it, because we never
+				 * actually evaluate it - the result is driven entirely
+				 * off the associated GROUP BY clause, so we never need
+				 * to extract the actual Vars here.
+				 */
+				return false;
+		}
+	}
 	else if (IsA(node, PlaceHolderVar))
 	{
 		if (((PlaceHolderVar *) node)->phlevelsup != 0)
diff --git a/src/backend/parser/analyze.c b/src/backend/parser/analyze.c
index a68f2e8..fe93b87 100644
--- a/src/backend/parser/analyze.c
+++ b/src/backend/parser/analyze.c
@@ -964,6 +964,7 @@ transformSelectStmt(ParseState *pstate, SelectStmt *stmt)
 
 	qry->groupClause = transformGroupClause(pstate,
 											stmt->groupClause,
+											&qry->groupingSets,
 											&qry->targetList,
 											qry->sortClause,
 											EXPR_KIND_GROUP_BY,
@@ -1010,7 +1011,7 @@ transformSelectStmt(ParseState *pstate, SelectStmt *stmt)
 	qry->hasSubLinks = pstate->p_hasSubLinks;
 	qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
 	qry->hasAggs = pstate->p_hasAggs;
-	if (pstate->p_hasAggs || qry->groupClause || qry->havingQual)
+	if (pstate->p_hasAggs || qry->groupClause || qry->groupingSets || qry->havingQual)
 		parseCheckAggregates(pstate, qry);
 
 	foreach(l, stmt->lockingClause)
@@ -1470,7 +1471,7 @@ transformSetOperationStmt(ParseState *pstate, SelectStmt *stmt)
 	qry->hasSubLinks = pstate->p_hasSubLinks;
 	qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
 	qry->hasAggs = pstate->p_hasAggs;
-	if (pstate->p_hasAggs || qry->groupClause || qry->havingQual)
+	if (pstate->p_hasAggs || qry->groupClause || qry->groupingSets || qry->havingQual)
 		parseCheckAggregates(pstate, qry);
 
 	foreach(l, lockingClause)
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 36dac29..a19a568 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -366,6 +366,10 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				create_generic_options alter_generic_options
 				relation_expr_list dostmt_opt_list
 
+%type <list>	group_by_list
+%type <node>	group_by_item empty_grouping_set rollup_clause cube_clause
+%type <node>	grouping_sets_clause
+
 %type <list>	opt_fdw_options fdw_options
 %type <defelt>	fdw_option
 
@@ -431,7 +435,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <list>	ExclusionConstraintList ExclusionConstraintElem
 %type <list>	func_arg_list
 %type <node>	func_arg_expr
-%type <list>	row type_list array_expr_list
+%type <list>	row explicit_row implicit_row type_list array_expr_list
 %type <node>	case_expr case_arg when_clause case_default
 %type <list>	when_clause_list
 %type <ival>	sub_type
@@ -553,7 +557,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	CLUSTER COALESCE COLLATE COLLATION COLUMN COMMENT COMMENTS COMMIT
 	COMMITTED CONCURRENTLY CONFIGURATION CONNECTION CONSTRAINT CONSTRAINTS
 	CONTENT_P CONTINUE_P CONVERSION_P COPY COST CREATE
-	CROSS CSV CURRENT_P
+	CROSS CSV CUBE CURRENT_P
 	CURRENT_CATALOG CURRENT_DATE CURRENT_ROLE CURRENT_SCHEMA
 	CURRENT_TIME CURRENT_TIMESTAMP CURRENT_USER CURSOR CYCLE
 
@@ -568,7 +572,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	FALSE_P FAMILY FETCH FILTER FIRST_P FLOAT_P FOLLOWING FOR
 	FORCE FOREIGN FORWARD FREEZE FROM FULL FUNCTION FUNCTIONS
 
-	GLOBAL GRANT GRANTED GREATEST GROUP_P
+	GLOBAL GRANT GRANTED GREATEST GROUP_P GROUPING
 
 	HANDLER HAVING HEADER_P HOLD HOUR_P
 
@@ -602,11 +606,11 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 	RANGE READ REAL REASSIGN RECHECK RECURSIVE REF REFERENCES REFRESH REINDEX
 	RELATIVE_P RELEASE RENAME REPEATABLE REPLACE REPLICA
-	RESET RESTART RESTRICT RETURNING RETURNS REVOKE RIGHT ROLE ROLLBACK
+	RESET RESTART RESTRICT RETURNING RETURNS REVOKE RIGHT ROLE ROLLBACK ROLLUP
 	ROW ROWS RULE
 
 	SAVEPOINT SCHEMA SCROLL SEARCH SECOND_P SECURITY SELECT SEQUENCE SEQUENCES
-	SERIALIZABLE SERVER SESSION SESSION_USER SET SETOF SHARE
+	SERIALIZABLE SERVER SESSION SESSION_USER SET SETS SETOF SHARE
 	SHOW SIMILAR SIMPLE SKIP SMALLINT SNAPSHOT SOME STABLE STANDALONE_P START
 	STATEMENT STATISTICS STDIN STDOUT STORAGE STRICT_P STRIP_P SUBSTRING
 	SYMMETRIC SYSID SYSTEM_P
@@ -664,6 +668,11 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
  * and for NULL so that it can follow b_expr in ColQualList without creating
  * postfix-operator problems.
  *
+ * To support CUBE and ROLLUP in GROUP BY without reserving them, we give them
+ * an explicit priority lower than '(', so that a rule with CUBE '(' will shift
+ * rather than reducing a conflicting rule that takes CUBE as a function name.
+ * Using the same precedence as IDENT seems right for the reasons given above.
+ *
  * The frame_bound productions UNBOUNDED PRECEDING and UNBOUNDED FOLLOWING
  * are even messier: since UNBOUNDED is an unreserved keyword (per spec!),
  * there is no principled way to distinguish these from the productions
@@ -674,7 +683,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
  * blame any funny behavior of UNBOUNDED on the SQL standard, though.
  */
 %nonassoc	UNBOUNDED		/* ideally should have same precedence as IDENT */
-%nonassoc	IDENT NULL_P PARTITION RANGE ROWS PRECEDING FOLLOWING
+%nonassoc	IDENT NULL_P PARTITION RANGE ROWS PRECEDING FOLLOWING CUBE ROLLUP
 %left		Op OPERATOR		/* multi-character ops and user-defined operators */
 %nonassoc	NOTNULL
 %nonassoc	ISNULL
@@ -10136,11 +10145,79 @@ first_or_next: FIRST_P								{ $$ = 0; }
 		;
 
 
+/*
+ * This syntax for group_clause tries to follow the spec quite closely.
+ * However, since we allow arbitrary expressions where the spec allows only
+ * column references, there is an ambiguity between implicit row constructors
+ * (a,b) and lists of column references.
+ *
+ * We handle this by using the a_expr production for what the spec calls
+ * <ordinary grouping set>, which in the spec represents either one column
+ * reference or a parenthesized list of column references. Then, we check the
+ * top node of the a_expr to see if it's an implicit RowExpr, and if so, just
+ * grab and use the list, discarding the node. (this is done in parse analysis,
+ * not here)
+ *
+ * (we abuse the row_format field of RowExpr to distinguish implicit and
+ * explicit row constructors; it's debatable if anyone sanely wants to use them
+ * in a group clause, but if they have a reason to, we make it possible.)
+ *
+ * Each item in the group_clause list is either an expression tree or a
+ * GroupingSet node of some type.
+ */
+
 group_clause:
-			GROUP_P BY expr_list					{ $$ = $3; }
+			GROUP_P BY group_by_list				{ $$ = $3; }
 			| /*EMPTY*/								{ $$ = NIL; }
 		;
 
+group_by_list:
+			group_by_item							{ $$ = list_make1($1); }
+			| group_by_list ',' group_by_item		{ $$ = lappend($1,$3); }
+		;
+
+group_by_item:
+			a_expr									{ $$ = $1; }
+			| empty_grouping_set					{ $$ = $1; }
+			| cube_clause							{ $$ = $1; }
+			| rollup_clause							{ $$ = $1; }
+			| grouping_sets_clause					{ $$ = $1; }
+		;
+
+empty_grouping_set:
+			'(' ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_EMPTY, NIL, @1);
+				}
+		;
+
+/*
+ * These hacks rely on setting precedence of CUBE and ROLLUP below that of '(',
+ * so that they shift in these rules rather than reducing the conflicting
+ * unreserved_keyword rule.
+ */
+
+rollup_clause:
+			ROLLUP '(' expr_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_ROLLUP, $3, @1);
+				}
+		;
+
+cube_clause:
+			CUBE '(' expr_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_CUBE, $3, @1);
+				}
+		;
+
+grouping_sets_clause:
+			GROUPING SETS '(' group_by_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_SETS, $4, @1);
+				}
+		;
+
 having_clause:
 			HAVING a_expr							{ $$ = $2; }
 			| /*EMPTY*/								{ $$ = NULL; }
@@ -11709,15 +11786,33 @@ c_expr:		columnref								{ $$ = $1; }
 					n->location = @1;
 					$$ = (Node *)n;
 				}
-			| row
+			| explicit_row
 				{
 					RowExpr *r = makeNode(RowExpr);
 					r->args = $1;
 					r->row_typeid = InvalidOid;	/* not analyzed yet */
 					r->colnames = NIL;	/* to be filled in during analysis */
+					r->row_format = COERCE_EXPLICIT_CALL; /* abuse */
 					r->location = @1;
 					$$ = (Node *)r;
 				}
+			| implicit_row
+				{
+					RowExpr *r = makeNode(RowExpr);
+					r->args = $1;
+					r->row_typeid = InvalidOid;	/* not analyzed yet */
+					r->colnames = NIL;	/* to be filled in during analysis */
+					r->row_format = COERCE_IMPLICIT_CAST; /* abuse */
+					r->location = @1;
+					$$ = (Node *)r;
+				}
+			| GROUPING '(' expr_list ')'
+			  {
+				  GroupingFunc *g = makeNode(GroupingFunc);
+				  g->args = $3;
+				  g->location = @1;
+				  $$ = (Node *)g;
+			  }
 		;
 
 func_application: func_name '(' ')'
@@ -12467,6 +12562,13 @@ row:		ROW '(' expr_list ')'					{ $$ = $3; }
 			| '(' expr_list ',' a_expr ')'			{ $$ = lappend($2, $4); }
 		;
 
+explicit_row:	ROW '(' expr_list ')'				{ $$ = $3; }
+			| ROW '(' ')'							{ $$ = NIL; }
+		;
+
+implicit_row:	'(' expr_list ',' a_expr ')'		{ $$ = lappend($2, $4); }
+		;
+
 sub_type:	ANY										{ $$ = ANY_SUBLINK; }
 			| SOME									{ $$ = ANY_SUBLINK; }
 			| ALL									{ $$ = ALL_SUBLINK; }
@@ -13196,6 +13298,7 @@ unreserved_keyword:
 			| COPY
 			| COST
 			| CSV
+			| CUBE
 			| CURRENT_P
 			| CURSOR
 			| CYCLE
@@ -13344,6 +13447,7 @@ unreserved_keyword:
 			| REVOKE
 			| ROLE
 			| ROLLBACK
+			| ROLLUP
 			| ROWS
 			| RULE
 			| SAVEPOINT
@@ -13358,6 +13462,7 @@ unreserved_keyword:
 			| SERVER
 			| SESSION
 			| SET
+			| SETS
 			| SHARE
 			| SHOW
 			| SIMPLE
@@ -13441,6 +13546,7 @@ col_name_keyword:
 			| EXTRACT
 			| FLOAT_P
 			| GREATEST
+			| GROUPING
 			| INOUT
 			| INT_P
 			| INTEGER
diff --git a/src/backend/parser/parse_agg.c b/src/backend/parser/parse_agg.c
index 7b0e668..19391d0 100644
--- a/src/backend/parser/parse_agg.c
+++ b/src/backend/parser/parse_agg.c
@@ -42,7 +42,9 @@ typedef struct
 {
 	ParseState *pstate;
 	Query	   *qry;
+	PlannerInfo *root;
 	List	   *groupClauses;
+	List	   *groupClauseCommonVars;
 	bool		have_non_var_grouping;
 	List	  **func_grouped_rels;
 	int			sublevels_up;
@@ -56,11 +58,18 @@ static int check_agg_arguments(ParseState *pstate,
 static bool check_agg_arguments_walker(Node *node,
 						   check_agg_arguments_context *context);
 static void check_ungrouped_columns(Node *node, ParseState *pstate, Query *qry,
-						List *groupClauses, bool have_non_var_grouping,
+						List *groupClauses, List *groupClauseVars,
+						bool have_non_var_grouping,
 						List **func_grouped_rels);
 static bool check_ungrouped_columns_walker(Node *node,
 							   check_ungrouped_columns_context *context);
-
+static void finalize_grouping_exprs(Node *node, ParseState *pstate, Query *qry,
+									List *groupClauses, PlannerInfo *root,
+									bool have_non_var_grouping);
+static bool finalize_grouping_exprs_walker(Node *node,
+							   check_ungrouped_columns_context *context);
+static void check_agglevels_and_constraints(ParseState *pstate, Node *expr);
+static List *expand_groupingset_node(GroupingSet *gs);
 
 /*
  * transformAggregateCall -
@@ -96,10 +105,7 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 	List	   *tdistinct = NIL;
 	AttrNumber	attno = 1;
 	int			save_next_resno;
-	int			min_varlevel;
 	ListCell   *lc;
-	const char *err;
-	bool		errkind;
 
 	if (AGGKIND_IS_ORDERED_SET(agg->aggkind))
 	{
@@ -214,15 +220,96 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 	agg->aggorder = torder;
 	agg->aggdistinct = tdistinct;
 
+	check_agglevels_and_constraints(pstate, (Node *) agg);
+}
+
+/*
+ * transformGroupingFunc -- transform a GROUPING expression
+ *
+ * GROUPING() behaves very much like an aggregate.  Processing of levels
+ * and nesting is done as for aggregates; we set p_hasAggs for these too.
+ */
+Node *
+transformGroupingFunc(ParseState *pstate, GroupingFunc *p)
+{
+	ListCell   *lc;
+	List	   *args = p->args;
+	List	   *result_list = NIL;
+	GroupingFunc *result = makeNode(GroupingFunc);
+
+	if (list_length(args) > 31)
+		ereport(ERROR,
+				(errcode(ERRCODE_TOO_MANY_ARGUMENTS),
+				 errmsg("GROUPING must have fewer than 32 arguments"),
+				 parser_errposition(pstate, p->location)));
+
+	foreach(lc, args)
+	{
+		Node *current_result;
+
+		current_result = transformExpr(pstate, (Node *) lfirst(lc), pstate->p_expr_kind);
+
+		/* acceptability of expressions is checked later */
+
+		result_list = lappend(result_list, current_result);
+	}
+
+	result->args = result_list;
+	result->location = p->location;
+
+	check_agglevels_and_constraints(pstate, (Node *) result);
+
+	return (Node *) result;
+}
+
+/*
+ * Aggregate functions and grouping operations (which are combined in the spec
+ * as <set function specification>) are very similar with regard to level and
+ * nesting restrictions (though we allow a lot more things than the spec does).
+ * Centralise those restrictions here.
+ */
+static void
+check_agglevels_and_constraints(ParseState *pstate, Node *expr)
+{
+	List	   *directargs = NIL;
+	List	   *args = NIL;
+	Expr	   *filter = NULL;
+	int			min_varlevel;
+	int			location = -1;
+	Index	   *p_levelsup;
+	const char *err;
+	bool		errkind;
+	bool		isAgg = IsA(expr, Aggref);
+
+	if (isAgg)
+	{
+		Aggref *agg = (Aggref *) expr;
+
+		directargs = agg->aggdirectargs;
+		args = agg->args;
+		filter = agg->aggfilter;
+		location = agg->location;
+		p_levelsup = &agg->agglevelsup;
+	}
+	else
+	{
+		GroupingFunc *grp = (GroupingFunc *) expr;
+
+		args = grp->args;
+		location = grp->location;
+		p_levelsup = &grp->agglevelsup;
+	}
+
 	/*
 	 * Check the arguments to compute the aggregate's level and detect
 	 * improper nesting.
 	 */
 	min_varlevel = check_agg_arguments(pstate,
-									   agg->aggdirectargs,
-									   agg->args,
-									   agg->aggfilter);
-	agg->agglevelsup = min_varlevel;
+									   directargs,
+									   args,
+									   filter);
+
+	*p_levelsup = min_varlevel;
 
 	/* Mark the correct pstate level as having aggregates */
 	while (min_varlevel-- > 0)
@@ -247,20 +334,32 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			Assert(false);		/* can't happen */
 			break;
 		case EXPR_KIND_OTHER:
-			/* Accept aggregate here; caller must throw error if wanted */
+			/* Accept aggregate/grouping here; caller must throw error if wanted */
 			break;
 		case EXPR_KIND_JOIN_ON:
 		case EXPR_KIND_JOIN_USING:
-			err = _("aggregate functions are not allowed in JOIN conditions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in JOIN conditions");
+			else
+				err = _("grouping operations are not allowed in JOIN conditions");
+
 			break;
 		case EXPR_KIND_FROM_SUBSELECT:
 			/* Should only be possible in a LATERAL subquery */
 			Assert(pstate->p_lateral_active);
-			/* Aggregate scope rules make it worth being explicit here */
-			err = _("aggregate functions are not allowed in FROM clause of their own query level");
+			/* Aggregate/grouping scope rules make it worth being explicit here */
+			if (isAgg)
+				err = _("aggregate functions are not allowed in FROM clause of their own query level");
+			else
+				err = _("grouping operations are not allowed in FROM clause of their own query level");
+
 			break;
 		case EXPR_KIND_FROM_FUNCTION:
-			err = _("aggregate functions are not allowed in functions in FROM");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in functions in FROM");
+			else
+				err = _("grouping operations are not allowed in functions in FROM");
+
 			break;
 		case EXPR_KIND_WHERE:
 			errkind = true;
@@ -278,10 +377,18 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			/* okay */
 			break;
 		case EXPR_KIND_WINDOW_FRAME_RANGE:
-			err = _("aggregate functions are not allowed in window RANGE");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in window RANGE");
+			else
+				err = _("grouping operations are not allowed in window RANGE");
+
 			break;
 		case EXPR_KIND_WINDOW_FRAME_ROWS:
-			err = _("aggregate functions are not allowed in window ROWS");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in window ROWS");
+			else
+				err = _("grouping operations are not allowed in window ROWS");
+
 			break;
 		case EXPR_KIND_SELECT_TARGET:
 			/* okay */
@@ -312,26 +419,55 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			break;
 		case EXPR_KIND_CHECK_CONSTRAINT:
 		case EXPR_KIND_DOMAIN_CHECK:
-			err = _("aggregate functions are not allowed in check constraints");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in check constraints");
+			else
+				err = _("grouping operations are not allowed in check constraints");
+
 			break;
 		case EXPR_KIND_COLUMN_DEFAULT:
 		case EXPR_KIND_FUNCTION_DEFAULT:
-			err = _("aggregate functions are not allowed in DEFAULT expressions");
+
+			if (isAgg)
+				err = _("aggregate functions are not allowed in DEFAULT expressions");
+			else
+				err = _("grouping operations are not allowed in DEFAULT expressions");
+
 			break;
 		case EXPR_KIND_INDEX_EXPRESSION:
-			err = _("aggregate functions are not allowed in index expressions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in index expressions");
+			else
+				err = _("grouping operations are not allowed in index expressions");
+
 			break;
 		case EXPR_KIND_INDEX_PREDICATE:
-			err = _("aggregate functions are not allowed in index predicates");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in index predicates");
+			else
+				err = _("grouping operations are not allowed in index predicates");
+
 			break;
 		case EXPR_KIND_ALTER_COL_TRANSFORM:
-			err = _("aggregate functions are not allowed in transform expressions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in transform expressions");
+			else
+				err = _("grouping operations are not allowed in transform expressions");
+
 			break;
 		case EXPR_KIND_EXECUTE_PARAMETER:
-			err = _("aggregate functions are not allowed in EXECUTE parameters");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in EXECUTE parameters");
+			else
+				err = _("grouping operations are not allowed in EXECUTE parameters");
+
 			break;
 		case EXPR_KIND_TRIGGER_WHEN:
-			err = _("aggregate functions are not allowed in trigger WHEN conditions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in trigger WHEN conditions");
+			else
+				err = _("grouping operations are not allowed in trigger WHEN conditions");
+
 			break;
 
 			/*
@@ -342,18 +478,22 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			 * which is sane anyway.
 			 */
 	}
+
 	if (err)
 		ereport(ERROR,
 				(errcode(ERRCODE_GROUPING_ERROR),
 				 errmsg_internal("%s", err),
-				 parser_errposition(pstate, agg->location)));
+				 parser_errposition(pstate, location)));
+
 	if (errkind)
 		ereport(ERROR,
 				(errcode(ERRCODE_GROUPING_ERROR),
-		/* translator: %s is name of a SQL construct, eg GROUP BY */
-				 errmsg("aggregate functions are not allowed in %s",
+				 /* translator: %s is name of a SQL construct, eg GROUP BY */
+				 errmsg(isAgg
+						? "aggregate functions are not allowed in %s"
+						: "grouping operations are not allowed in %s",
 						ParseExprKindName(pstate->p_expr_kind)),
-				 parser_errposition(pstate, agg->location)));
+				 parser_errposition(pstate, location)));
 }
 
 /*
@@ -507,6 +647,21 @@ check_agg_arguments_walker(Node *node,
 		/* no need to examine args of the inner aggregate */
 		return false;
 	}
+	if (IsA(node, GroupingFunc))
+	{
+		int			agglevelsup = ((GroupingFunc *) node)->agglevelsup;
+
+		/* convert levelsup to frame of reference of original query */
+		agglevelsup -= context->sublevels_up;
+		/* ignore local aggs of subqueries */
+		if (agglevelsup >= 0)
+		{
+			if (context->min_agglevel < 0 ||
+				context->min_agglevel > agglevelsup)
+				context->min_agglevel = agglevelsup;
+		}
+		/* Continue and descend into subtree */
+	}
 	/* We can throw error on sight for a window function */
 	if (IsA(node, WindowFunc))
 		ereport(ERROR,
@@ -527,6 +682,7 @@ check_agg_arguments_walker(Node *node,
 		context->sublevels_up--;
 		return result;
 	}
+
 	return expression_tree_walker(node,
 								  check_agg_arguments_walker,
 								  (void *) context);
@@ -770,17 +926,67 @@ transformWindowFuncCall(ParseState *pstate, WindowFunc *wfunc,
 void
 parseCheckAggregates(ParseState *pstate, Query *qry)
 {
+	List       *gset_common = NIL;
 	List	   *groupClauses = NIL;
+	List	   *groupClauseCommonVars = NIL;
 	bool		have_non_var_grouping;
 	List	   *func_grouped_rels = NIL;
 	ListCell   *l;
 	bool		hasJoinRTEs;
 	bool		hasSelfRefRTEs;
-	PlannerInfo *root;
+	PlannerInfo *root = NULL;
 	Node	   *clause;
 
 	/* This should only be called if we found aggregates or grouping */
-	Assert(pstate->p_hasAggs || qry->groupClause || qry->havingQual);
+	Assert(pstate->p_hasAggs || qry->groupClause || qry->havingQual || qry->groupingSets);
+
+	/*
+	 * If we have grouping sets, expand them and find the intersection of all
+	 * sets.
+	 */
+	if (qry->groupingSets)
+	{
+		/*
+		 * The limit of 4096 is arbitrary and exists simply to avoid resource
+		 * issues from pathological constructs.
+		 */
+		List *gsets = expand_grouping_sets(qry->groupingSets, 4096);
+
+		if (!gsets)
+			ereport(ERROR,
+					 errmsg("too many grouping sets present (maximum 4096)"),
+					 errmsg("Too many grouping sets present (max 4096)"),
+					 parser_errposition(pstate,
+										qry->groupClause
+										? exprLocation((Node *) qry->groupClause)
+										: exprLocation((Node *) qry->groupingSets))));
+
+		/*
+		 * The intersection will often be empty, so help things along by
+		 * seeding the intersect with the smallest set.
+		 */
+		gset_common = linitial(gsets);
+
+		if (gset_common)
+		{
+			for_each_cell(l, lnext(list_head(gsets)))
+			{
+				gset_common = list_intersection_int(gset_common, lfirst(l));
+				if (!gset_common)
+					break;
+			}
+		}
+
+		/*
+		 * If there was only one grouping set in the expansion, AND if the
+		 * groupClause is non-empty (meaning that the grouping set is not empty
+		 * either), then we can ditch the grouping set and pretend we just had
+		 * a normal GROUP BY.
+		 */
+
+		if (list_length(gsets) == 1 && qry->groupClause)
+			qry->groupingSets = NIL;
+	}
 
 	/*
 	 * Scan the range table to see if there are JOIN or self-reference CTE
@@ -800,15 +1006,19 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 	/*
 	 * Build a list of the acceptable GROUP BY expressions for use by
 	 * check_ungrouped_columns().
+	 *
+	 * We get the TLE, not just the expr, because GROUPING wants to know
+	 * the sortgroupref.
 	 */
 	foreach(l, qry->groupClause)
 	{
 		SortGroupClause *grpcl = (SortGroupClause *) lfirst(l);
-		Node	   *expr;
+		TargetEntry	   *expr;
 
-		expr = get_sortgroupclause_expr(grpcl, qry->targetList);
+		expr = get_sortgroupclause_tle(grpcl, qry->targetList);
 		if (expr == NULL)
 			continue;			/* probably cannot happen */
+
 		groupClauses = lcons(expr, groupClauses);
 	}
 
@@ -830,21 +1040,28 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 		groupClauses = (List *) flatten_join_alias_vars(root,
 													  (Node *) groupClauses);
 	}
-	else
-		root = NULL;			/* keep compiler quiet */
 
 	/*
 	 * Detect whether any of the grouping expressions aren't simple Vars; if
 	 * they're all Vars then we don't have to work so hard in the recursive
 	 * scans.  (Note we have to flatten aliases before this.)
+	 *
+	 * Track Vars that are included in all grouping sets separately in
+	 * groupClauseCommonVars, since these are the only ones we can use to check
+	 * for functional dependencies.
 	 */
 	have_non_var_grouping = false;
 	foreach(l, groupClauses)
 	{
-		if (!IsA((Node *) lfirst(l), Var))
+		TargetEntry *tle = lfirst(l);
+		if (!IsA(tle->expr, Var))
 		{
 			have_non_var_grouping = true;
-			break;
+		}
+		else if (!qry->groupingSets
+				 || list_member_int(gset_common, tle->ressortgroupref))
+		{
+			groupClauseCommonVars = lappend(groupClauseCommonVars, tle->expr);
 		}
 	}
 
@@ -855,19 +1072,30 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 	 * this will also find ungrouped variables that came from ORDER BY and
 	 * WINDOW clauses.  For that matter, it's also going to examine the
 	 * grouping expressions themselves --- but they'll all pass the test ...
+	 *
+	 * We also finalize GROUPING expressions, but for that we need to traverse
+	 * the original (unflattened) clause in order to modify nodes.
 	 */
 	clause = (Node *) qry->targetList;
+	finalize_grouping_exprs(clause, pstate, qry,
+							groupClauses, root,
+							have_non_var_grouping);
 	if (hasJoinRTEs)
 		clause = flatten_join_alias_vars(root, clause);
 	check_ungrouped_columns(clause, pstate, qry,
-							groupClauses, have_non_var_grouping,
+							groupClauses, groupClauseCommonVars,
+							have_non_var_grouping,
 							&func_grouped_rels);
 
 	clause = (Node *) qry->havingQual;
+	finalize_grouping_exprs(clause, pstate, qry,
+							groupClauses, root,
+							have_non_var_grouping);
 	if (hasJoinRTEs)
 		clause = flatten_join_alias_vars(root, clause);
 	check_ungrouped_columns(clause, pstate, qry,
-							groupClauses, have_non_var_grouping,
+							groupClauses, groupClauseCommonVars,
+							have_non_var_grouping,
 							&func_grouped_rels);
 
 	/*
@@ -904,14 +1132,17 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
  */
 static void
 check_ungrouped_columns(Node *node, ParseState *pstate, Query *qry,
-						List *groupClauses, bool have_non_var_grouping,
+						List *groupClauses, List *groupClauseCommonVars,
+						bool have_non_var_grouping,
 						List **func_grouped_rels)
 {
 	check_ungrouped_columns_context context;
 
 	context.pstate = pstate;
 	context.qry = qry;
+	context.root = NULL;
 	context.groupClauses = groupClauses;
+	context.groupClauseCommonVars = groupClauseCommonVars;
 	context.have_non_var_grouping = have_non_var_grouping;
 	context.func_grouped_rels = func_grouped_rels;
 	context.sublevels_up = 0;
@@ -965,6 +1196,16 @@ check_ungrouped_columns_walker(Node *node,
 			return false;
 	}
 
+	if (IsA(node, GroupingFunc))
+	{
+		GroupingFunc *grp = (GroupingFunc *) node;
+
+		/* GroupingFunc is handled separately; no need to recheck at this level */
+
+		if ((int) grp->agglevelsup >= context->sublevels_up)
+			return false;
+	}
+
 	/*
 	 * If we have any GROUP BY items that are not simple Vars, check to see if
 	 * subexpression as a whole matches any GROUP BY item. We need to do this
@@ -976,7 +1217,9 @@ check_ungrouped_columns_walker(Node *node,
 	{
 		foreach(gl, context->groupClauses)
 		{
-			if (equal(node, lfirst(gl)))
+			TargetEntry *tle = lfirst(gl);
+
+			if (equal(node, tle->expr))
 				return false;	/* acceptable, do not descend more */
 		}
 	}
@@ -1003,13 +1246,15 @@ check_ungrouped_columns_walker(Node *node,
 		{
 			foreach(gl, context->groupClauses)
 			{
-				Var		   *gvar = (Var *) lfirst(gl);
+				Var		   *gvar = (Var *) ((TargetEntry *) lfirst(gl))->expr;
 
 				if (IsA(gvar, Var) &&
 					gvar->varno == var->varno &&
 					gvar->varattno == var->varattno &&
 					gvar->varlevelsup == 0)
+				{
 					return false;		/* acceptable, we're okay */
+				}
 			}
 		}
 
@@ -1040,7 +1285,7 @@ check_ungrouped_columns_walker(Node *node,
 			if (check_functional_grouping(rte->relid,
 										  var->varno,
 										  0,
-										  context->groupClauses,
+										  context->groupClauseCommonVars,
 										  &context->qry->constraintDeps))
 			{
 				*context->func_grouped_rels =
@@ -1085,6 +1330,396 @@ check_ungrouped_columns_walker(Node *node,
 }
 
 /*
+ * finalize_grouping_exprs -
+ *	  Scan the given expression tree for GROUPING() and related calls,
+ *    and validate and process their arguments.
+ *
+ * This is split out from check_ungrouped_columns above because it needs
+ * to modify the nodes (which it does in-place, not via a mutator) while
+ * check_ungrouped_columns may see only a copy of the original thanks to
+ * flattening of join alias vars. So here, we flatten each individual
+ * GROUPING argument as we see it before comparing it.
+ */
+static void
+finalize_grouping_exprs(Node *node, ParseState *pstate, Query *qry,
+						List *groupClauses, PlannerInfo *root,
+						bool have_non_var_grouping)
+{
+	check_ungrouped_columns_context context;
+
+	context.pstate = pstate;
+	context.qry = qry;
+	context.root = root;
+	context.groupClauses = groupClauses;
+	context.groupClauseCommonVars = NIL;
+	context.have_non_var_grouping = have_non_var_grouping;
+	context.func_grouped_rels = NULL;
+	context.sublevels_up = 0;
+	context.in_agg_direct_args = false;
+	finalize_grouping_exprs_walker(node, &context);
+}
+
+static bool
+finalize_grouping_exprs_walker(Node *node,
+							   check_ungrouped_columns_context *context)
+{
+	ListCell   *gl;
+
+	if (node == NULL)
+		return false;
+	if (IsA(node, Const) ||
+		IsA(node, Param))
+		return false;			/* constants are always acceptable */
+
+	if (IsA(node, Aggref))
+	{
+		Aggref	   *agg = (Aggref *) node;
+
+		if ((int) agg->agglevelsup == context->sublevels_up)
+		{
+			/*
+			 * If we find an aggregate call of the original level, do not
+			 * recurse into its normal arguments, ORDER BY arguments, or
+			 * filter; GROUPING exprs of this level are not allowed there. But
+			 * check direct arguments as though they weren't in an aggregate.
+			 */
+			bool		result;
+
+			Assert(!context->in_agg_direct_args);
+			context->in_agg_direct_args = true;
+			result = finalize_grouping_exprs_walker((Node *) agg->aggdirectargs,
+													context);
+			context->in_agg_direct_args = false;
+			return result;
+		}
+
+		/*
+		 * We can skip recursing into aggregates of higher levels altogether,
+		 * since they could not possibly contain exprs of concern to us (see
+		 * transformAggregateCall).  We do need to look at aggregates of lower
+		 * levels, however.
+		 */
+		if ((int) agg->agglevelsup > context->sublevels_up)
+			return false;
+	}
+
+	if (IsA(node, GroupingFunc))
+	{
+		GroupingFunc *grp = (GroupingFunc *) node;
+
+		/*
+		 * We only need to check GroupingFunc nodes at the exact level to which
+		 * they belong, since they cannot mix levels in arguments.
+		 */
+
+		if ((int) grp->agglevelsup == context->sublevels_up)
+		{
+			ListCell  *lc;
+			List 	  *ref_list = NIL;
+
+			foreach(lc, grp->args)
+			{
+				Node   *expr = lfirst(lc);
+				Index	ref = 0;
+
+				if (context->root)
+					expr = flatten_join_alias_vars(context->root, expr);
+
+				/*
+				 * Each expression must match a grouping entry at the current
+				 * query level. Unlike the general expression case, we don't
+				 * allow functional dependencies or outer references.
+				 */
+
+				if (IsA(expr, Var))
+				{
+					Var *var = (Var *) expr;
+
+					if (var->varlevelsup == context->sublevels_up)
+					{
+						foreach(gl, context->groupClauses)
+						{
+							TargetEntry *tle = lfirst(gl);
+							Var		   *gvar = (Var *) tle->expr;
+
+							if (IsA(gvar, Var) &&
+								gvar->varno == var->varno &&
+								gvar->varattno == var->varattno &&
+								gvar->varlevelsup == 0)
+							{
+								ref = tle->ressortgroupref;
+								break;
+							}
+						}
+					}
+				}
+				else if (context->have_non_var_grouping
+						 && context->sublevels_up == 0)
+				{
+					foreach(gl, context->groupClauses)
+					{
+						TargetEntry *tle = lfirst(gl);
+
+						if (equal(expr, tle->expr))
+						{
+							ref = tle->ressortgroupref;
+							break;
+						}
+					}
+				}
+
+				if (ref == 0)
+					ereport(ERROR,
+							(errcode(ERRCODE_GROUPING_ERROR),
+							 errmsg("arguments to GROUPING must be grouping expressions of the associated query level"),
+							 parser_errposition(context->pstate,
+												exprLocation(expr))));
+
+				ref_list = lappend_int(ref_list, ref);
+			}
+
+			grp->refs = ref_list;
+		}
+
+		if ((int) grp->agglevelsup > context->sublevels_up)
+			return false;
+	}
+
+	if (IsA(node, Query))
+	{
+		/* Recurse into subselects */
+		bool		result;
+
+		context->sublevels_up++;
+		result = query_tree_walker((Query *) node,
+								   finalize_grouping_exprs_walker,
+								   (void *) context,
+								   0);
+		context->sublevels_up--;
+		return result;
+	}
+	return expression_tree_walker(node, finalize_grouping_exprs_walker,
+								  (void *) context);
+}
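The matching logic above (a GROUPING() argument must exactly equal some grouping expression at the current query level, with a zero ref signalling failure) can be sketched as a standalone C helper. This is a hypothetical miniature, not the patch's code: plain ints stand in for expression nodes and ressortgrouprefs, and `match_grouping_ref` is an invented name.

```c
#include <assert.h>

/*
 * Hypothetical miniature of the matching loops above: an argument of
 * GROUPING() must equal one of the grouping expressions, and a match
 * yields that entry's ressortgroupref.  A return of 0 means "no match",
 * which the real code turns into an ERRCODE_GROUPING_ERROR.
 */
static int
match_grouping_ref(const int *group_exprs, const int *group_refs,
				   int ngroups, int expr)
{
	for (int i = 0; i < ngroups; i++)
	{
		if (group_exprs[i] == expr)
			return group_refs[i];
	}
	return 0;					/* caller reports a grouping error */
}
```

Sortgrouprefs are always positive, which is why 0 is free to serve as the failure value, just as `ref == 0` does in the walker above.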
+
+
+/*
+ * Given a GroupingSet node, expand it and return a list of lists.
+ *
+ * For EMPTY nodes, return a list of one empty list.
+ *
+ * For SIMPLE nodes, return a list of one list, which is the node content.
+ *
+ * For CUBE and ROLLUP nodes, return a list of the expansions.
+ *
+ * For SET nodes, recursively expand contained CUBE and ROLLUP.
+ */
+static List *
+expand_groupingset_node(GroupingSet *gs)
+{
+	List	   *result = NIL;
+
+	switch (gs->kind)
+	{
+		case GROUPING_SET_EMPTY:
+			result = list_make1(NIL);
+			break;
+
+		case GROUPING_SET_SIMPLE:
+			result = list_make1(gs->content);
+			break;
+
+		case GROUPING_SET_ROLLUP:
+			{
+				List	   *rollup_val = gs->content;
+				ListCell   *lc;
+				int			curgroup_size = list_length(gs->content);
+
+				while (curgroup_size > 0)
+				{
+					List   *current_result = NIL;
+					int		i = curgroup_size;
+
+					foreach(lc, rollup_val)
+					{
+						GroupingSet *gs_current = (GroupingSet *) lfirst(lc);
+
+						Assert(gs_current->kind == GROUPING_SET_SIMPLE);
+
+						current_result
+							= list_concat(current_result,
+										  list_copy(gs_current->content));
+
+						/* If we are done with making the current group, break */
+						if (--i == 0)
+							break;
+					}
+
+					result = lappend(result, current_result);
+					--curgroup_size;
+				}
+
+				result = lappend(result, NIL);
+			}
+			break;
+
+		case GROUPING_SET_CUBE:
+			{
+				List   *cube_list = gs->content;
+				int		number_bits = list_length(cube_list);
+				uint32	num_sets;
+				uint32	i;
+
+				/* parser should cap this much lower */
+				Assert(number_bits < 31);
+
+				num_sets = (1U << number_bits);
+
+				for (i = 0; i < num_sets; i++)
+				{
+					List *current_result = NIL;
+					ListCell *lc;
+					uint32 mask = 1U;
+
+					foreach(lc, cube_list)
+					{
+						GroupingSet *gs_current = (GroupingSet *) lfirst(lc);
+
+						Assert(gs_current->kind == GROUPING_SET_SIMPLE);
+
+						if (mask & i)
+						{
+							current_result
+								= list_concat(current_result,
+											  list_copy(gs_current->content));
+						}
+
+						mask <<= 1;
+					}
+
+					result = lappend(result, current_result);
+				}
+			}
+			break;
+
+		case GROUPING_SET_SETS:
+			{
+				ListCell   *lc;
+
+				foreach(lc, gs->content)
+				{
+					List *current_result = expand_groupingset_node(lfirst(lc));
+
+					result = list_concat(result, current_result);
+				}
+			}
+			break;
+	}
+
+	return result;
+}
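The ROLLUP case above emits prefixes of decreasing length plus the empty set, and the CUBE case enumerates all subsets by bitmask. Both expansions can be sketched standalone in C, under the assumption that plain int arrays of column numbers stand in for List nodes (the helper names are invented):

```c
#include <assert.h>

/* ROLLUP(c1..cn): prefixes of decreasing length, then the empty set. */
static int
expand_rollup(int n, int sets[][12], int *set_lens)
{
	int		nsets = 0;

	for (int k = n; k > 0; k--)
	{
		for (int i = 0; i < k; i++)
			sets[nsets][i] = i + 1;		/* column numbers 1..k */
		set_lens[nsets++] = k;
	}
	set_lens[nsets++] = 0;				/* the grand-total () set */
	return nsets;						/* n + 1 sets */
}

/* CUBE(c1..cn): all 2^n subsets, enumerated by bitmask as above. */
static int
expand_cube(int n, int sets[][12], int *set_lens)
{
	int		num_sets = 1 << n;

	for (int i = 0; i < num_sets; i++)
	{
		int		len = 0;

		for (int col = 0; col < n; col++)
		{
			if (i & (1 << col))
				sets[i][len++] = col + 1;
		}
		set_lens[i] = len;
	}
	return num_sets;
}
```

With the 12-element cap applied later in the patch, CUBE tops out at 2^12 = 4096 sets, comfortably inside the `Assert(number_bits < 31)` guard here.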
+
+static int
+cmp_list_len_asc(const void *a, const void *b)
+{
+	int		la = list_length(*(List *const *) a);
+	int		lb = list_length(*(List *const *) b);
+	return (la > lb) ? 1 : (la == lb) ? 0 : -1;
+}
+
+/*
+ * Expand a groupingSets clause to a flat list of grouping sets.
+ * The returned list is sorted by length, shortest sets first.
+ *
+ * This is mainly for the planner, but we use it here too to do
+ * some consistency checks.
+ */
+
+List *
+expand_grouping_sets(List *groupingSets, int limit)
+{
+	List	   *expanded_groups = NIL;
+	List       *result = NIL;
+	double		numsets = 1;
+	ListCell   *lc;
+
+	if (groupingSets == NIL)
+		return NIL;
+
+	foreach(lc, groupingSets)
+	{
+		List *current_result = NIL;
+		GroupingSet *gs = lfirst(lc);
+
+		current_result = expand_groupingset_node(gs);
+
+		Assert(current_result != NIL);
+
+		numsets *= list_length(current_result);
+
+		if (limit >= 0 && numsets > limit)
+			return NIL;
+
+		expanded_groups = lappend(expanded_groups, current_result);
+	}
+
+	/*
+	 * Do cartesian product between sublists of expanded_groups.
+	 * While at it, remove any duplicate elements from individual
+	 * grouping sets (but we must NOT change the number of sets).
+	 */
+
+	foreach(lc, (List *) linitial(expanded_groups))
+	{
+		result = lappend(result, list_union_int(NIL, (List *) lfirst(lc)));
+	}
+
+	for_each_cell(lc, lnext(list_head(expanded_groups)))
+	{
+		List	   *p = lfirst(lc);
+		List	   *new_result = NIL;
+		ListCell   *lc2;
+
+		foreach(lc2, result)
+		{
+			List	   *q = lfirst(lc2);
+			ListCell   *lc3;
+
+			foreach(lc3, p)
+			{
+				new_result = lappend(new_result,
+									 list_union_int(q, (List *) lfirst(lc3)));
+			}
+		}
+		result = new_result;
+	}
+
+	if (list_length(result) > 1)
+	{
+		int		result_len = list_length(result);
+		List  **buf = palloc(sizeof(List*) * result_len);
+		List  **ptr = buf;
+
+		foreach(lc, result)
+		{
+			*ptr++ = lfirst(lc);
+		}
+
+		qsort(buf, result_len, sizeof(List*), cmp_list_len_asc);
+
+		result = NIL;
+		ptr = buf;
+
+		while (result_len-- > 0)
+			result = lappend(result, *ptr++);
+
+		pfree(buf);
+	}
+
+	return result;
+}
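The cartesian-product step, the per-set duplicate removal, and the final shortest-first sort can be illustrated with a standalone C sketch. Everything here is a hypothetical analogue: `GSet`, `union_add`, and `cross_product` are invented stand-ins for List, `list_union_int`, and the nested loops above, and the comparator mirrors `cmp_list_len_asc` (written with comparisons rather than subtraction, so it cannot overflow):

```c
#include <assert.h>
#include <stdlib.h>

#define MAXREFS 8

typedef struct GSet
{
	int		len;
	int		refs[MAXREFS];
} GSet;

/* Append ref to dst unless already present (a list_union_int analogue). */
static void
union_add(GSet *dst, int ref)
{
	for (int i = 0; i < dst->len; i++)
	{
		if (dst->refs[i] == ref)
			return;
	}
	dst->refs[dst->len++] = ref;
}

/*
 * Cartesian product of two grouping-set lists: every pairwise union.
 * The result has na * nb sets even when unions collapse to equal sets,
 * because duplicate sets change the number of output rows.
 */
static int
cross_product(const GSet *a, int na, const GSet *b, int nb, GSet *out)
{
	int		n = 0;

	for (int i = 0; i < na; i++)
	{
		for (int j = 0; j < nb; j++)
		{
			out[n] = a[i];
			for (int k = 0; k < b[j].len; k++)
				union_add(&out[n], b[j].refs[k]);
			n++;
		}
	}
	return n;
}

/* Shortest-first comparator, mirroring cmp_list_len_asc above. */
static int
cmp_len_asc(const void *a, const void *b)
{
	int		la = ((const GSet *) a)->len;
	int		lb = ((const GSet *) b)->len;

	return (la > lb) ? 1 : (la == lb) ? 0 : -1;
}
```

For example, GROUP BY a, ROLLUP(a,b) crosses {(a)} with {(a,b),(a),()} to give (a,b),(a),(a): three sets survive even though two are equal, while the duplicate `a` inside each combined set is unioned away.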
+
+/*
  * get_aggregate_argtypes
  *	Identify the specific datatypes passed to an aggregate call.
  *
diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c
index 654dce6..126699a 100644
--- a/src/backend/parser/parse_clause.c
+++ b/src/backend/parser/parse_clause.c
@@ -36,6 +36,7 @@
 #include "utils/guc.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
+#include "miscadmin.h"
 
 
 /* Convenience macro for the most common makeNamespaceItem() case */
@@ -1663,40 +1664,182 @@ findTargetlistEntrySQL99(ParseState *pstate, Node *node, List **tlist,
 	return target_result;
 }
 
+
+/*
+ * Flatten out parenthesized sublists in grouping lists, and some cases
+ * of nested grouping sets.
+ *
+ * Inside a grouping set (ROLLUP, CUBE, or GROUPING SETS), we expect the
+ * content to be nested no more than 2 deep: i.e. ROLLUP((a,b),(c,d)) is
+ * ok, but ROLLUP((a,(b,c)),d) is flattened to ((a,b,c),d), which we then
+ * normalize to ((a,b,c),(d)).
+ *
+ * CUBE or ROLLUP can be nested inside GROUPING SETS (but not the reverse),
+ * and we leave that alone if we find it. But if we see GROUPING SETS inside
+ * GROUPING SETS, we can flatten and normalize as follows:
+ *   GROUPING SETS (a, (b,c), GROUPING SETS ((c,d),(e)), (f,g))
+ * becomes
+ *   GROUPING SETS ((a), (b,c), (c,d), (e), (f,g))
+ *
+ * This is per the spec's syntax transformations, but these are the only such
+ * transformations we do in parse analysis, so that queries retain the
+ * originally specified grouping set syntax for CUBE and ROLLUP as much as
+ * possible when deparsed. (Full expansion of the result into a list of
+ * grouping sets is left to the planner.)
+ *
+ * When we're done, the resulting list should contain only these possible
+ * elements:
+ *   - an expression
+ *   - a CUBE or ROLLUP with a list of expressions nested 2 deep
+ *   - a GROUPING SET containing any of:
+ *      - expression lists
+ *      - empty grouping sets
+ *      - CUBE or ROLLUP nodes with lists nested 2 deep
+ * The return is a new list, but doesn't deep-copy the old nodes except for
+ * GroupingSet nodes.
+ *
+ * As a side effect, flag whether the list has any GroupingSet nodes.
+ */
+
+static Node *
+flatten_grouping_sets(Node *expr, bool toplevel, bool *hasGroupingSets)
+{
+	/* just in case of pathological input */
+	check_stack_depth();
+
+	if (expr == (Node *) NIL)
+		return (Node *) NIL;
+
+	switch (expr->type)
+	{
+		case T_RowExpr:
+			{
+				RowExpr *r = (RowExpr *) expr;
+				if (r->row_format == COERCE_IMPLICIT_CAST)
+					return flatten_grouping_sets((Node *) r->args,
+												 false, NULL);
+			}
+			break;
+		case T_GroupingSet:
+			{
+				GroupingSet *gset = (GroupingSet *) expr;
+				ListCell   *l2;
+				List	   *result_set = NIL;
+
+				if (hasGroupingSets)
+					*hasGroupingSets = true;
+
+				/*
+				 * at the top level, we skip over all empty grouping sets; the
+				 * At the top level, we skip over all empty grouping sets; the
+				 */
+
+				if (toplevel && gset->kind == GROUPING_SET_EMPTY)
+					return (Node *) NIL;
+
+				foreach(l2, gset->content)
+				{
+					Node   *n2 = flatten_grouping_sets(lfirst(l2), false, NULL);
+
+					result_set = lappend(result_set, n2);
+				}
+
+				/*
+				 * At top level, keep the grouping set node; but in a nested
+				 * GROUPING SETS, splice the flattened content directly into
+				 * the outer list instead (the SETS-within-SETS case).
+				 */
+
+				if (toplevel || (gset->kind != GROUPING_SET_SETS))
+				{
+					return (Node *) makeGroupingSet(gset->kind, result_set, gset->location);
+				}
+				else
+					return (Node *) result_set;
+			}
+		case T_List:
+			{
+				List	   *result = NIL;
+				ListCell   *l;
+
+				foreach(l, (List *) expr)
+				{
+					Node   *n = flatten_grouping_sets(lfirst(l), toplevel, hasGroupingSets);
+					if (n != (Node *) NIL)
+					{
+						if (IsA(n, List))
+							result = list_concat(result, (List *) n);
+						else
+							result = lappend(result, n);
+					}
+				}
+
+				return (Node *) result;
+			}
+		default:
+			break;
+	}
+
+	return expr;
+}
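The flattening rule documented above — sublists nested deeper than two levels collapse into their parent, so ROLLUP((a,(b,c)),d) normalizes to ((a,b,c),(d)) — can be sketched with a toy tagged union. This is not PostgreSQL's Node machinery; `MiniNode` and `collect_leaves` are invented for illustration:

```c
#include <assert.h>

#define MAXKIDS 8

/* A node is either a leaf value or a list of child nodes. */
typedef struct MiniNode
{
	int		is_list;
	int		leaf_val;			/* valid when !is_list */
	int		nkids;
	const struct MiniNode *kids[MAXKIDS];
} MiniNode;

/*
 * Collapse one immediate child of a grouping clause into a single flat
 * group: every leaf beneath it, however deeply nested, lands in out[].
 * Applied per child, (a,(b,c)),d becomes the groups (a,b,c) and (d).
 */
static int
collect_leaves(const MiniNode *n, int *out, int count)
{
	if (!n->is_list)
		out[count++] = n->leaf_val;
	else
	{
		for (int i = 0; i < n->nkids; i++)
			count = collect_leaves(n->kids[i], out, count);
	}
	return count;
}
```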
+
 /*
- * transformGroupClause -
- *	  transform a GROUP BY clause
+ * Transform a single expression within a GROUP BY clause or grouping set.
+ *
+ * The expression is added to the targetlist if not already present, and to the
+ * flatresult list (which will become the groupClause) if not already present
+ * there.  The sortClause is consulted for operator and sort order hints.
  *
- * GROUP BY items will be added to the targetlist (as resjunk columns)
- * if not already present, so the targetlist must be passed by reference.
+ * Returns the ressortgroupref of the expression.
  *
- * This is also used for window PARTITION BY clauses (which act almost the
- * same, but are always interpreted per SQL99 rules).
+ * flatresult	reference to flat list of SortGroupClause nodes
+ * seen_local	bitmapset of sortgrouprefs already seen at the local level
+ * pstate		ParseState
+ * gexpr		node to transform
+ * targetlist	reference to TargetEntry list
+ * sortClause	ORDER BY clause (SortGroupClause nodes)
+ * exprKind		expression kind
+ * useSQL99		SQL99 rather than SQL92 syntax
+ * toplevel		false if within any grouping set
  */
-List *
-transformGroupClause(ParseState *pstate, List *grouplist,
-					 List **targetlist, List *sortClause,
-					 ParseExprKind exprKind, bool useSQL99)
+static Index
+transformGroupClauseExpr(List **flatresult, Bitmapset *seen_local,
+						 ParseState *pstate, Node *gexpr,
+						 List **targetlist, List *sortClause,
+						 ParseExprKind exprKind, bool useSQL99, bool toplevel)
 {
-	List	   *result = NIL;
-	ListCell   *gl;
+	TargetEntry *tle;
+	bool		found = false;
 
-	foreach(gl, grouplist)
+	if (useSQL99)
+		tle = findTargetlistEntrySQL99(pstate, gexpr,
+									   targetlist, exprKind);
+	else
+		tle = findTargetlistEntrySQL92(pstate, gexpr,
+									   targetlist, exprKind);
+
+	if (tle->ressortgroupref > 0)
 	{
-		Node	   *gexpr = (Node *) lfirst(gl);
-		TargetEntry *tle;
-		bool		found = false;
-
-		if (useSQL99)
-			tle = findTargetlistEntrySQL99(pstate, gexpr,
-										   targetlist, exprKind);
-		else
-			tle = findTargetlistEntrySQL92(pstate, gexpr,
-										   targetlist, exprKind);
-
-		/* Eliminate duplicates (GROUP BY x, x) */
-		if (targetIsInSortList(tle, InvalidOid, result))
-			continue;
+		ListCell   *sl;
+
+		/*
+		 * Eliminate duplicates (GROUP BY x, x) but only at local level.
+		 * (Duplicates in grouping sets can affect the number of returned
+		 * rows, so can't be dropped indiscriminately.)
+		 *
+		 * Since we don't care about anything except the sortgroupref,
+		 * we can use a bitmapset rather than scanning lists.
+		 */
+		if (bms_is_member(tle->ressortgroupref, seen_local))
+			return 0;
+
+		/*
+		 * If we're already in the flat clause list, we don't need
+		 * to consider adding ourselves again.
+		 */
+		found = targetIsInSortList(tle, InvalidOid, *flatresult);
+		if (found)
+			return tle->ressortgroupref;
 
 		/*
 		 * If the GROUP BY tlist entry also appears in ORDER BY, copy operator
@@ -1708,35 +1851,308 @@ transformGroupClause(ParseState *pstate, List *grouplist,
 		 * sort step, and it allows the user to choose the equality semantics
 		 * used by GROUP BY, should she be working with a datatype that has
 		 * more than one equality operator.
+		 *
+		 * If we're in a grouping set, though, we force our requested ordering
+		 * to be NULLS LAST, because if we have any hope of using a sorted agg
+		 * for the job, we're going to be tacking on generated NULL values
+		 * after the corresponding groups. If the user demands nulls first,
+		 * another sort step is going to be inevitable, but that's the
+		 * planner's problem.
 		 */
-		if (tle->ressortgroupref > 0)
+
+		foreach(sl, sortClause)
 		{
-			ListCell   *sl;
+			SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
 
-			foreach(sl, sortClause)
+			if (sc->tleSortGroupRef == tle->ressortgroupref)
 			{
-				SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
+				SortGroupClause *grpc = copyObject(sc);
+				if (!toplevel)
+					grpc->nulls_first = false;
+				*flatresult = lappend(*flatresult, grpc);
+				found = true;
+				break;
+			}
+		}
+	}
+
+	/*
+	 * If no match in ORDER BY, just add it to the result using default
+	 * sort/group semantics.
+	 */
+	if (!found)
+		*flatresult = addTargetToGroupList(pstate, tle,
+										   *flatresult, *targetlist,
+										   exprLocation(gexpr),
+										   true);
+
+	/*
+	 * _something_ must have assigned us a sortgroupref by now...
+	 */
+
+	return tle->ressortgroupref;
+}
+
+/*
+ * Transform a list of expressions within a GROUP BY clause or grouping set.
+ *
+ * The list of expressions belongs to a single clause within which duplicates
+ * can be safely eliminated.
+ *
+ * Returns an integer list of ressortgroupref values.
+ *
+ * flatresult	reference to flat list of SortGroupClause nodes
+ * pstate		ParseState
+ * list			nodes to transform
+ * targetlist	reference to TargetEntry list
+ * sortClause	ORDER BY clause (SortGroupClause nodes)
+ * exprKind		expression kind
+ * useSQL99		SQL99 rather than SQL92 syntax
+ * toplevel		false if within any grouping set
+ */
+static List *
+transformGroupClauseList(List **flatresult,
+						 ParseState *pstate, List *list,
+						 List **targetlist, List *sortClause,
+						 ParseExprKind exprKind, bool useSQL99, bool toplevel)
+{
+	Bitmapset  *seen_local = NULL;
+	List	   *result = NIL;
+	ListCell   *gl;
+
+	foreach(gl, list)
+	{
+		Node        *gexpr = (Node *) lfirst(gl);
+
+		Index ref = transformGroupClauseExpr(flatresult,
+											 seen_local,
+											 pstate,
+											 gexpr,
+											 targetlist,
+											 sortClause,
+											 exprKind,
+											 useSQL99,
+											 toplevel);
+		if (ref > 0)
+		{
+			seen_local = bms_add_member(seen_local, ref);
+			result = lappend_int(result, ref);
+		}
+	}
+
+	return result;
+}
+
+/*
+ * Transform a grouping set and (recursively) its content.
+ *
+ * The grouping set might be a GROUPING SETS node with other grouping sets
+ * inside it, but SETS within SETS have already been flattened out before
+ * reaching here.
+ *
+ * Returns the transformed node, which now contains SIMPLE nodes with lists
+ * of ressortgrouprefs rather than expressions.
+ *
+ * flatresult	reference to flat list of SortGroupClause nodes
+ * pstate		ParseState
+ * gset			grouping set to transform
+ * targetlist	reference to TargetEntry list
+ * sortClause	ORDER BY clause (SortGroupClause nodes)
+ * exprKind		expression kind
+ * useSQL99		SQL99 rather than SQL92 syntax
+ * toplevel		false if within any grouping set
+ */
+static Node *
+transformGroupingSet(List **flatresult,
+					 ParseState *pstate, GroupingSet *gset,
+					 List **targetlist, List *sortClause,
+					 ParseExprKind exprKind, bool useSQL99, bool toplevel)
+{
+	ListCell   *gl;
+	List	   *content = NIL;
+
+	Assert(toplevel || gset->kind != GROUPING_SET_SETS);
+
+	foreach(gl, gset->content)
+	{
+		Node   *n = lfirst(gl);
+
+		if (IsA(n, List))
+		{
+			List *l = transformGroupClauseList(flatresult,
+											   pstate, (List *) n,
+											   targetlist, sortClause,
+											   exprKind, useSQL99, false);
+
+			content = lappend(content, makeGroupingSet(GROUPING_SET_SIMPLE,
+													   l,
+													   exprLocation(n)));
+		}
+		else if (IsA(n, GroupingSet))
+		{
+			GroupingSet *gset2 = (GroupingSet *) lfirst(gl);
+
+			content = lappend(content, transformGroupingSet(flatresult,
+															pstate, gset2,
+															targetlist, sortClause,
+															exprKind, useSQL99, false));
+		}
+		else
+		{
+			Index ref = transformGroupClauseExpr(flatresult,
+												 NULL,
+												 pstate,
+												 n,
+												 targetlist,
+												 sortClause,
+												 exprKind,
+												 useSQL99,
+												 false);
 
-				if (sc->tleSortGroupRef == tle->ressortgroupref)
-				{
-					result = lappend(result, copyObject(sc));
-					found = true;
+			content = lappend(content, makeGroupingSet(GROUPING_SET_SIMPLE,
+													   list_make1_int(ref),
+													   exprLocation(n)));
+		}
+	}
+
+	/* Arbitrarily cap the size of CUBE, which has exponential growth */
+	if (gset->kind == GROUPING_SET_CUBE)
+	{
+		if (list_length(content) > 12)
+			ereport(ERROR,
+					(errcode(ERRCODE_TOO_MANY_COLUMNS),
+					 errmsg("CUBE is limited to 12 elements"),
+					 parser_errposition(pstate, gset->location)));
+	}
+
+	return (Node *) makeGroupingSet(gset->kind, content, gset->location);
+}
+
+
+/*
+ * transformGroupClause -
+ *	  transform a GROUP BY clause
+ *
+ * GROUP BY items will be added to the targetlist (as resjunk columns)
+ * if not already present, so the targetlist must be passed by reference.
+ *
+ * This is also used for window PARTITION BY clauses (which act almost the
+ * same, but are always interpreted per SQL99 rules).
+ *
+ * Grouping sets make this a lot more complex than it was. Our goal here is
+ * twofold: we make a flat list of SortGroupClause nodes referencing each
+ * distinct expression used for grouping, with those expressions added to the
+ * targetlist if needed. At the same time, we build the groupingSets tree,
+ * which stores only ressortgrouprefs as integer lists inside GroupingSet nodes
+ * (possibly nested, but limited in depth: a GROUPING_SET_SETS node can contain
+ * nested SIMPLE, CUBE or ROLLUP nodes, but no further SETS nodes, since we
+ * flatten those out; CUBE and ROLLUP can contain only SIMPLE nodes).
+ *
+ * We skip much of the hard work if there are no grouping sets.
+ *
+ * One subtlety is that the groupClause list can end up empty while the
+ * groupingSets list is not; this happens if there are only empty grouping
+ * sets, or an explicit GROUP BY (). This has the same effect as specifying
+ * aggregates or a HAVING clause with no GROUP BY; the output is one row per
+ * grouping set even if the input is empty.
+ *
+ * Returns the transformed (flat) groupClause.
+ *
+ * pstate		ParseState
+ * grouplist	clause to transform
+ * groupingSets	reference to list to contain the grouping set tree
+ * targetlist	reference to TargetEntry list
+ * sortClause	ORDER BY clause (SortGroupClause nodes)
+ * exprKind		expression kind
+ * useSQL99		SQL99 rather than SQL92 syntax
+ */
+List *
+transformGroupClause(ParseState *pstate, List *grouplist, List **groupingSets,
+					 List **targetlist, List *sortClause,
+					 ParseExprKind exprKind, bool useSQL99)
+{
+	List	   *result = NIL;
+	List	   *flat_grouplist;
+	List	   *gsets = NIL;
+	ListCell   *gl;
+	bool        hasGroupingSets = false;
+	Bitmapset  *seen_local = NULL;
+
+	/*
+	 * Recursively flatten implicit RowExprs. (Technically this is only
+	 * needed for GROUP BY, per the syntax rules for grouping sets, but
+	 * we do it anyway.)
+	 */
+	flat_grouplist = (List *) flatten_grouping_sets((Node *) grouplist,
+													true,
+													&hasGroupingSets);
+
+	/*
+	 * If the list is now empty, but hasGroupingSets is true, it's because
+	 * we elided redundant empty grouping sets. Restore a single empty
+	 * grouping set to leave a canonical form: GROUP BY ()
+	 */
+
+	if (flat_grouplist == NIL && hasGroupingSets)
+	{
+		flat_grouplist = list_make1(makeGroupingSet(GROUPING_SET_EMPTY,
+													NIL,
+													exprLocation((Node *) grouplist)));
+	}
+
+	foreach(gl, flat_grouplist)
+	{
+		Node        *gexpr = (Node *) lfirst(gl);
+
+		if (IsA(gexpr, GroupingSet))
+		{
+			GroupingSet *gset = (GroupingSet *) gexpr;
+
+			switch (gset->kind)
+			{
+				case GROUPING_SET_EMPTY:
+					gsets = lappend(gsets, gset);
+					break;
+				case GROUPING_SET_SIMPLE:
+					/* can't happen */
+					Assert(false);
+					break;
+				case GROUPING_SET_SETS:
+				case GROUPING_SET_CUBE:
+				case GROUPING_SET_ROLLUP:
+					gsets = lappend(gsets,
+									transformGroupingSet(&result,
+														 pstate, gset,
+														 targetlist, sortClause,
+														 exprKind, useSQL99, true));
 					break;
-				}
 			}
 		}
+		else
+		{
+			Index ref = transformGroupClauseExpr(&result, seen_local,
+												 pstate, gexpr,
+												 targetlist, sortClause,
+												 exprKind, useSQL99, true);
 
-		/*
-		 * If no match in ORDER BY, just add it to the result using default
-		 * sort/group semantics.
-		 */
-		if (!found)
-			result = addTargetToGroupList(pstate, tle,
-										  result, *targetlist,
-										  exprLocation(gexpr),
-										  true);
+			if (ref > 0)
+			{
+				seen_local = bms_add_member(seen_local, ref);
+				if (hasGroupingSets)
+					gsets = lappend(gsets,
+									makeGroupingSet(GROUPING_SET_SIMPLE,
+													list_make1_int(ref),
+													exprLocation(gexpr)));
+			}
+		}
 	}
 
+	/* parser should prevent this */
+	Assert(gsets == NIL || groupingSets != NULL);
+
+	if (groupingSets)
+		*groupingSets = gsets;
+
 	return result;
 }
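The seen_local bitmapset above makes duplicate elimination strictly local: GROUP BY a, a collapses, but GROUPING SETS ((a),(a)) keeps both sets, because each clause level starts with a fresh set of seen refs. A hypothetical sketch with a machine-word bitmask standing in for a Bitmapset (assuming, for the toy, that refs stay below 32):

```c
#include <assert.h>

/*
 * Hypothetical sketch of the seen_local logic: within one clause level
 * a sortgroupref is kept only once.  Callers use a fresh mask for each
 * grouping set, so the same ref may recur across sets -- and it must,
 * since duplicate sets change the number of output rows.
 */
static int
dedup_refs(const int *refs, int nrefs, int *out)
{
	unsigned int seen_local = 0;	/* stands in for a Bitmapset */
	int		n = 0;

	for (int i = 0; i < nrefs; i++)
	{
		if (seen_local & (1U << refs[i]))
			continue;				/* duplicate at this level: drop it */
		seen_local |= 1U << refs[i];
		out[n++] = refs[i];
	}
	return n;
}
```

Calling this once per grouping set, rather than once over the whole clause, is the toy counterpart of resetting seen_local in transformGroupClauseList.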
 
@@ -1841,6 +2257,7 @@ transformWindowDefinitions(ParseState *pstate,
 										  true /* force SQL99 rules */ );
 		partitionClause = transformGroupClause(pstate,
 											   windef->partitionClause,
+											   NULL,
 											   targetlist,
 											   orderClause,
 											   EXPR_KIND_WINDOW_PARTITION,
diff --git a/src/backend/parser/parse_expr.c b/src/backend/parser/parse_expr.c
index f0f0488..dea74cc 100644
--- a/src/backend/parser/parse_expr.c
+++ b/src/backend/parser/parse_expr.c
@@ -32,6 +32,7 @@
 #include "parser/parse_relation.h"
 #include "parser/parse_target.h"
 #include "parser/parse_type.h"
+#include "parser/parse_agg.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
 #include "utils/xml.h"
@@ -261,6 +262,10 @@ transformExprRecurse(ParseState *pstate, Node *expr)
 			result = transformMultiAssignRef(pstate, (MultiAssignRef *) expr);
 			break;
 
+		case T_GroupingFunc:
+			result = transformGroupingFunc(pstate, (GroupingFunc *) expr);
+			break;
+
 		case T_NamedArgExpr:
 			{
 				NamedArgExpr *na = (NamedArgExpr *) expr;
diff --git a/src/backend/parser/parse_target.c b/src/backend/parser/parse_target.c
index 3724330..7125b76 100644
--- a/src/backend/parser/parse_target.c
+++ b/src/backend/parser/parse_target.c
@@ -1675,6 +1675,10 @@ FigureColnameInternal(Node *node, char **name)
 			break;
 		case T_CollateClause:
 			return FigureColnameInternal(((CollateClause *) node)->arg, name);
+		case T_GroupingFunc:
+			/* make GROUPING() act like a regular function */
+			*name = "grouping";
+			return 2;
 		case T_SubLink:
 			switch (((SubLink *) node)->subLinkType)
 			{
diff --git a/src/backend/rewrite/rewriteHandler.c b/src/backend/rewrite/rewriteHandler.c
index b8e6e7a..6e82a6b 100644
--- a/src/backend/rewrite/rewriteHandler.c
+++ b/src/backend/rewrite/rewriteHandler.c
@@ -2109,7 +2109,7 @@ view_query_is_auto_updatable(Query *viewquery, bool check_cols)
 	if (viewquery->distinctClause != NIL)
 		return gettext_noop("Views containing DISTINCT are not automatically updatable.");
 
-	if (viewquery->groupClause != NIL)
+	if (viewquery->groupClause != NIL || viewquery->groupingSets)
 		return gettext_noop("Views containing GROUP BY are not automatically updatable.");
 
 	if (viewquery->havingQual != NULL)
diff --git a/src/backend/rewrite/rewriteManip.c b/src/backend/rewrite/rewriteManip.c
index 75dd41e..3ab8f18 100644
--- a/src/backend/rewrite/rewriteManip.c
+++ b/src/backend/rewrite/rewriteManip.c
@@ -92,6 +92,12 @@ contain_aggs_of_level_walker(Node *node,
 			return true;		/* abort the tree traversal and return true */
 		/* else fall through to examine argument */
 	}
+	if (IsA(node, GroupingFunc))
+	{
+		if (((GroupingFunc *) node)->agglevelsup == context->sublevels_up)
+			return true;
+		/* else fall through to examine argument */
+	}
 	if (IsA(node, Query))
 	{
 		/* Recurse into subselects */
@@ -157,6 +163,15 @@ locate_agg_of_level_walker(Node *node,
 		}
 		/* else fall through to examine argument */
 	}
+	if (IsA(node, GroupingFunc))
+	{
+		if (((GroupingFunc *) node)->agglevelsup == context->sublevels_up &&
+			((GroupingFunc *) node)->location >= 0)
+		{
+			context->agg_location = ((GroupingFunc *) node)->location;
+			return true;		/* abort the tree traversal and return true */
+		}
+	}
 	if (IsA(node, Query))
 	{
 		/* Recurse into subselects */
@@ -703,6 +718,14 @@ IncrementVarSublevelsUp_walker(Node *node,
 			agg->agglevelsup += context->delta_sublevels_up;
 		/* fall through to recurse into argument */
 	}
+	if (IsA(node, GroupingFunc))
+	{
+		GroupingFunc   *grp = (GroupingFunc *) node;
+
+		if (grp->agglevelsup >= context->min_sublevels_up)
+			grp->agglevelsup += context->delta_sublevels_up;
+		/* fall through to recurse into argument */
+	}
 	if (IsA(node, PlaceHolderVar))
 	{
 		PlaceHolderVar *phv = (PlaceHolderVar *) node;
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index c1d860c..7e55778 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -42,6 +42,7 @@
 #include "nodes/nodeFuncs.h"
 #include "optimizer/tlist.h"
 #include "parser/keywords.h"
+#include "parser/parse_node.h"
 #include "parser/parse_agg.h"
 #include "parser/parse_func.h"
 #include "parser/parse_oper.h"
@@ -103,6 +104,8 @@ typedef struct
 	int			wrapColumn;		/* max line length, or -1 for no limit */
 	int			indentLevel;	/* current indent level for prettyprint */
 	bool		varprefix;		/* TRUE to print prefixes on Vars */
+	ParseExprKind special_exprkind;	/* set only for exprkinds needing */
+									/* special handling */
 } deparse_context;
 
 /*
@@ -361,9 +364,11 @@ static void get_target_list(List *targetList, deparse_context *context,
 static void get_setop_query(Node *setOp, Query *query,
 				deparse_context *context,
 				TupleDesc resultDesc);
-static Node *get_rule_sortgroupclause(SortGroupClause *srt, List *tlist,
+static Node *get_rule_sortgroupclause(Index ref, List *tlist,
 						 bool force_colno,
 						 deparse_context *context);
+static void get_rule_groupingset(GroupingSet *gset, List *targetlist,
+								 bool omit_parens, deparse_context *context);
 static void get_rule_orderby(List *orderList, List *targetList,
 				 bool force_colno, deparse_context *context);
 static void get_rule_windowclause(Query *query, deparse_context *context);
@@ -411,8 +416,9 @@ static void printSubscripts(ArrayRef *aref, deparse_context *context);
 static char *get_relation_name(Oid relid);
 static char *generate_relation_name(Oid relid, List *namespaces);
 static char *generate_function_name(Oid funcid, int nargs,
-					   List *argnames, Oid *argtypes,
-					   bool has_variadic, bool *use_variadic_p);
+							List *argnames, Oid *argtypes,
+							bool has_variadic, bool *use_variadic_p,
+							ParseExprKind special_exprkind);
 static char *generate_operator_name(Oid operid, Oid arg1, Oid arg2);
 static text *string_to_text(char *str);
 static char *flatten_reloptions(Oid relid);
@@ -870,6 +876,7 @@ pg_get_triggerdef_worker(Oid trigid, bool pretty)
 		context.prettyFlags = pretty ? PRETTYFLAG_PAREN | PRETTYFLAG_INDENT : PRETTYFLAG_INDENT;
 		context.wrapColumn = WRAP_COLUMN_DEFAULT;
 		context.indentLevel = PRETTYINDENT_STD;
+		context.special_exprkind = EXPR_KIND_NONE;
 
 		get_rule_expr(qual, &context, false);
 
@@ -879,7 +886,7 @@ pg_get_triggerdef_worker(Oid trigid, bool pretty)
 	appendStringInfo(&buf, "EXECUTE PROCEDURE %s(",
 					 generate_function_name(trigrec->tgfoid, 0,
 											NIL, NULL,
-											false, NULL));
+											false, NULL, EXPR_KIND_NONE));
 
 	if (trigrec->tgnargs > 0)
 	{
@@ -2476,6 +2483,7 @@ deparse_expression_pretty(Node *expr, List *dpcontext,
 	context.prettyFlags = prettyFlags;
 	context.wrapColumn = WRAP_COLUMN_DEFAULT;
 	context.indentLevel = startIndent;
+	context.special_exprkind = EXPR_KIND_NONE;
 
 	get_rule_expr(expr, &context, showimplicit);
 
@@ -4073,6 +4081,7 @@ make_ruledef(StringInfo buf, HeapTuple ruletup, TupleDesc rulettc,
 		context.prettyFlags = prettyFlags;
 		context.wrapColumn = WRAP_COLUMN_DEFAULT;
 		context.indentLevel = PRETTYINDENT_STD;
+		context.special_exprkind = EXPR_KIND_NONE;
 
 		set_deparse_for_query(&dpns, query, NIL);
 
@@ -4224,6 +4233,7 @@ get_query_def(Query *query, StringInfo buf, List *parentnamespace,
 	context.prettyFlags = prettyFlags;
 	context.wrapColumn = wrapColumn;
 	context.indentLevel = startIndent;
+	context.special_exprkind = EXPR_KIND_NONE;
 
 	set_deparse_for_query(&dpns, query, parentnamespace);
 
@@ -4565,7 +4575,7 @@ get_basic_select_query(Query *query, deparse_context *context,
 				SortGroupClause *srt = (SortGroupClause *) lfirst(l);
 
 				appendStringInfoString(buf, sep);
-				get_rule_sortgroupclause(srt, query->targetList,
+				get_rule_sortgroupclause(srt->tleSortGroupRef, query->targetList,
 										 false, context);
 				sep = ", ";
 			}
@@ -4590,20 +4600,43 @@ get_basic_select_query(Query *query, deparse_context *context,
 	}
 
 	/* Add the GROUP BY clause if given */
-	if (query->groupClause != NULL)
+	if (query->groupClause != NULL || query->groupingSets != NULL)
 	{
+		ParseExprKind	save_exprkind;
+
 		appendContextKeyword(context, " GROUP BY ",
 							 -PRETTYINDENT_STD, PRETTYINDENT_STD, 1);
-		sep = "";
-		foreach(l, query->groupClause)
+
+		save_exprkind = context->special_exprkind;
+		context->special_exprkind = EXPR_KIND_GROUP_BY;
+
+		if (query->groupingSets == NIL)
+		{
+			sep = "";
+			foreach(l, query->groupClause)
+			{
+				SortGroupClause *grp = (SortGroupClause *) lfirst(l);
+
+				appendStringInfoString(buf, sep);
+				get_rule_sortgroupclause(grp->tleSortGroupRef, query->targetList,
+										 false, context);
+				sep = ", ";
+			}
+		}
+		else
 		{
-			SortGroupClause *grp = (SortGroupClause *) lfirst(l);
+			sep = "";
+			foreach(l, query->groupingSets)
+			{
+				GroupingSet *grp = lfirst(l);
 
-			appendStringInfoString(buf, sep);
-			get_rule_sortgroupclause(grp, query->targetList,
-									 false, context);
-			sep = ", ";
+				appendStringInfoString(buf, sep);
+				get_rule_groupingset(grp, query->targetList, true, context);
+				sep = ", ";
+			}
 		}
+
+		context->special_exprkind = save_exprkind;
 	}
 
 	/* Add the HAVING clause if given */
@@ -4670,7 +4703,7 @@ get_target_list(List *targetList, deparse_context *context,
 		 * different from a whole-row Var).  We need to call get_variable
 		 * directly so that we can tell it to do the right thing.
 		 */
-		if (tle->expr && IsA(tle->expr, Var))
+		if (tle->expr && (IsA(tle->expr, Var) || IsA(tle->expr, GroupedVar)))
 		{
 			attname = get_variable((Var *) tle->expr, 0, true, context);
 		}
@@ -4889,23 +4922,24 @@ get_setop_query(Node *setOp, Query *query, deparse_context *context,
  * Also returns the expression tree, so caller need not find it again.
  */
 static Node *
-get_rule_sortgroupclause(SortGroupClause *srt, List *tlist, bool force_colno,
+get_rule_sortgroupclause(Index ref, List *tlist, bool force_colno,
 						 deparse_context *context)
 {
 	StringInfo	buf = context->buf;
 	TargetEntry *tle;
 	Node	   *expr;
 
-	tle = get_sortgroupclause_tle(srt, tlist);
+	tle = get_sortgroupref_tle(ref, tlist);
 	expr = (Node *) tle->expr;
 
 	/*
-	 * Use column-number form if requested by caller.  Otherwise, if
-	 * expression is a constant, force it to be dumped with an explicit cast
-	 * as decoration --- this is because a simple integer constant is
-	 * ambiguous (and will be misinterpreted by findTargetlistEntry()) if we
-	 * dump it without any decoration.  Otherwise, just dump the expression
-	 * normally.
+	 * Use column-number form if requested by caller.  Otherwise, if expression
+	 * is a constant, force it to be dumped with an explicit cast as decoration
+	 * --- this is because a simple integer constant is ambiguous (and will be
+	 * misinterpreted by findTargetlistEntry()) if we dump it without any
+	 * decoration.  If it's anything more complex than a simple Var, then force
+	 * extra parens around it, to ensure it can't be misinterpreted as a cube()
+	 * or rollup() construct.
 	 */
 	if (force_colno)
 	{
@@ -4914,13 +4948,92 @@ get_rule_sortgroupclause(SortGroupClause *srt, List *tlist, bool force_colno,
 	}
 	else if (expr && IsA(expr, Const))
 		get_const_expr((Const *) expr, context, 1);
+	else if (!expr || IsA(expr, Var))
+		get_rule_expr(expr, context, true);
 	else
+	{
+		/*
+		 * We must force parens for function-like expressions even if
+		 * PRETTY_PAREN is off, since those are the ones in danger of
+		 * misparsing. For other expressions we need to force parens
+		 * only if PRETTY_PAREN is on, since otherwise the expression
+		 * will output them itself. (We can't skip the parens.)
+		 */
+		bool	need_paren = (PRETTY_PAREN(context)
+							  || IsA(expr, FuncExpr)
+							  || IsA(expr, Aggref)
+							  || IsA(expr, WindowFunc));
+		if (need_paren)
+			appendStringInfoString(context->buf, "(");
 		get_rule_expr(expr, context, true);
+		if (need_paren)
+			appendStringInfoString(context->buf, ")");
+	}
 
 	return expr;
 }
 
 /*
+ * Display a GroupingSet
+ */
+static void
+get_rule_groupingset(GroupingSet *gset, List *targetlist,
+					 bool omit_parens, deparse_context *context)
+{
+	ListCell   *l;
+	StringInfo	buf = context->buf;
+	bool		omit_child_parens = true;
+	char	   *sep = "";
+
+	switch (gset->kind)
+	{
+		case GROUPING_SET_EMPTY:
+			appendStringInfoString(buf, "()");
+			return;
+
+		case GROUPING_SET_SIMPLE:
+			{
+				if (!omit_parens || list_length(gset->content) != 1)
+					appendStringInfoString(buf, "(");
+
+				foreach(l, gset->content)
+				{
+					Index ref = lfirst_int(l);
+
+					appendStringInfoString(buf, sep);
+					get_rule_sortgroupclause(ref, targetlist,
+											 false, context);
+					sep = ", ";
+				}
+
+				if (!omit_parens || list_length(gset->content) != 1)
+					appendStringInfoString(buf, ")");
+			}
+			return;
+
+		case GROUPING_SET_ROLLUP:
+			appendStringInfoString(buf, "ROLLUP(");
+			break;
+		case GROUPING_SET_CUBE:
+			appendStringInfoString(buf, "CUBE(");
+			break;
+		case GROUPING_SET_SETS:
+			appendStringInfoString(buf, "GROUPING SETS (");
+			omit_child_parens = false;
+			break;
+	}
+
+	foreach(l, gset->content)
+	{
+		appendStringInfoString(buf, sep);
+		get_rule_groupingset(lfirst(l), targetlist, omit_child_parens, context);
+		sep = ", ";
+	}
+
+	appendStringInfoString(buf, ")");
+}
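
For clarity, here is a stand-alone sketch (plain Python, not the C above) of the recursive deparse logic in get_rule_groupingset: SIMPLE lists print their columns (dropping parens around a single column when the caller allows), ROLLUP/CUBE print a keyword plus a parenthesized child list, and GROUPING SETS forces parens on each of its children:

```python
# Toy model of the GroupingSet kinds; names are illustrative only.
EMPTY, SIMPLE, ROLLUP, CUBE, SETS = range(5)

def deparse(kind, content, omit_parens=True):
    if kind == EMPTY:
        return "()"
    if kind == SIMPLE:
        inner = ", ".join(content)
        # a one-element SIMPLE list may drop its parens when the caller allows
        return inner if omit_parens and len(content) == 1 else "(%s)" % inner
    keyword, child_parens = {
        ROLLUP: ("ROLLUP(", True),
        CUBE:   ("CUBE(", True),
        SETS:   ("GROUPING SETS (", False),   # children keep their parens
    }[kind]
    return keyword + ", ".join(deparse(k, c, child_parens)
                               for k, c in content) + ")"

# GROUPING SETS ((a, b), CUBE(c, (d, e)))
print(deparse(SETS, [(SIMPLE, ["a", "b"]),
                     (CUBE, [(SIMPLE, ["c"]), (SIMPLE, ["d", "e"])])]))
```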
+
+/*
  * Display an ORDER BY list.
  */
 static void
@@ -4940,7 +5053,7 @@ get_rule_orderby(List *orderList, List *targetList,
 		TypeCacheEntry *typentry;
 
 		appendStringInfoString(buf, sep);
-		sortexpr = get_rule_sortgroupclause(srt, targetList,
+		sortexpr = get_rule_sortgroupclause(srt->tleSortGroupRef, targetList,
 											force_colno, context);
 		sortcoltype = exprType(sortexpr);
 		/* See whether operator is default < or > for datatype */
@@ -5040,7 +5153,7 @@ get_rule_windowspec(WindowClause *wc, List *targetList,
 			SortGroupClause *grp = (SortGroupClause *) lfirst(l);
 
 			appendStringInfoString(buf, sep);
-			get_rule_sortgroupclause(grp, targetList,
+			get_rule_sortgroupclause(grp->tleSortGroupRef, targetList,
 									 false, context);
 			sep = ", ";
 		}
@@ -5589,10 +5702,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		pop_child_plan(dpns, &save_dpns);
@@ -5614,10 +5727,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		pop_child_plan(dpns, &save_dpns);
@@ -5637,10 +5750,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		return NULL;
@@ -5680,10 +5793,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		pop_child_plan(dpns, &save_dpns);
@@ -6714,6 +6827,10 @@ get_rule_expr(Node *node, deparse_context *context,
 			(void) get_variable((Var *) node, 0, false, context);
 			break;
 
+		case T_GroupedVar:
+			(void) get_variable((Var *) node, 0, false, context);
+			break;
+
 		case T_Const:
 			get_const_expr((Const *) node, context, 0);
 			break;
@@ -6726,6 +6843,16 @@ get_rule_expr(Node *node, deparse_context *context,
 			get_agg_expr((Aggref *) node, context);
 			break;
 
+		case T_GroupingFunc:
+			{
+				GroupingFunc *gexpr = (GroupingFunc *) node;
+
+				appendStringInfoString(buf, "GROUPING(");
+				get_rule_expr((Node *) gexpr->args, context, true);
+				appendStringInfoChar(buf, ')');
+			}
+			break;
+
 		case T_WindowFunc:
 			get_windowfunc_expr((WindowFunc *) node, context);
 			break;
@@ -7764,7 +7891,8 @@ get_func_expr(FuncExpr *expr, deparse_context *context,
 					 generate_function_name(funcoid, nargs,
 											argnames, argtypes,
 											expr->funcvariadic,
-											&use_variadic));
+											&use_variadic,
+											context->special_exprkind));
 	nargs = 0;
 	foreach(l, expr->args)
 	{
@@ -7796,7 +7924,8 @@ get_agg_expr(Aggref *aggref, deparse_context *context)
 					 generate_function_name(aggref->aggfnoid, nargs,
 											NIL, argtypes,
 											aggref->aggvariadic,
-											&use_variadic),
+											&use_variadic,
+											context->special_exprkind),
 					 (aggref->aggdistinct != NIL) ? "DISTINCT " : "");
 
 	if (AGGKIND_IS_ORDERED_SET(aggref->aggkind))
@@ -7886,7 +8015,8 @@ get_windowfunc_expr(WindowFunc *wfunc, deparse_context *context)
 	appendStringInfo(buf, "%s(",
 					 generate_function_name(wfunc->winfnoid, nargs,
 											argnames, argtypes,
-											false, NULL));
+											false, NULL,
+											context->special_exprkind));
 	/* winstar can be set only in zero-argument aggregates */
 	if (wfunc->winstar)
 		appendStringInfoChar(buf, '*');
@@ -9116,7 +9246,8 @@ generate_relation_name(Oid relid, List *namespaces)
  */
 static char *
 generate_function_name(Oid funcid, int nargs, List *argnames, Oid *argtypes,
-					   bool has_variadic, bool *use_variadic_p)
+					   bool has_variadic, bool *use_variadic_p,
+					   ParseExprKind special_exprkind)
 {
 	char	   *result;
 	HeapTuple	proctup;
@@ -9131,6 +9262,7 @@ generate_function_name(Oid funcid, int nargs, List *argnames, Oid *argtypes,
 	int			p_nvargs;
 	Oid			p_vatype;
 	Oid		   *p_true_typeids;
+	bool		force_qualify = false;
 
 	proctup = SearchSysCache1(PROCOID, ObjectIdGetDatum(funcid));
 	if (!HeapTupleIsValid(proctup))
@@ -9139,6 +9271,17 @@ generate_function_name(Oid funcid, int nargs, List *argnames, Oid *argtypes,
 	proname = NameStr(procform->proname);
 
 	/*
+	 * Due to parser hacks to avoid needing to reserve CUBE, we need to
+	 * force qualification in some special cases.
+	 */
+
+	if (special_exprkind == EXPR_KIND_GROUP_BY)
+	{
+		if (strcmp(proname, "cube") == 0 || strcmp(proname, "rollup") == 0)
+			force_qualify = true;
+	}
+
+	/*
 	 * Determine whether VARIADIC should be printed.  We must do this first
 	 * since it affects the lookup rules in func_get_detail().
 	 *
@@ -9169,14 +9312,23 @@ generate_function_name(Oid funcid, int nargs, List *argnames, Oid *argtypes,
 	/*
 	 * The idea here is to schema-qualify only if the parser would fail to
 	 * resolve the correct function given the unqualified func name with the
-	 * specified argtypes and VARIADIC flag.
+	 * specified argtypes and VARIADIC flag.  But if we already decided to
+	 * force qualification, then we can skip the lookup and pretend we didn't
+	 * find it.
 	 */
-	p_result = func_get_detail(list_make1(makeString(proname)),
-							   NIL, argnames, nargs, argtypes,
-							   !use_variadic, true,
-							   &p_funcid, &p_rettype,
-							   &p_retset, &p_nvargs, &p_vatype,
-							   &p_true_typeids, NULL);
+	if (!force_qualify)
+		p_result = func_get_detail(list_make1(makeString(proname)),
+								   NIL, argnames, nargs, argtypes,
+								   !use_variadic, true,
+								   &p_funcid, &p_rettype,
+								   &p_retset, &p_nvargs, &p_vatype,
+								   &p_true_typeids, NULL);
+	else
+	{
+		p_result = FUNCDETAIL_NOTFOUND;
+		p_funcid = InvalidOid;
+	}
+
 	if ((p_result == FUNCDETAIL_NORMAL ||
 		 p_result == FUNCDETAIL_AGGREGATE ||
 		 p_result == FUNCDETAIL_WINDOWFUNC) &&
diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index 1ba103c..ceb7663 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -3158,6 +3158,8 @@ add_unique_group_var(PlannerInfo *root, List *varinfos,
  *	groupExprs - list of expressions being grouped by
  *	input_rows - number of rows estimated to arrive at the group/unique
  *		filter step
+ *  pgset - NULL, or a List** pointing to a grouping set to filter the
+ *      groupExprs against
  *
  * Given the lack of any cross-correlation statistics in the system, it's
  * impossible to do anything really trustworthy with GROUP BY conditions
@@ -3205,11 +3207,13 @@ add_unique_group_var(PlannerInfo *root, List *varinfos,
  * but we don't have the info to do better).
  */
 double
-estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
+estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows,
+					List **pgset)
 {
 	List	   *varinfos = NIL;
 	double		numdistinct;
 	ListCell   *l;
+	int			i;
 
 	/*
 	 * We don't ever want to return an estimate of zero groups, as that tends
@@ -3224,7 +3228,7 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 	 * for normal cases with GROUP BY or DISTINCT, but it is possible for
 	 * corner cases with set operations.)
 	 */
-	if (groupExprs == NIL)
+	if (groupExprs == NIL || (pgset && list_length(*pgset) < 1))
 		return 1.0;
 
 	/*
@@ -3236,6 +3240,7 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 	 */
 	numdistinct = 1.0;
 
+	i = 0;
 	foreach(l, groupExprs)
 	{
 		Node	   *groupexpr = (Node *) lfirst(l);
@@ -3243,6 +3248,10 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 		List	   *varshere;
 		ListCell   *l2;
 
+		/* is expression in this grouping set? */
+		if (pgset && !list_member_int(*pgset, i++))
+			continue;
+
 		/* Short-circuit for expressions returning boolean */
 		if (exprType(groupexpr) == BOOLOID)
 		{
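
The effect of the new pgset parameter can be illustrated in plain Python (a toy stand-in, not the selfuncs.c code): only the grouping expressions whose ordinal position appears in the grouping set participate in the distinct-groups estimate, and an empty set short-circuits to a single group.

```python
def filter_group_exprs(group_exprs, pgset):
    """Keep only the expressions selected by the grouping set (by index)."""
    if pgset is None:          # no grouping sets: use everything
        return list(group_exprs)
    return [e for i, e in enumerate(group_exprs) if i in pgset]

assert filter_group_exprs(["a", "b", "c"], [0, 2]) == ["a", "c"]
assert filter_group_exprs(["a", "b"], None) == ["a", "b"]
assert filter_group_exprs(["a", "b"], []) == []   # empty set -> estimate 1 group
```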
diff --git a/src/include/commands/explain.h b/src/include/commands/explain.h
index c9f7223..4df44d0 100644
--- a/src/include/commands/explain.h
+++ b/src/include/commands/explain.h
@@ -83,6 +83,8 @@ extern void ExplainSeparatePlans(ExplainState *es);
 
 extern void ExplainPropertyList(const char *qlabel, List *data,
 					ExplainState *es);
+extern void ExplainPropertyListNested(const char *qlabel, List *data,
+					ExplainState *es);
 extern void ExplainPropertyText(const char *qlabel, const char *value,
 					ExplainState *es);
 extern void ExplainPropertyInteger(const char *qlabel, int value,
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index 41288ed..052ea0a 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -130,6 +130,8 @@ typedef struct ExprContext
 	Datum	   *ecxt_aggvalues; /* precomputed values for aggs/windowfuncs */
 	bool	   *ecxt_aggnulls;	/* null flags for aggs/windowfuncs */
 
+	Bitmapset  *grouped_cols;   /* which columns exist in current grouping set */
+
 	/* Value to substitute for CaseTestExpr nodes in expression */
 	Datum		caseValue_datum;
 	bool		caseValue_isNull;
@@ -407,6 +409,11 @@ typedef struct EState
 	HeapTuple  *es_epqTuple;	/* array of EPQ substitute tuples */
 	bool	   *es_epqTupleSet; /* true if EPQ tuple is provided */
 	bool	   *es_epqScanDone; /* true if EPQ tuple has been fetched */
+
+	/*
+	 * This is for linking chained aggregate nodes
+	 */
+	struct AggState	   *agg_chain_head;
 } EState;
 
 
@@ -595,6 +602,21 @@ typedef struct AggrefExprState
 } AggrefExprState;
 
 /* ----------------
+ *		GroupingFuncExprState node
+ *
+ * The list of column numbers refers to the input tuples of the Agg node to
+ * which the GroupingFunc belongs, and may contain 0 for references to columns
+ * that are only present in grouping sets processed by different Agg nodes (and
+ * which are therefore always considered "grouping" here).
+ * ----------------
+ */
+typedef struct GroupingFuncExprState
+{
+	ExprState	xprstate;
+	List	   *clauses;		/* integer list of column numbers */
+} GroupingFuncExprState;
+
+/* ----------------
  *		WindowFuncExprState node
  * ----------------
  */
@@ -1742,19 +1764,27 @@ typedef struct GroupState
 /* these structs are private in nodeAgg.c: */
 typedef struct AggStatePerAggData *AggStatePerAgg;
 typedef struct AggStatePerGroupData *AggStatePerGroup;
+typedef struct AggStatePerGroupingSetData *AggStatePerGroupingSet;
 
 typedef struct AggState
 {
 	ScanState	ss;				/* its first field is NodeTag */
 	List	   *aggs;			/* all Aggref nodes in targetlist & quals */
 	int			numaggs;		/* length of list (could be zero!) */
+	int			numsets;		/* number of grouping sets (or 0) */
 	FmgrInfo   *eqfunctions;	/* per-grouping-field equality fns */
 	FmgrInfo   *hashfunctions;	/* per-grouping-field hash fns */
 	AggStatePerAgg peragg;		/* per-Aggref information */
-	MemoryContext aggcontext;	/* memory context for long-lived data */
+	ExprContext **aggcontexts;	/* econtexts for long-lived data (per GS) */
 	ExprContext *tmpcontext;	/* econtext for input expressions */
 	AggStatePerAgg curperagg;	/* identifies currently active aggregate */
+	bool        input_done;     /* indicates end of input */
 	bool		agg_done;		/* indicates completion of Agg scan */
+	bool		chain_done;		/* indicates completion of chained fetch */
+	int			projected_set;	/* The last projected grouping set */
+	int			current_set;	/* The current grouping set being evaluated */
+	Bitmapset **grouped_cols;   /* column groupings for rollup */
+	int        *gset_lengths;	/* lengths of grouping sets */
 	/* these fields are used in AGG_PLAIN and AGG_SORTED modes: */
 	AggStatePerGroup pergroup;	/* per-Aggref-per-group working state */
 	HeapTuple	grp_firstTuple; /* copy of first tuple of current group */
@@ -1764,6 +1794,12 @@ typedef struct AggState
 	List	   *hash_needed;	/* list of columns needed in hash table */
 	bool		table_filled;	/* hash table filled yet? */
 	TupleHashIterator hashiter; /* for iterating through hash table */
+	int			chain_depth;	/* number of chained child nodes */
+	int			chain_rescan;	/* rescan indicator */
+	int			chain_eflags;	/* saved eflags for rewind optimization */
+	bool		chain_top;		/* true for the "top" node in a chain */
+	struct AggState	*chain_head;
+	Tuplestorestate *chain_tuplestore;
 } AggState;
 
 /* ----------------
diff --git a/src/include/nodes/makefuncs.h b/src/include/nodes/makefuncs.h
index 4dff6a0..01d9fed 100644
--- a/src/include/nodes/makefuncs.h
+++ b/src/include/nodes/makefuncs.h
@@ -81,4 +81,6 @@ extern DefElem *makeDefElem(char *name, Node *arg);
 extern DefElem *makeDefElemExtended(char *nameSpace, char *name, Node *arg,
 					DefElemAction defaction);
 
+extern GroupingSet *makeGroupingSet(GroupingSetKind kind, List *content, int location);
+
 #endif   /* MAKEFUNC_H */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 97ef0fc..4d56f50 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -131,9 +131,11 @@ typedef enum NodeTag
 	T_RangeVar,
 	T_Expr,
 	T_Var,
+	T_GroupedVar,
 	T_Const,
 	T_Param,
 	T_Aggref,
+	T_GroupingFunc,
 	T_WindowFunc,
 	T_ArrayRef,
 	T_FuncExpr,
@@ -184,6 +186,7 @@ typedef enum NodeTag
 	T_GenericExprState,
 	T_WholeRowVarExprState,
 	T_AggrefExprState,
+	T_GroupingFuncExprState,
 	T_WindowFuncExprState,
 	T_ArrayRefExprState,
 	T_FuncExprState,
@@ -401,6 +404,7 @@ typedef enum NodeTag
 	T_RangeTblFunction,
 	T_WithCheckOption,
 	T_SortGroupClause,
+	T_GroupingSet,
 	T_WindowClause,
 	T_PrivGrantee,
 	T_FuncWithArgs,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index b1dfa85..815a786 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -136,6 +136,8 @@ typedef struct Query
 
 	List	   *groupClause;	/* a list of SortGroupClause's */
 
+	List	   *groupingSets;	/* a list of GroupingSet's if present */
+
 	Node	   *havingQual;		/* qualifications applied to groups */
 
 	List	   *windowClause;	/* a list of WindowClause's */
@@ -933,6 +935,73 @@ typedef struct SortGroupClause
 } SortGroupClause;
 
 /*
+ * GroupingSet -
+ *		representation of CUBE, ROLLUP and GROUPING SETS clauses
+ *
+ * In a Query with grouping sets, the groupClause contains a flat list of
+ * SortGroupClause nodes for each distinct expression used.  The actual
+ * structure of the GROUP BY clause is given by the groupingSets tree.
+ *
+ * In the raw parser output, GroupingSet nodes (of all types except SIMPLE
+ * which is not used) are potentially mixed in with the expressions in the
+ * groupClause of the SelectStmt.  (An expression can't contain a GroupingSet,
+ * but a list may mix GroupingSet and expression nodes.)  At this stage, the
+ * content of each node is a list of expressions (some of which may be RowExprs
+ * representing sublists rather than actual row constructors), plus nested
+ * GroupingSet nodes where the grammar permits them.  The structure directly
+ * reflects the query syntax.
+ *
+ * In parse analysis, the transformed expressions are used to build the tlist
+ * and groupClause list (of SortGroupClause nodes), and the groupingSets tree
+ * is eventually reduced to a fixed format:
+ *
+ * EMPTY nodes represent (), and obviously have no content
+ *
+ * SIMPLE nodes represent a list of one or more expressions to be treated as an
+ * atom by the enclosing structure; the content is an integer list of
+ * ressortgroupref values (see SortGroupClause)
+ *
+ * CUBE and ROLLUP nodes contain a list of one or more SIMPLE nodes.
+ *
+ * SETS nodes contain a list of EMPTY, SIMPLE, CUBE or ROLLUP nodes, but after
+ * parse analysis they cannot contain more SETS nodes; enough of the syntactic
+ * transforms of the spec have been applied that we no longer have arbitrarily
+ * deep nesting (though we still preserve the use of cube/rollup).
+ *
+ * Note that if the groupingSets tree contains no SIMPLE nodes (only EMPTY
+ * nodes at the leaves), then the groupClause will be empty, but this is still
+ * an aggregation query (similar to using aggs or HAVING without GROUP BY).
+ *
+ * As an example, the following clause:
+ *
+ * GROUP BY GROUPING SETS ((a,b), CUBE(c,(d,e)))
+ *
+ * looks like this after raw parsing:
+ *
+ * SETS( RowExpr(a,b) , CUBE( c, RowExpr(d,e) ) )
+ *
+ * and parse analysis converts it to:
+ *
+ * SETS( SIMPLE(1,2), CUBE( SIMPLE(3), SIMPLE(4,5) ) )
+ */
+typedef enum
+{
+	GROUPING_SET_EMPTY,
+	GROUPING_SET_SIMPLE,
+	GROUPING_SET_ROLLUP,
+	GROUPING_SET_CUBE,
+	GROUPING_SET_SETS
+} GroupingSetKind;
+
+typedef struct GroupingSet
+{
+	NodeTag		type;
+	GroupingSetKind kind;
+	List	   *content;
+	int			location;
+} GroupingSet;
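
The spec-defined expansion that expand_grouping_sets performs can be sketched in plain Python (names here are illustrative, not the C API): ROLLUP yields the prefixes of its element list, CUBE yields the power set, GROUPING SETS yields its members, and a top-level GROUP BY list is the cross product of its items' expansions.

```python
from itertools import chain, combinations, product

def expand_item(kind, content):
    if kind == "empty":
        return [[]]
    if kind == "simple":                       # an atomic column list
        return [list(content)]
    if kind == "rollup":                       # n+1 prefixes, longest first
        return [[c for grp in content[:i] for c in grp]
                for i in range(len(content), -1, -1)]
    if kind == "cube":                         # the 2^n subsets
        return [[c for grp in subset for c in grp]
                for subset in chain.from_iterable(
                    combinations(content, n)
                    for n in range(len(content), -1, -1))]
    if kind == "sets":                         # just the listed members
        return [s for k, c in content for s in expand_item(k, c)]

def expand(group_by):
    sets = [[]]
    for kind, content in group_by:             # cross product across items
        sets = [s + e for s, e in product(sets, expand_item(kind, content))]
    return sets

# GROUP BY a, ROLLUP(b, c)  ->  (a,b,c), (a,b), (a)
print(expand([("simple", ["a"]), ("rollup", [["b"], ["c"]])]))
```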
+
+/*
  * WindowClause -
  *		transformed representation of WINDOW and OVER clauses
  *
diff --git a/src/include/nodes/pg_list.h b/src/include/nodes/pg_list.h
index a175000..729456d 100644
--- a/src/include/nodes/pg_list.h
+++ b/src/include/nodes/pg_list.h
@@ -229,8 +229,9 @@ extern List *list_union_int(const List *list1, const List *list2);
 extern List *list_union_oid(const List *list1, const List *list2);
 
 extern List *list_intersection(const List *list1, const List *list2);
+extern List *list_intersection_int(const List *list1, const List *list2);
 
-/* currently, there's no need for list_intersection_int etc */
+/* currently, there's no need for list_intersection_ptr etc */
 
 extern List *list_difference(const List *list1, const List *list2);
 extern List *list_difference_ptr(const List *list1, const List *list2);
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index 316c9ce..d44ca52 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -655,6 +655,7 @@ typedef enum AggStrategy
 {
 	AGG_PLAIN,					/* simple agg across all input rows */
 	AGG_SORTED,					/* grouped agg, input must be sorted */
+	AGG_CHAINED,				/* chained agg, input must be sorted */
 	AGG_HASHED					/* grouped agg, use internal hashtable */
 } AggStrategy;
 
@@ -662,10 +663,12 @@ typedef struct Agg
 {
 	Plan		plan;
 	AggStrategy aggstrategy;
+	int			chain_depth;	/* number of associated ChainAggs in tree */
 	int			numCols;		/* number of grouping columns */
 	AttrNumber *grpColIdx;		/* their indexes in the target list */
 	Oid		   *grpOperators;	/* equality operators to compare with */
 	long		numGroups;		/* estimated number of groups in input */
+	List	   *groupingSets;	/* grouping sets to use */
 } Agg;
 
 /* ----------------
diff --git a/src/include/nodes/primnodes.h b/src/include/nodes/primnodes.h
index 1d06f42..2425658 100644
--- a/src/include/nodes/primnodes.h
+++ b/src/include/nodes/primnodes.h
@@ -160,6 +160,22 @@ typedef struct Var
 } Var;
 
 /*
+ * GroupedVar - expression node representing a variable that might be
+ * involved in a grouping set.
+ *
+ * This is identical to Var node except in execution; when evaluated it
+ * is conditionally NULL depending on the active grouping set.  Vars are
+ * converted to GroupedVars (if needed) only late in planning.
+ *
+ * (Because they appear only late in planning, most code that handles Vars
+ * doesn't need to know about these, either because they don't exist yet or
+ * because optimizations specific to Vars are intentionally not applied to
+ * GroupedVars.)
+ */
+
+typedef Var GroupedVar;
+
+/*
  * Const
  */
 typedef struct Const
@@ -273,6 +289,41 @@ typedef struct Aggref
 } Aggref;
 
 /*
+ * GroupingFunc
+ *
+ * A GroupingFunc is a GROUPING(...) expression, which behaves in many ways
+ * like an aggregate function (e.g. it "belongs" to a specific query level,
+ * which might not be the one immediately containing it), but also differs in
+ * an important respect: it never evaluates its arguments; they merely
+ * designate expressions from the GROUP BY clause of the query level to which
+ * it belongs.
+ *
+ * The spec defines the evaluation of GROUPING() purely by syntactic
+ * replacement, but we make it a real expression for optimization purposes so
+ * that one Agg node can handle multiple grouping sets at once.  Evaluating the
+ * result only needs the column positions to check against the grouping set
+ * being projected.  However, for EXPLAIN to produce meaningful output, we have
+ * to keep the original expressions around, since expression deparse does not
+ * give us any feasible way to get at the GROUP BY clause.
+ *
+ * Also, we treat two GroupingFunc nodes as equal if they have equal argument
+ * lists and agglevelsup, without comparing the refs and cols annotations.
+ *
+ * In raw parse output we have only the args list; parse analysis fills in the
+ * refs list, and the planner fills in the cols list.
+ */
+typedef struct GroupingFunc
+{
+	Expr		xpr;
+	List	   *args;			/* arguments, not evaluated but kept for
+								 * benefit of EXPLAIN etc. */
+	List	   *refs;			/* ressortgrouprefs of arguments */
+	List	   *cols;			/* actual column positions set by planner */
+	Index		agglevelsup;	/* same as Aggref.agglevelsup */
+	int			location;		/* token location */
+} GroupingFunc;
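
Conceptually (a plain-Python illustration, not the executor's implementation), GROUPING(e1, ..., en) yields one result bit per argument, leftmost argument in the most significant position, set to 1 when that argument is not part of the grouping set currently being projected:

```python
def grouping(args, current_set):
    """Compute the GROUPING() bitmask for the grouping set being projected."""
    result = 0
    for arg in args:
        result = (result << 1) | (0 if arg in current_set else 1)
    return result

# e.g. for ROLLUP(a,b): the (a,b) rows report 0, the (a) subtotals 1,
# and the grand total 3 -- matching the regression output in this patch
assert grouping(["a", "b"], {"a", "b"}) == 0
assert grouping(["a", "b"], {"a"}) == 1
assert grouping(["a", "b"], set()) == 3
```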
+
+/*
  * WindowFunc
  */
 typedef struct WindowFunc
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index 6845a40..ccfe66d 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -260,6 +260,11 @@ typedef struct PlannerInfo
 
 	/* optional private data for join_search_hook, e.g., GEQO */
 	void	   *join_search_private;
+
+	/* for GroupedVar fixup in setrefs */
+	AttrNumber *groupColIdx;
+	/* for GroupingFunc fixup in setrefs */
+	AttrNumber *grouping_map;
 } PlannerInfo;
 
 
diff --git a/src/include/optimizer/planmain.h b/src/include/optimizer/planmain.h
index 082f7d7..2ecda68 100644
--- a/src/include/optimizer/planmain.h
+++ b/src/include/optimizer/planmain.h
@@ -58,6 +58,8 @@ extern Sort *make_sort_from_groupcols(PlannerInfo *root, List *groupcls,
 extern Agg *make_agg(PlannerInfo *root, List *tlist, List *qual,
 		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
 		 int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators,
+		 List *groupingSets,
+		 int *chain_depth_p,
 		 long numGroups,
 		 Plan *lefttree);
 extern WindowAgg *make_windowagg(PlannerInfo *root, List *tlist,
diff --git a/src/include/optimizer/tlist.h b/src/include/optimizer/tlist.h
index 3dc8bab..b0f0f19 100644
--- a/src/include/optimizer/tlist.h
+++ b/src/include/optimizer/tlist.h
@@ -43,6 +43,9 @@ extern Node *get_sortgroupclause_expr(SortGroupClause *sgClause,
 extern List *get_sortgrouplist_exprs(List *sgClauses,
 						List *targetList);
 
+extern SortGroupClause *get_sortgroupref_clause(Index sortref,
+					 List *clauses);
+
 extern Oid *extract_grouping_ops(List *groupClause);
 extern AttrNumber *extract_grouping_cols(List *groupClause, List *tlist);
 extern bool grouping_is_sortable(List *groupClause);
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index 7c243ec..0e4b719 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -98,6 +98,7 @@ PG_KEYWORD("cost", COST, UNRESERVED_KEYWORD)
 PG_KEYWORD("create", CREATE, RESERVED_KEYWORD)
 PG_KEYWORD("cross", CROSS, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("csv", CSV, UNRESERVED_KEYWORD)
+PG_KEYWORD("cube", CUBE, UNRESERVED_KEYWORD)
 PG_KEYWORD("current", CURRENT_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("current_catalog", CURRENT_CATALOG, RESERVED_KEYWORD)
 PG_KEYWORD("current_date", CURRENT_DATE, RESERVED_KEYWORD)
@@ -173,6 +174,7 @@ PG_KEYWORD("grant", GRANT, RESERVED_KEYWORD)
 PG_KEYWORD("granted", GRANTED, UNRESERVED_KEYWORD)
 PG_KEYWORD("greatest", GREATEST, COL_NAME_KEYWORD)
 PG_KEYWORD("group", GROUP_P, RESERVED_KEYWORD)
+PG_KEYWORD("grouping", GROUPING, COL_NAME_KEYWORD)
 PG_KEYWORD("handler", HANDLER, UNRESERVED_KEYWORD)
 PG_KEYWORD("having", HAVING, RESERVED_KEYWORD)
 PG_KEYWORD("header", HEADER_P, UNRESERVED_KEYWORD)
@@ -324,6 +326,7 @@ PG_KEYWORD("revoke", REVOKE, UNRESERVED_KEYWORD)
 PG_KEYWORD("right", RIGHT, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("role", ROLE, UNRESERVED_KEYWORD)
 PG_KEYWORD("rollback", ROLLBACK, UNRESERVED_KEYWORD)
+PG_KEYWORD("rollup", ROLLUP, UNRESERVED_KEYWORD)
 PG_KEYWORD("row", ROW, COL_NAME_KEYWORD)
 PG_KEYWORD("rows", ROWS, UNRESERVED_KEYWORD)
 PG_KEYWORD("rule", RULE, UNRESERVED_KEYWORD)
@@ -342,6 +345,7 @@ PG_KEYWORD("session", SESSION, UNRESERVED_KEYWORD)
 PG_KEYWORD("session_user", SESSION_USER, RESERVED_KEYWORD)
 PG_KEYWORD("set", SET, UNRESERVED_KEYWORD)
 PG_KEYWORD("setof", SETOF, COL_NAME_KEYWORD)
+PG_KEYWORD("sets", SETS, UNRESERVED_KEYWORD)
 PG_KEYWORD("share", SHARE, UNRESERVED_KEYWORD)
 PG_KEYWORD("show", SHOW, UNRESERVED_KEYWORD)
 PG_KEYWORD("similar", SIMILAR, TYPE_FUNC_NAME_KEYWORD)
diff --git a/src/include/parser/parse_agg.h b/src/include/parser/parse_agg.h
index 91a0706..6a5f9bb 100644
--- a/src/include/parser/parse_agg.h
+++ b/src/include/parser/parse_agg.h
@@ -18,11 +18,16 @@
 extern void transformAggregateCall(ParseState *pstate, Aggref *agg,
 					   List *args, List *aggorder,
 					   bool agg_distinct);
+
+extern Node *transformGroupingFunc(ParseState *pstate, GroupingFunc *g);
+
 extern void transformWindowFuncCall(ParseState *pstate, WindowFunc *wfunc,
 						WindowDef *windef);
 
 extern void parseCheckAggregates(ParseState *pstate, Query *qry);
 
+extern List *expand_grouping_sets(List *groupingSets, int limit);
+
 extern int	get_aggregate_argtypes(Aggref *aggref, Oid *inputTypes);
 
 extern Oid resolve_aggregate_transtype(Oid aggfuncid,
diff --git a/src/include/parser/parse_clause.h b/src/include/parser/parse_clause.h
index 6a4438f..fdf6732 100644
--- a/src/include/parser/parse_clause.h
+++ b/src/include/parser/parse_clause.h
@@ -27,6 +27,7 @@ extern Node *transformWhereClause(ParseState *pstate, Node *clause,
 extern Node *transformLimitClause(ParseState *pstate, Node *clause,
 					 ParseExprKind exprKind, const char *constructName);
 extern List *transformGroupClause(ParseState *pstate, List *grouplist,
+								  List **groupingSets,
 					 List **targetlist, List *sortClause,
 					 ParseExprKind exprKind, bool useSQL99);
 extern List *transformSortClause(ParseState *pstate, List *orderlist,
diff --git a/src/include/utils/selfuncs.h b/src/include/utils/selfuncs.h
index bf69f2a..fdca713 100644
--- a/src/include/utils/selfuncs.h
+++ b/src/include/utils/selfuncs.h
@@ -185,7 +185,7 @@ extern void mergejoinscansel(PlannerInfo *root, Node *clause,
 				 Selectivity *rightstart, Selectivity *rightend);
 
 extern double estimate_num_groups(PlannerInfo *root, List *groupExprs,
-					double input_rows);
+								  double input_rows, List **pgset);
 
 extern Selectivity estimate_hash_bucketsize(PlannerInfo *root, Node *hashkey,
 						 double nbuckets);
diff --git a/src/test/regress/expected/groupingsets.out b/src/test/regress/expected/groupingsets.out
new file mode 100644
index 0000000..fbfb424
--- /dev/null
+++ b/src/test/regress/expected/groupingsets.out
@@ -0,0 +1,575 @@
+--
+-- grouping sets
+--
+-- test data sources
+create temp view gstest1(a,b,v)
+  as values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),
+            (2,3,15),
+            (3,3,16),(3,4,17),
+            (4,1,18),(4,1,19);
+create temp table gstest2 (a integer, b integer, c integer, d integer,
+                           e integer, f integer, g integer, h integer);
+copy gstest2 from stdin;
+create temp table gstest_empty (a integer, b integer, v integer);
+create function gstest_data(v integer, out a integer, out b integer)
+  returns setof record
+  as $f$
+    begin
+      return query select v, i from generate_series(1,3) i;
+    end;
+  $f$ language plpgsql;
+-- basic functionality
+-- simple rollup with multiple plain aggregates, with and without ordering
+-- (and with ordering differing from grouping)
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b);
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 | 1 |        0 |  21 |     2 |  11
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 1 |   |        1 |  60 |     5 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 2 |   |        1 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+ 3 |   |        1 |  33 |     2 |  17
+ 4 | 1 |        0 |  37 |     2 |  19
+ 4 |   |        1 |  37 |     2 |  19
+   |   |        3 | 145 |    10 |  19
+(12 rows)
+
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by a,b;
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 | 1 |        0 |  21 |     2 |  11
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 1 |   |        1 |  60 |     5 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 2 |   |        1 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+ 3 |   |        1 |  33 |     2 |  17
+ 4 | 1 |        0 |  37 |     2 |  19
+ 4 |   |        1 |  37 |     2 |  19
+   |   |        3 | 145 |    10 |  19
+(12 rows)
+
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by b desc, a;
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 |   |        1 |  60 |     5 |  14
+ 2 |   |        1 |  15 |     1 |  15
+ 3 |   |        1 |  33 |     2 |  17
+ 4 |   |        1 |  37 |     2 |  19
+   |   |        3 | 145 |    10 |  19
+ 3 | 4 |        0 |  17 |     1 |  17
+ 1 | 3 |        0 |  14 |     1 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 1 |        0 |  21 |     2 |  11
+ 4 | 1 |        0 |  37 |     2 |  19
+(12 rows)
+
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by coalesce(a,0)+coalesce(b,0);
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+   |   |        3 | 145 |    10 |  19
+ 1 |   |        1 |  60 |     5 |  14
+ 1 | 1 |        0 |  21 |     2 |  11
+ 2 |   |        1 |  15 |     1 |  15
+ 3 |   |        1 |  33 |     2 |  17
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 4 |   |        1 |  37 |     2 |  19
+ 4 | 1 |        0 |  37 |     2 |  19
+ 2 | 3 |        0 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+(12 rows)
+
+-- various types of ordered aggs
+select a, b, grouping(a,b),
+       array_agg(v order by v),
+       string_agg(v::text, ':' order by v desc),
+       percentile_disc(0.5) within group (order by v),
+       rank(1,2,12) within group (order by a,b,v)
+  from gstest1 group by rollup (a,b) order by a,b;
+ a | b | grouping |            array_agg            |          string_agg           | percentile_disc | rank 
+---+---+----------+---------------------------------+-------------------------------+-----------------+------
+ 1 | 1 |        0 | {10,11}                         | 11:10                         |              10 |    3
+ 1 | 2 |        0 | {12,13}                         | 13:12                         |              12 |    1
+ 1 | 3 |        0 | {14}                            | 14                            |              14 |    1
+ 1 |   |        1 | {10,11,12,13,14}                | 14:13:12:11:10                |              12 |    3
+ 2 | 3 |        0 | {15}                            | 15                            |              15 |    1
+ 2 |   |        1 | {15}                            | 15                            |              15 |    1
+ 3 | 3 |        0 | {16}                            | 16                            |              16 |    1
+ 3 | 4 |        0 | {17}                            | 17                            |              17 |    1
+ 3 |   |        1 | {16,17}                         | 17:16                         |              16 |    1
+ 4 | 1 |        0 | {18,19}                         | 19:18                         |              18 |    1
+ 4 |   |        1 | {18,19}                         | 19:18                         |              18 |    1
+   |   |        3 | {10,11,12,13,14,15,16,17,18,19} | 19:18:17:16:15:14:13:12:11:10 |              14 |    3
+(12 rows)
+
+-- test usage of grouped columns in direct args of aggs
+select grouping(a), a, array_agg(b),
+       rank(a) within group (order by b nulls first),
+       rank(a) within group (order by b nulls last)
+  from (values (1,1),(1,4),(1,5),(3,1),(3,2)) v(a,b)
+ group by rollup (a) order by a;
+ grouping | a |  array_agg  | rank | rank 
+----------+---+-------------+------+------
+        0 | 1 | {1,4,5}     |    1 |    1
+        0 | 3 | {1,2}       |    3 |    3
+        1 |   | {1,4,5,1,2} |    1 |    6
+(3 rows)
+
+-- nesting with window functions
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by rollup (a,b) order by rsum, a, b;
+ a | b | sum | rsum 
+---+---+-----+------
+ 1 | 1 |   8 |    8
+ 1 | 2 |   2 |   10
+ 1 |   |  10 |   20
+ 2 | 2 |   2 |   22
+ 2 |   |   2 |   24
+   |   |  12 |   36
+(6 rows)
+
+-- empty input: first is 0 rows, second 1, third 3 etc.
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),a);
+ a | b | sum | count 
+---+---+-----+-------
+(0 rows)
+
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),());
+ a | b | sum | count 
+---+---+-----+-------
+   |   |     |     0
+(1 row)
+
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),(),(),());
+ a | b | sum | count 
+---+---+-----+-------
+   |   |     |     0
+   |   |     |     0
+   |   |     |     0
+(3 rows)
+
+select sum(v), count(*) from gstest_empty group by grouping sets ((),(),());
+ sum | count 
+-----+-------
+     |     0
+     |     0
+     |     0
+(3 rows)
+
+-- empty input with joins tests some important code paths
+select t1.a, t2.b, sum(t1.v), count(*) from gstest_empty t1, gstest_empty t2
+ group by grouping sets ((t1.a,t2.b),());
+ a | b | sum | count 
+---+---+-----+-------
+   |   |     |     0
+(1 row)
+
+-- simple joins, var resolution, GROUPING on join vars
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1, gstest2 t2
+ group by grouping sets ((t1.a, t2.b), ());
+ a | b | grouping | sum  | max 
+---+---+----------+------+-----
+ 1 | 1 |        0 |  420 |   1
+ 1 | 2 |        0 |  120 |   2
+ 2 | 1 |        0 |  105 |   1
+ 2 | 2 |        0 |   30 |   2
+ 3 | 1 |        0 |  231 |   1
+ 3 | 2 |        0 |   66 |   2
+ 4 | 1 |        0 |  259 |   1
+ 4 | 2 |        0 |   74 |   2
+   |   |        3 | 1305 |   2
+(9 rows)
+
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1 join gstest2 t2 on (t1.a=t2.a)
+ group by grouping sets ((t1.a, t2.b), ());
+ a | b | grouping | sum | max 
+---+---+----------+-----+-----
+ 1 | 1 |        0 | 420 |   1
+ 1 | 2 |        0 |  60 |   1
+ 2 | 2 |        0 |  15 |   2
+   |   |        3 | 495 |   2
+(4 rows)
+
+select a, b, grouping(a, b), sum(t1.v), max(t2.c)
+  from gstest1 t1 join gstest2 t2 using (a,b)
+ group by grouping sets ((a, b), ());
+ a | b | grouping | sum | max 
+---+---+----------+-----+-----
+ 1 | 1 |        0 | 147 |   2
+ 1 | 2 |        0 |  25 |   2
+   |   |        3 | 172 |   2
+(3 rows)
+
+-- simple rescan tests
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by rollup (a,b);
+ a | b | sum 
+---+---+-----
+ 1 | 1 |   1
+ 1 | 2 |   1
+ 1 | 3 |   1
+ 1 |   |   3
+ 2 | 1 |   2
+ 2 | 2 |   2
+ 2 | 3 |   2
+ 2 |   |   6
+   |   |   9
+(9 rows)
+
+select *
+  from (values (1),(2)) v(x),
+       lateral (select a, b, sum(v.x) from gstest_data(v.x) group by rollup (a,b)) s;
+ERROR:  aggregate functions are not allowed in FROM clause of their own query level
+LINE 3:        lateral (select a, b, sum(v.x) from gstest_data(v.x) ...
+                                     ^
+-- min max optimisation should still work with GROUP BY ()
+explain (costs off)
+  select min(unique1) from tenk1 GROUP BY ();
+                         QUERY PLAN                         
+------------------------------------------------------------
+ Result
+   InitPlan 1 (returns $0)
+     ->  Limit
+           ->  Index Only Scan using tenk1_unique1 on tenk1
+                 Index Cond: (unique1 IS NOT NULL)
+(5 rows)
+
+-- Views with GROUPING SET queries
+CREATE VIEW gstest_view AS select a, b, grouping(a,b), sum(c), count(*), max(c)
+  from gstest2 group by rollup ((a,b,c),(c,d));
+NOTICE:  view "gstest_view" will be a temporary view
+select pg_get_viewdef('gstest_view'::regclass, true);
+                                pg_get_viewdef                                 
+-------------------------------------------------------------------------------
+  SELECT gstest2.a,                                                           +
+     gstest2.b,                                                               +
+     GROUPING(gstest2.a, gstest2.b) AS "grouping",                            +
+     sum(gstest2.c) AS sum,                                                   +
+     count(*) AS count,                                                       +
+     max(gstest2.c) AS max                                                    +
+    FROM gstest2                                                              +
+   GROUP BY ROLLUP((gstest2.a, gstest2.b, gstest2.c), (gstest2.c, gstest2.d));
+(1 row)
+
+-- Nested queries with 3 or more levels of nesting
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+ grouping 
+----------
+        0
+        0
+        0
+(3 rows)
+
+select(select (select grouping(e,f) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+ grouping 
+----------
+        0
+        1
+        3
+(3 rows)
+
+select(select (select grouping(c) from (values (1)) v2(c) GROUP BY c) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+ grouping 
+----------
+        0
+        0
+        0
+(3 rows)
+
+-- Combinations of operations
+select a, b, c, d from gstest2 group by rollup(a,b),grouping sets(c,d);
+ a | b | c | d 
+---+---+---+---
+ 1 | 1 | 1 |  
+ 1 |   | 1 |  
+   |   | 1 |  
+ 1 | 1 | 2 |  
+ 1 | 2 | 2 |  
+ 1 |   | 2 |  
+ 2 | 2 | 2 |  
+ 2 |   | 2 |  
+   |   | 2 |  
+ 1 | 1 |   | 1
+ 1 |   |   | 1
+   |   |   | 1
+ 1 | 1 |   | 2
+ 1 | 2 |   | 2
+ 1 |   |   | 2
+ 2 | 2 |   | 2
+ 2 |   |   | 2
+   |   |   | 2
+(18 rows)
+
+select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
+ a | b 
+---+---
+ 1 | 2
+ 2 | 3
+(2 rows)
+
+-- Tests for chained aggregates
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by grouping sets ((a,b),(a+1,b+1),(a+2,b+2));
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 | 1 |        0 |  21 |     2 |  11
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+ 4 | 1 |        0 |  37 |     2 |  19
+   |   |        3 |  21 |     2 |  11
+   |   |        3 |  25 |     2 |  13
+   |   |        3 |  14 |     1 |  14
+   |   |        3 |  15 |     1 |  15
+   |   |        3 |  16 |     1 |  16
+   |   |        3 |  17 |     1 |  17
+   |   |        3 |  37 |     2 |  19
+   |   |        3 |  21 |     2 |  11
+   |   |        3 |  25 |     2 |  13
+   |   |        3 |  14 |     1 |  14
+   |   |        3 |  15 |     1 |  15
+   |   |        3 |  16 |     1 |  16
+   |   |        3 |  17 |     1 |  17
+   |   |        3 |  37 |     2 |  19
+(21 rows)
+
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP((e+1),(f+1));
+ grouping 
+----------
+        0
+        0
+        0
+(3 rows)
+
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY CUBE((e+1),(f+1)) ORDER BY (e+1),(f+1);
+ grouping 
+----------
+        0
+        0
+        0
+        0
+(4 rows)
+
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by cube (a,b) order by rsum, a, b;
+ a | b | sum | rsum 
+---+---+-----+------
+ 1 | 1 |   8 |    8
+ 1 | 2 |   2 |   10
+ 1 |   |  10 |   20
+ 2 | 2 |   2 |   22
+ 2 |   |   2 |   24
+   | 1 |   8 |   32
+   | 2 |   4 |   36
+   |   |  12 |   48
+(8 rows)
+
+select a, b, sum(c) from (values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),(2,3,15),(3,3,16),(3,4,17),(4,1,18),(4,1,19)) v(a,b,c) group by rollup (a,b);
+ a | b | sum 
+---+---+-----
+ 1 | 1 |  21
+ 1 | 2 |  25
+ 1 | 3 |  14
+ 1 |   |  60
+ 2 | 3 |  15
+ 2 |   |  15
+ 3 | 3 |  16
+ 3 | 4 |  17
+ 3 |   |  33
+ 4 | 1 |  37
+ 4 |   |  37
+   |   | 145
+(12 rows)
+
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by cube (a,b) order by a,b;
+ a | b | sum 
+---+---+-----
+ 1 | 1 |   1
+ 1 | 2 |   1
+ 1 | 3 |   1
+ 1 |   |   3
+ 2 | 1 |   2
+ 2 | 2 |   2
+ 2 | 3 |   2
+ 2 |   |   6
+   | 1 |   3
+   | 2 |   3
+   | 3 |   3
+   |   |   9
+(12 rows)
+
+-- Agg level check. This query should error out.
+select (select grouping(a,b) from gstest2) from gstest2 group by a,b;
+ERROR:  Arguments to GROUPING must be grouping expressions of the associated query level
+LINE 1: select (select grouping(a,b) from gstest2) from gstest2 grou...
+                                ^
+--Nested queries
+select a, b, sum(c), count(*) from gstest2 group by grouping sets (rollup(a,b),a);
+ a | b | sum | count 
+---+---+-----+-------
+ 1 | 1 |   8 |     7
+ 1 | 2 |   2 |     1
+ 1 |   |  10 |     8
+ 1 |   |  10 |     8
+ 2 | 2 |   2 |     1
+ 2 |   |   2 |     1
+ 2 |   |   2 |     1
+   |   |  12 |     9
+(8 rows)
+
+-- HAVING queries
+select ten, sum(distinct four) from onek a
+group by grouping sets((ten,four),(ten))
+having exists (select 1 from onek b where sum(distinct a.four) = b.four);
+ ten | sum 
+-----+-----
+   0 |   0
+   0 |   2
+   0 |   2
+   1 |   1
+   1 |   3
+   2 |   0
+   2 |   2
+   2 |   2
+   3 |   1
+   3 |   3
+   4 |   0
+   4 |   2
+   4 |   2
+   5 |   1
+   5 |   3
+   6 |   0
+   6 |   2
+   6 |   2
+   7 |   1
+   7 |   3
+   8 |   0
+   8 |   2
+   8 |   2
+   9 |   1
+   9 |   3
+(25 rows)
+
+-- FILTER queries
+select ten, sum(distinct four) filter (where four::text ~ '123') from onek a
+group by rollup(ten);
+ ten | sum 
+-----+-----
+   0 |    
+   1 |    
+   2 |    
+   3 |    
+   4 |    
+   5 |    
+   6 |    
+   7 |    
+   8 |    
+   9 |    
+     |    
+(11 rows)
+
+-- More rescan tests
+select * from (values (1),(2)) v(a) left join lateral (select v.a, four, ten, count(*) from onek group by cube(four,ten)) s on true order by v.a,four,ten;
+ a | a | four | ten | count 
+---+---+------+-----+-------
+ 1 | 1 |    0 |   0 |    50
+ 1 | 1 |    0 |   2 |    50
+ 1 | 1 |    0 |   4 |    50
+ 1 | 1 |    0 |   6 |    50
+ 1 | 1 |    0 |   8 |    50
+ 1 | 1 |    0 |     |   250
+ 1 | 1 |    1 |   1 |    50
+ 1 | 1 |    1 |   3 |    50
+ 1 | 1 |    1 |   5 |    50
+ 1 | 1 |    1 |   7 |    50
+ 1 | 1 |    1 |   9 |    50
+ 1 | 1 |    1 |     |   250
+ 1 | 1 |    2 |   0 |    50
+ 1 | 1 |    2 |   2 |    50
+ 1 | 1 |    2 |   4 |    50
+ 1 | 1 |    2 |   6 |    50
+ 1 | 1 |    2 |   8 |    50
+ 1 | 1 |    2 |     |   250
+ 1 | 1 |    3 |   1 |    50
+ 1 | 1 |    3 |   3 |    50
+ 1 | 1 |    3 |   5 |    50
+ 1 | 1 |    3 |   7 |    50
+ 1 | 1 |    3 |   9 |    50
+ 1 | 1 |    3 |     |   250
+ 1 | 1 |      |   0 |   100
+ 1 | 1 |      |   1 |   100
+ 1 | 1 |      |   2 |   100
+ 1 | 1 |      |   3 |   100
+ 1 | 1 |      |   4 |   100
+ 1 | 1 |      |   5 |   100
+ 1 | 1 |      |   6 |   100
+ 1 | 1 |      |   7 |   100
+ 1 | 1 |      |   8 |   100
+ 1 | 1 |      |   9 |   100
+ 1 | 1 |      |     |  1000
+ 2 | 2 |    0 |   0 |    50
+ 2 | 2 |    0 |   2 |    50
+ 2 | 2 |    0 |   4 |    50
+ 2 | 2 |    0 |   6 |    50
+ 2 | 2 |    0 |   8 |    50
+ 2 | 2 |    0 |     |   250
+ 2 | 2 |    1 |   1 |    50
+ 2 | 2 |    1 |   3 |    50
+ 2 | 2 |    1 |   5 |    50
+ 2 | 2 |    1 |   7 |    50
+ 2 | 2 |    1 |   9 |    50
+ 2 | 2 |    1 |     |   250
+ 2 | 2 |    2 |   0 |    50
+ 2 | 2 |    2 |   2 |    50
+ 2 | 2 |    2 |   4 |    50
+ 2 | 2 |    2 |   6 |    50
+ 2 | 2 |    2 |   8 |    50
+ 2 | 2 |    2 |     |   250
+ 2 | 2 |    3 |   1 |    50
+ 2 | 2 |    3 |   3 |    50
+ 2 | 2 |    3 |   5 |    50
+ 2 | 2 |    3 |   7 |    50
+ 2 | 2 |    3 |   9 |    50
+ 2 | 2 |    3 |     |   250
+ 2 | 2 |      |   0 |   100
+ 2 | 2 |      |   1 |   100
+ 2 | 2 |      |   2 |   100
+ 2 | 2 |      |   3 |   100
+ 2 | 2 |      |   4 |   100
+ 2 | 2 |      |   5 |   100
+ 2 | 2 |      |   6 |   100
+ 2 | 2 |      |   7 |   100
+ 2 | 2 |      |   8 |   100
+ 2 | 2 |      |   9 |   100
+ 2 | 2 |      |     |  1000
+(70 rows)
+
+select array(select row(v.a,s1.*) from (select two,four, count(*) from onek group by cube(two,four) order by two,four) s1) from (values (1),(2)) v(a);
+                                                                        array                                                                         
+------------------------------------------------------------------------------------------------------------------------------------------------------
+ {"(1,0,0,250)","(1,0,2,250)","(1,0,,500)","(1,1,1,250)","(1,1,3,250)","(1,1,,500)","(1,,0,250)","(1,,1,250)","(1,,2,250)","(1,,3,250)","(1,,,1000)"}
+ {"(2,0,0,250)","(2,0,2,250)","(2,0,,500)","(2,1,1,250)","(2,1,3,250)","(2,1,,500)","(2,,0,250)","(2,,1,250)","(2,,2,250)","(2,,3,250)","(2,,,1000)"}
+(2 rows)
+
+-- end
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index e0ae2f2..ef4e16b 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -83,7 +83,7 @@ test: select_into select_distinct select_distinct_on select_implicit select_havi
 # ----------
 # Another group of parallel tests
 # ----------
-test: brin gin gist spgist privileges security_label collate matview lock replica_identity rowsecurity object_address
+test: brin gin gist spgist privileges security_label collate matview lock replica_identity rowsecurity object_address groupingsets
 
 # ----------
 # Another group of parallel tests
diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule
index 7f762bd..3eb633f 100644
--- a/src/test/regress/serial_schedule
+++ b/src/test/regress/serial_schedule
@@ -84,6 +84,7 @@ test: union
 test: case
 test: join
 test: aggregates
+test: groupingsets
 test: transactions
 ignore: random
 test: random
diff --git a/src/test/regress/sql/groupingsets.sql b/src/test/regress/sql/groupingsets.sql
new file mode 100644
index 0000000..aebcbbb
--- /dev/null
+++ b/src/test/regress/sql/groupingsets.sql
@@ -0,0 +1,153 @@
+--
+-- grouping sets
+--
+
+-- test data sources
+
+create temp view gstest1(a,b,v)
+  as values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),
+            (2,3,15),
+            (3,3,16),(3,4,17),
+            (4,1,18),(4,1,19);
+
+create temp table gstest2 (a integer, b integer, c integer, d integer,
+                           e integer, f integer, g integer, h integer);
+copy gstest2 from stdin;
+1	1	1	1	1	1	1	1
+1	1	1	1	1	1	1	2
+1	1	1	1	1	1	2	2
+1	1	1	1	1	2	2	2
+1	1	1	1	2	2	2	2
+1	1	1	2	2	2	2	2
+1	1	2	2	2	2	2	2
+1	2	2	2	2	2	2	2
+2	2	2	2	2	2	2	2
+\.
+
+create temp table gstest_empty (a integer, b integer, v integer);
+
+create function gstest_data(v integer, out a integer, out b integer)
+  returns setof record
+  as $f$
+    begin
+      return query select v, i from generate_series(1,3) i;
+    end;
+  $f$ language plpgsql;
+
+-- basic functionality
+
+-- simple rollup with multiple plain aggregates, with and without ordering
+-- (and with ordering differing from grouping)
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b);
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by a,b;
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by b desc, a;
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by coalesce(a,0)+coalesce(b,0);
+
+-- various types of ordered aggs
+select a, b, grouping(a,b),
+       array_agg(v order by v),
+       string_agg(v::text, ':' order by v desc),
+       percentile_disc(0.5) within group (order by v),
+       rank(1,2,12) within group (order by a,b,v)
+  from gstest1 group by rollup (a,b) order by a,b;
+
+-- test usage of grouped columns in direct args of aggs
+select grouping(a), a, array_agg(b),
+       rank(a) within group (order by b nulls first),
+       rank(a) within group (order by b nulls last)
+  from (values (1,1),(1,4),(1,5),(3,1),(3,2)) v(a,b)
+ group by rollup (a) order by a;
+
+-- nesting with window functions
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by rollup (a,b) order by rsum, a, b;
+
+-- empty input: first is 0 rows, second 1, third 3 etc.
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),a);
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),());
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),(),(),());
+select sum(v), count(*) from gstest_empty group by grouping sets ((),(),());
+
+-- empty input with joins tests some important code paths
+select t1.a, t2.b, sum(t1.v), count(*) from gstest_empty t1, gstest_empty t2
+ group by grouping sets ((t1.a,t2.b),());
+
+-- simple joins, var resolution, GROUPING on join vars
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1, gstest2 t2
+ group by grouping sets ((t1.a, t2.b), ());
+
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1 join gstest2 t2 on (t1.a=t2.a)
+ group by grouping sets ((t1.a, t2.b), ());
+
+select a, b, grouping(a, b), sum(t1.v), max(t2.c)
+  from gstest1 t1 join gstest2 t2 using (a,b)
+ group by grouping sets ((a, b), ());
+
+-- simple rescan tests
+
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by rollup (a,b);
+
+select *
+  from (values (1),(2)) v(x),
+       lateral (select a, b, sum(v.x) from gstest_data(v.x) group by rollup (a,b)) s;
+
+-- min max optimisation should still work with GROUP BY ()
+explain (costs off)
+  select min(unique1) from tenk1 GROUP BY ();
+
+-- Views with GROUPING SET queries
+CREATE VIEW gstest_view AS select a, b, grouping(a,b), sum(c), count(*), max(c)
+  from gstest2 group by rollup ((a,b,c),(c,d));
+
+select pg_get_viewdef('gstest_view'::regclass, true);
+
+-- Nested queries with 3 or more levels of nesting
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+select(select (select grouping(e,f) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+select(select (select grouping(c) from (values (1)) v2(c) GROUP BY c) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+
+-- Combinations of operations
+select a, b, c, d from gstest2 group by rollup(a,b),grouping sets(c,d);
+select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
+
+-- Tests for chained aggregates
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by grouping sets ((a,b),(a+1,b+1),(a+2,b+2));
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP((e+1),(f+1));
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY CUBE((e+1),(f+1)) ORDER BY (e+1),(f+1);
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by cube (a,b) order by rsum, a, b;
+select a, b, sum(c) from (values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),(2,3,15),(3,3,16),(3,4,17),(4,1,18),(4,1,19)) v(a,b,c) group by rollup (a,b);
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by cube (a,b) order by a,b;
+
+
+-- Agg level check. This query should error out.
+select (select grouping(a,b) from gstest2) from gstest2 group by a,b;
+
+--Nested queries
+select a, b, sum(c), count(*) from gstest2 group by grouping sets (rollup(a,b),a);
+
+-- HAVING queries
+select ten, sum(distinct four) from onek a
+group by grouping sets((ten,four),(ten))
+having exists (select 1 from onek b where sum(distinct a.four) = b.four);
+
+-- FILTER queries
+select ten, sum(distinct four) filter (where four::text ~ '123') from onek a
+group by rollup(ten);
+
+-- More rescan tests
+select * from (values (1),(2)) v(a) left join lateral (select v.a, four, ten, count(*) from onek group by cube(four,ten)) s on true order by v.a,four,ten;
+select array(select row(v.a,s1.*) from (select two,four, count(*) from onek group by cube(two,four) order by two,four) s1) from (values (1),(2)) v(a);
+
+-- end
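[Editor's note: for readers following the regression tests above, the semantics they exercise can be sketched outside the executor. The following is an illustrative Python model, not part of the patch: it shows how ROLLUP and CUBE clauses expand into lists of grouping sets and how the GROUPING() bitmask described in the patch's documentation is derived. All names here (`rollup`, `cube`, `grouping_bits`) are ours, chosen for illustration only.]

```python
from itertools import combinations

def rollup(cols):
    # ROLLUP(a, b, ...) expands to the prefixes of the column list,
    # longest first, ending with the empty grouping set ():
    # e.g. ROLLUP(a, b) -> (a, b), (a), ()
    return [tuple(cols[:i]) for i in range(len(cols), -1, -1)]

def cube(cols):
    # CUBE(a, b, ...) expands to all 2^n subsets of the column list,
    # larger subsets first; column order within each set is preserved.
    sets = []
    for r in range(len(cols), -1, -1):
        sets.extend(combinations(cols, r))
    return sets

def grouping_bits(args, grouping_set):
    # GROUPING(args...): the rightmost argument is the least-significant
    # bit; a bit is 1 when that argument is NOT part of the grouping set
    # that produced the current result row, 0 when it is.
    bits = 0
    for arg in args:
        bits = (bits << 1) | (0 if arg in grouping_set else 1)
    return bits
```

Under this model, `GROUP BY ROLLUP(a,b)` aggregates once per set in `rollup(['a','b'])`, and the `grouping` column in the 12-row gstest1 results above corresponds to `grouping_bits(['a','b'], s)` for each set `s`: 0 for the `(a,b)` rows, 1 for the `a`-only subtotals, and 3 for the grand total.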
#108 Michael Paquier
michael.paquier@gmail.com
In reply to: Andrew Gierth (#107)
Re: Final Patch for GROUPING SETS

On Wed, Jan 21, 2015 at 6:02 AM, Andrew Gierth <andrew@tao11.riddles.org.uk>
wrote:

Updated patch (mostly just conflict resolution):

- fix explain code to track changes to deparse context handling

- tiny expansion of some comments (clarify in nodeAgg header
comment that aggcontexts are now EContexts rather than just
memory contexts)

- declare support for features in sql_features.txt, which had been
previously overlooked

Patch moved to CF 2015-02.
--
Michael

#109 Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Andrew Gierth (#84)
1 attachment(s)
Re: Final Patch for GROUPING SETS

Updated patch:

- updated to latest head

- Removed MemoryContextDeleteChildren calls made redundant by the
recent change to MemoryContextReset

--
Andrew (irc:RhodiumToad)

Attachments:

gsp-all-latest.patch (text/x-patch)
diff --git a/contrib/pg_stat_statements/pg_stat_statements.c b/contrib/pg_stat_statements/pg_stat_statements.c
index 95616b3..f86164d 100644
--- a/contrib/pg_stat_statements/pg_stat_statements.c
+++ b/contrib/pg_stat_statements/pg_stat_statements.c
@@ -2200,6 +2200,7 @@ JumbleQuery(pgssJumbleState *jstate, Query *query)
 	JumbleExpr(jstate, (Node *) query->targetList);
 	JumbleExpr(jstate, (Node *) query->returningList);
 	JumbleExpr(jstate, (Node *) query->groupClause);
+	JumbleExpr(jstate, (Node *) query->groupingSets);
 	JumbleExpr(jstate, query->havingQual);
 	JumbleExpr(jstate, (Node *) query->windowClause);
 	JumbleExpr(jstate, (Node *) query->distinctClause);
@@ -2330,6 +2331,13 @@ JumbleExpr(pgssJumbleState *jstate, Node *node)
 				JumbleExpr(jstate, (Node *) expr->aggfilter);
 			}
 			break;
+		case T_GroupingFunc:
+			{
+				GroupingFunc *grpnode = (GroupingFunc *) node;
+
+				JumbleExpr(jstate, (Node *) grpnode->refs);
+			}
+			break;
 		case T_WindowFunc:
 			{
 				WindowFunc *expr = (WindowFunc *) node;
@@ -2607,6 +2615,12 @@ JumbleExpr(pgssJumbleState *jstate, Node *node)
 				JumbleExpr(jstate, (Node *) lfirst(temp));
 			}
 			break;
+		case T_IntList:
+			foreach(temp, (List *) node)
+			{
+				APP_JUMB(lfirst_int(temp));
+			}
+			break;
 		case T_SortGroupClause:
 			{
 				SortGroupClause *sgc = (SortGroupClause *) node;
@@ -2617,6 +2631,13 @@ JumbleExpr(pgssJumbleState *jstate, Node *node)
 				APP_JUMB(sgc->nulls_first);
 			}
 			break;
+		case T_GroupingSet:
+			{
+				GroupingSet *gsnode = (GroupingSet *) node;
+
+				JumbleExpr(jstate, (Node *) gsnode->content);
+			}
+			break;
 		case T_WindowClause:
 			{
 				WindowClause *wc = (WindowClause *) node;
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index da2ed67..3371e21 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -12069,7 +12069,9 @@ NULL baz</literallayout>(3 rows)</entry>
    <xref linkend="functions-aggregate-statistics-table">.
    The built-in ordered-set aggregate functions
    are listed in <xref linkend="functions-orderedset-table"> and
-   <xref linkend="functions-hypothetical-table">.
+   <xref linkend="functions-hypothetical-table">.  Grouping operations,
+   which are closely related to aggregate functions, are listed in
+   <xref linkend="functions-grouping-table">.
    The special syntax considerations for aggregate
    functions are explained in <xref linkend="syntax-aggregates">.
    Consult <xref linkend="tutorial-agg"> for additional introductory
@@ -13167,6 +13169,72 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab;
    to the rule specified in the <literal>ORDER BY</> clause.
   </para>
 
+  <table id="functions-grouping-table">
+   <title>Grouping Operations</title>
+
+   <tgroup cols="3">
+    <thead>
+     <row>
+      <entry>Function</entry>
+      <entry>Return Type</entry>
+      <entry>Description</entry>
+     </row>
+    </thead>
+
+    <tbody>
+
+     <row>
+      <entry>
+       <indexterm>
+        <primary>GROUPING</primary>
+       </indexterm>
+       <function>GROUPING(<replaceable class="parameter">args...</replaceable>)</function>
+      </entry>
+      <entry>
+       <type>integer</type>
+      </entry>
+      <entry>
+       Integer bitmask indicating which arguments are not being included in the current
+       grouping set
+      </entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+
+   <para>
+    Grouping operations are used in conjunction with grouping sets (see
+    <xref linkend="queries-grouping-sets">) to distinguish result rows.  The
+    arguments to the <literal>GROUPING</> operation are not actually evaluated,
+    but they must match exactly expressions given in the <literal>GROUP BY</>
+    clause of the current query level.  Bits are assigned with the rightmost
+    argument being the least-significant bit; each bit is 0 if the corresponding
+    expression is included in the grouping criteria of the grouping set generating
+    the result row, and 1 if it is not.  For example:
+<screen>
+<prompt>=&gt;</> <userinput>SELECT * FROM items_sold;</>
+ make  | model | sales
+-------+-------+-------
+ Foo   | GT    |  10
+ Foo   | Tour  |  20
+ Bar   | City  |  15
+ Bar   | Sport |  5
+(4 rows)
+
+<prompt>=&gt;</> <userinput>SELECT make, model, GROUPING(make,model), sum(sales) FROM items_sold GROUP BY ROLLUP(make,model);</>
+ make  | model | grouping | sum
+-------+-------+----------+-----
+ Foo   | GT    |        0 | 10
+ Foo   | Tour  |        0 | 20
+ Bar   | City  |        0 | 15
+ Bar   | Sport |        0 | 5
+ Foo   |       |        1 | 30
+ Bar   |       |        1 | 20
+       |       |        3 | 50
+(7 rows)
+</screen>
+   </para>
+
  </sect1>
 
  <sect1 id="functions-window">
diff --git a/doc/src/sgml/queries.sgml b/doc/src/sgml/queries.sgml
index 7dbad46..56419c7 100644
--- a/doc/src/sgml/queries.sgml
+++ b/doc/src/sgml/queries.sgml
@@ -1183,6 +1183,184 @@ SELECT product_id, p.name, (sum(s.units) * (p.price - p.cost)) AS profit
    </para>
   </sect2>
 
+  <sect2 id="queries-grouping-sets">
+   <title><literal>GROUPING SETS</>, <literal>CUBE</>, and <literal>ROLLUP</></title>
+
+   <indexterm zone="queries-grouping-sets">
+    <primary>GROUPING SETS</primary>
+   </indexterm>
+   <indexterm zone="queries-grouping-sets">
+    <primary>CUBE</primary>
+   </indexterm>
+   <indexterm zone="queries-grouping-sets">
+    <primary>ROLLUP</primary>
+   </indexterm>
+   <indexterm zone="queries-grouping-sets">
+    <primary>grouping sets</primary>
+   </indexterm>
+
+   <para>
+    More complex grouping operations than those described above are possible
+    using the concept of <firstterm>grouping sets</>.  The data selected by
+    the <literal>FROM</> and <literal>WHERE</> clauses is grouped separately
+    by each specified grouping set, aggregates computed for each group just as
+    for simple <literal>GROUP BY</> clauses, and then the results returned.
+    For example:
+<screen>
+<prompt>=&gt;</> <userinput>SELECT * FROM items_sold;</>
+ brand | size | sales
+-------+------+-------
+ Foo   | L    |  10
+ Foo   | M    |  20
+ Bar   | M    |  15
+ Bar   | L    |  5
+(4 rows)
+
+<prompt>=&gt;</> <userinput>SELECT brand, size, sum(sales) FROM items_sold GROUP BY GROUPING SETS ((brand), (size), ());</>
+ brand | size | sum
+-------+------+-----
+ Foo   |      |  30
+ Bar   |      |  20
+       | L    |  15
+       | M    |  35
+       |      |  50
+(5 rows)
+</screen>
+   </para>
+
+   <para>
+    Each sublist of <literal>GROUPING SETS</> may specify zero or more columns
+    or expressions and is interpreted the same way as though it were directly
+    in the <literal>GROUP BY</> clause.  An empty grouping set means that all
+    rows are aggregated down to a single group (which is output even if no
+    input rows were present), as described above for the case of aggregate
+    functions with no <literal>GROUP BY</> clause.
+   </para>
+
+   <para>
+    References to the grouping columns or expressions are replaced
+    by <literal>NULL</> values in result rows for grouping sets in which those
+    columns do not appear.  To distinguish which grouping a particular output
+    row resulted from, see <xref linkend="functions-grouping-table">.
+   </para>
+
+   <para>
+    A shorthand notation is provided for specifying two common types of grouping set.
+    A clause of the form
+<programlisting>
+ROLLUP ( <replaceable>e1</>, <replaceable>e2</>, <replaceable>e3</>, ... )
+</programlisting>
+    represents the given list of expressions and all prefixes of the list including
+    the empty list; thus it is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( <replaceable>e1</>, <replaceable>e2</>, <replaceable>e3</>, ... ),
+    ...
+    ( <replaceable>e1</>, <replaceable>e2</> ),
+    ( <replaceable>e1</> ),
+    ( )
+)
+</programlisting>
+    This is commonly used for analysis over hierarchical data; e.g. total
+    salary by department, division, and company-wide total.
+   </para>
+
+   <para>
+    A clause of the form
+<programlisting>
+CUBE ( <replaceable>e1</>, <replaceable>e2</>, ... )
+</programlisting>
+    represents the given list and all of its possible subsets (i.e. the power
+    set).  Thus
+<programlisting>
+CUBE ( a, b, c )
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( a, b, c ),
+    ( a, b    ),
+    ( a,    c ),
+    ( a       ),
+    (    b, c ),
+    (    b    ),
+    (       c ),
+    (         )
+)
+</programlisting>
+   </para>
+
+   <para>
+    The individual elements of a <literal>CUBE</> or <literal>ROLLUP</>
+    clause may be either individual expressions, or sub-lists of elements in
+    parentheses.  In the latter case, the sub-lists are treated as single
+    units for the purposes of generating the individual grouping sets.
+    For example:
+<programlisting>
+CUBE ( (a,b), (c,d) )
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( a, b, c, d ),
+    ( a, b       ),
+    (       c, d ),
+    (            )
+)
+</programlisting>
+    and
+<programlisting>
+ROLLUP ( a, (b,c), d )
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( a, b, c, d ),
+    ( a, b, c    ),
+    ( a          ),
+    (            )
+)
+</programlisting>
+   </para>
+
+   <para>
+    The <literal>CUBE</> and <literal>ROLLUP</> constructs can be used either
+    directly in the <literal>GROUP BY</> clause, or nested inside a
+    <literal>GROUPING SETS</> clause.  If one <literal>GROUPING SETS</> clause
+    is nested inside another, the effect is the same as if all the elements of
+    the inner clause had been written directly in the outer clause.
+   </para>
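+
+   <para>
+    For example (an equivalence that follows directly from the rule above):
+<programlisting>
+GROUP BY GROUPING SETS (a, GROUPING SETS (b, c), d)
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUP BY GROUPING SETS (a, b, c, d)
+</programlisting>
+   </para>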
+
+   <para>
+    If multiple grouping items are specified in a single <literal>GROUP BY</>
+    clause, then the final list of grouping sets is the cross product of the
+    individual items.  For example:
+<programlisting>
+GROUP BY a, CUBE(b,c), GROUPING SETS ((d), (e))
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUP BY GROUPING SETS (
+  (a,b,c,d), (a,b,c,e),
+  (a,b,d),   (a,b,e),
+  (a,c,d),   (a,c,e),
+  (a,d),     (a,e)
+)
+</programlisting>
+   </para>
+
+  <note>
+   <para>
+    The construct <literal>(a,b)</> is normally recognized in expressions as
+    a <link linkend="sql-syntax-row-constructors">row constructor</link>.
+    Within the <literal>GROUP BY</> clause, this does not apply at the top
+    levels of expressions, and <literal>(a,b)</> is parsed as a list of
+    expressions as described above.  If for some reason you <emphasis>need</>
+    a row constructor in a grouping expression, use <literal>ROW(a,b)</>.
+   </para>
+  </note>
+  </sect2>
+
   <sect2 id="queries-window">
    <title>Window Function Processing</title>
 
diff --git a/doc/src/sgml/ref/select.sgml b/doc/src/sgml/ref/select.sgml
index 01d24a5..d2df959 100644
--- a/doc/src/sgml/ref/select.sgml
+++ b/doc/src/sgml/ref/select.sgml
@@ -37,7 +37,7 @@ SELECT [ ALL | DISTINCT [ ON ( <replaceable class="parameter">expression</replac
     [ * | <replaceable class="parameter">expression</replaceable> [ [ AS ] <replaceable class="parameter">output_name</replaceable> ] [, ...] ]
     [ FROM <replaceable class="parameter">from_item</replaceable> [, ...] ]
     [ WHERE <replaceable class="parameter">condition</replaceable> ]
-    [ GROUP BY <replaceable class="parameter">expression</replaceable> [, ...] ]
+    [ GROUP BY <replaceable class="parameter">grouping_element</replaceable> [, ...] ]
     [ HAVING <replaceable class="parameter">condition</replaceable> [, ...] ]
     [ WINDOW <replaceable class="parameter">window_name</replaceable> AS ( <replaceable class="parameter">window_definition</replaceable> ) [, ...] ]
     [ { UNION | INTERSECT | EXCEPT } [ ALL | DISTINCT ] <replaceable class="parameter">select</replaceable> ]
@@ -60,6 +60,15 @@ SELECT [ ALL | DISTINCT [ ON ( <replaceable class="parameter">expression</replac
                 [ WITH ORDINALITY ] [ [ AS ] <replaceable class="parameter">alias</replaceable> [ ( <replaceable class="parameter">column_alias</replaceable> [, ...] ) ] ]
     <replaceable class="parameter">from_item</replaceable> [ NATURAL ] <replaceable class="parameter">join_type</replaceable> <replaceable class="parameter">from_item</replaceable> [ ON <replaceable class="parameter">join_condition</replaceable> | USING ( <replaceable class="parameter">join_column</replaceable> [, ...] ) ]
 
+<phrase>and <replaceable class="parameter">grouping_element</replaceable> can be one of:</phrase>
+
+    ( )
+    <replaceable class="parameter">expression</replaceable>
+    ( <replaceable class="parameter">expression</replaceable> [, ...] )
+    ROLLUP ( { <replaceable class="parameter">expression</replaceable> | ( <replaceable class="parameter">expression</replaceable> [, ...] ) } [, ...] )
+    CUBE ( { <replaceable class="parameter">expression</replaceable> | ( <replaceable class="parameter">expression</replaceable> [, ...] ) } [, ...] )
+    GROUPING SETS ( <replaceable class="parameter">grouping_element</replaceable> [, ...] )
+
 <phrase>and <replaceable class="parameter">with_query</replaceable> is:</phrase>
 
     <replaceable class="parameter">with_query_name</replaceable> [ ( <replaceable class="parameter">column_name</replaceable> [, ...] ) ] AS ( <replaceable class="parameter">select</replaceable> | <replaceable class="parameter">values</replaceable> | <replaceable class="parameter">insert</replaceable> | <replaceable class="parameter">update</replaceable> | <replaceable class="parameter">delete</replaceable> )
@@ -621,23 +630,35 @@ WHERE <replaceable class="parameter">condition</replaceable>
    <para>
     The optional <literal>GROUP BY</literal> clause has the general form
 <synopsis>
-GROUP BY <replaceable class="parameter">expression</replaceable> [, ...]
+GROUP BY <replaceable class="parameter">grouping_element</replaceable> [, ...]
 </synopsis>
    </para>
 
    <para>
     <literal>GROUP BY</literal> will condense into a single row all
     selected rows that share the same values for the grouped
-    expressions.  <replaceable
-    class="parameter">expression</replaceable> can be an input column
-    name, or the name or ordinal number of an output column
-    (<command>SELECT</command> list item), or an arbitrary
+    expressions.  An <replaceable
+    class="parameter">expression</replaceable> used inside a
+    <replaceable class="parameter">grouping_element</replaceable>
+    can be an input column name, or the name or ordinal number of an
+    output column (<command>SELECT</command> list item), or an arbitrary
     expression formed from input-column values.  In case of ambiguity,
     a <literal>GROUP BY</literal> name will be interpreted as an
     input-column name rather than an output column name.
    </para>
 
    <para>
+    If any of <literal>GROUPING SETS</>, <literal>ROLLUP</> or
+    <literal>CUBE</> are present as grouping elements, then the
+    <literal>GROUP BY</> clause as a whole defines some number of
+    independent <replaceable>grouping sets</>.  The effect of this is
+    equivalent to constructing a <literal>UNION ALL</> between
+    subqueries with the individual grouping sets as their
+    <literal>GROUP BY</> clauses.  For further details on the handling
+    of grouping sets see <xref linkend="queries-grouping-sets">.
+   </para>
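+
+   <para>
+    For instance, given a hypothetical table <structname>tab1</>,
+    <literal>GROUP BY GROUPING SETS ((a), (b))</> computes the same groups
+    as:
+<programlisting>
+SELECT a, NULL AS b, sum(c) FROM tab1 GROUP BY a
+UNION ALL
+SELECT NULL AS a, b, sum(c) FROM tab1 GROUP BY b
+</programlisting>
+   </para>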
+
+   <para>
     Aggregate functions, if any are used, are computed across all rows
     making up each group, producing a separate value for each group.
     (If there are aggregate functions but no <literal>GROUP BY</literal>
diff --git a/src/backend/catalog/sql_features.txt b/src/backend/catalog/sql_features.txt
index 3329264..db6a385 100644
--- a/src/backend/catalog/sql_features.txt
+++ b/src/backend/catalog/sql_features.txt
@@ -467,9 +467,9 @@ T331	Basic roles			YES
 T332	Extended roles			NO	mostly supported
 T341	Overloading of SQL-invoked functions and procedures			YES	
 T351	Bracketed SQL comments (/*...*/ comments)			YES	
-T431	Extended grouping capabilities			NO	
-T432	Nested and concatenated GROUPING SETS			NO	
-T433	Multiargument GROUPING function			NO	
+T431	Extended grouping capabilities			YES	
+T432	Nested and concatenated GROUPING SETS			YES	
+T433	Multiargument GROUPING function			YES	
 T434	GROUP BY DISTINCT			NO	
 T441	ABS and MOD functions			YES	
 T461	Symmetric BETWEEN predicate			YES	
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index a951c55..2ac3c61 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -82,6 +82,9 @@ static void show_merge_append_keys(MergeAppendState *mstate, List *ancestors,
 					   ExplainState *es);
 static void show_agg_keys(AggState *astate, List *ancestors,
 			  ExplainState *es);
+static void show_grouping_set_keys(PlanState *planstate, const char *qlabel,
+				int nkeys, AttrNumber *keycols, List *gsets,
+				List *ancestors, ExplainState *es);
 static void show_group_keys(GroupState *gstate, List *ancestors,
 				ExplainState *es);
 static void show_sort_group_keys(PlanState *planstate, const char *qlabel,
@@ -978,6 +981,10 @@ ExplainNode(PlanState *planstate, List *ancestors,
 					pname = "GroupAggregate";
 					strategy = "Sorted";
 					break;
+				case AGG_CHAINED:
+					pname = "ChainAggregate";
+					strategy = "Chained";
+					break;
 				case AGG_HASHED:
 					pname = "HashAggregate";
 					strategy = "Hashed";
@@ -1816,18 +1823,78 @@ show_agg_keys(AggState *astate, List *ancestors,
 {
 	Agg		   *plan = (Agg *) astate->ss.ps.plan;
 
-	if (plan->numCols > 0)
+	if (plan->numCols > 0 || plan->groupingSets)
 	{
 		/* The key columns refer to the tlist of the child plan */
 		ancestors = lcons(astate, ancestors);
-		show_sort_group_keys(outerPlanState(astate), "Group Key",
-							 plan->numCols, plan->grpColIdx,
-							 NULL, NULL, NULL,
-							 ancestors, es);
+
+		if (plan->groupingSets)
+			show_grouping_set_keys(outerPlanState(astate), "Grouping Sets",
+								   plan->numCols, plan->grpColIdx,
+								   plan->groupingSets,
+								   ancestors, es);
+		else
+			show_sort_group_keys(outerPlanState(astate), "Group Key",
+								 plan->numCols, plan->grpColIdx,
+								 NULL, NULL, NULL,
+								 ancestors, es);
+
 		ancestors = list_delete_first(ancestors);
 	}
 }
 
+static void
+show_grouping_set_keys(PlanState *planstate, const char *qlabel,
+					   int nkeys, AttrNumber *keycols, List *gsets,
+					   List *ancestors, ExplainState *es)
+{
+	Plan	   *plan = planstate->plan;
+	List	   *context;
+	bool		useprefix;
+	char	   *exprstr;
+	ListCell   *lc;
+
+	if (gsets == NIL)
+		return;
+
+	/* Set up deparsing context */
+	context = set_deparse_context_planstate(es->deparse_cxt,
+											(Node *) planstate,
+											ancestors);
+	useprefix = (list_length(es->rtable) > 1 || es->verbose);
+
+	ExplainOpenGroup("Grouping Sets", "Grouping Sets", false, es);
+
+	foreach(lc, gsets)
+	{
+		List	   *result = NIL;
+		ListCell   *lc2;
+
+		foreach(lc2, (List *) lfirst(lc))
+		{
+			Index		i = lfirst_int(lc2);
+			AttrNumber	keyresno = keycols[i];
+			TargetEntry *target = get_tle_by_resno(plan->targetlist,
+												   keyresno);
+
+			if (!target)
+				elog(ERROR, "no tlist entry for key %d", keyresno);
+			/* Deparse the expression, showing any top-level cast */
+			exprstr = deparse_expression((Node *) target->expr, context,
+										 useprefix, true);
+
+			result = lappend(result, exprstr);
+		}
+
+		if (!result && es->format == EXPLAIN_FORMAT_TEXT)
+			ExplainPropertyText("Group Key", "()", es);
+		else
+			ExplainPropertyListNested("Group Key", result, es);
+	}
+
+	ExplainCloseGroup("Grouping Sets", "Grouping Sets", false, es);
+}
+
 /*
  * Show the grouping keys for a Group node.
  */
@@ -2444,6 +2511,52 @@ ExplainPropertyList(const char *qlabel, List *data, ExplainState *es)
 }
 
 /*
+ * Explain a property that takes the form of a list of unlabeled items within
+ * another list.  "data" is a list of C strings.
+ */
+void
+ExplainPropertyListNested(const char *qlabel, List *data, ExplainState *es)
+{
+	ListCell   *lc;
+	bool		first = true;
+
+	switch (es->format)
+	{
+		case EXPLAIN_FORMAT_TEXT:
+		case EXPLAIN_FORMAT_XML:
+			ExplainPropertyList(qlabel, data, es);
+			return;
+
+		case EXPLAIN_FORMAT_JSON:
+			ExplainJSONLineEnding(es);
+			appendStringInfoSpaces(es->str, es->indent * 2);
+			appendStringInfoChar(es->str, '[');
+			foreach(lc, data)
+			{
+				if (!first)
+					appendStringInfoString(es->str, ", ");
+				escape_json(es->str, (const char *) lfirst(lc));
+				first = false;
+			}
+			appendStringInfoChar(es->str, ']');
+			break;
+
+		case EXPLAIN_FORMAT_YAML:
+			ExplainYAMLLineStarting(es);
+			appendStringInfoString(es->str, "- [");
+			foreach(lc, data)
+			{
+				if (!first)
+					appendStringInfoString(es->str, ", ");
+				escape_yaml(es->str, (const char *) lfirst(lc));
+				first = false;
+			}
+			appendStringInfoChar(es->str, ']');
+			break;
+	}
+}
+
+/*
  * Explain a simple property.
  *
  * If "numeric" is true, the value is a number (or other value that
diff --git a/src/backend/executor/execQual.c b/src/backend/executor/execQual.c
index d94fe58..97bfbbc 100644
--- a/src/backend/executor/execQual.c
+++ b/src/backend/executor/execQual.c
@@ -74,6 +74,8 @@ static Datum ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
 				  bool *isNull, ExprDoneCond *isDone);
 static Datum ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
 					  bool *isNull, ExprDoneCond *isDone);
+static Datum ExecEvalScalarGroupedVarFast(ExprState *exprstate, ExprContext *econtext,
+					  bool *isNull, ExprDoneCond *isDone);
 static Datum ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate,
 					ExprContext *econtext,
 					bool *isNull, ExprDoneCond *isDone);
@@ -181,6 +183,9 @@ static Datum ExecEvalArrayCoerceExpr(ArrayCoerceExprState *astate,
 						bool *isNull, ExprDoneCond *isDone);
 static Datum ExecEvalCurrentOfExpr(ExprState *exprstate, ExprContext *econtext,
 					  bool *isNull, ExprDoneCond *isDone);
+static Datum ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
+						ExprContext *econtext,
+						bool *isNull, ExprDoneCond *isDone);
 
 
 /* ----------------------------------------------------------------
@@ -558,6 +563,8 @@ ExecEvalWindowFunc(WindowFuncExprState *wfunc, ExprContext *econtext,
  * Note: ExecEvalScalarVar is executed only the first time through in a given
  * plan; it changes the ExprState's function pointer to pass control directly
  * to ExecEvalScalarVarFast after making one-time checks.
+ *
+ * We share this code with GroupedVar for simplicity.
  * ----------------------------------------------------------------
  */
 static Datum
@@ -635,8 +642,24 @@ ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
 		}
 	}
 
-	/* Skip the checking on future executions of node */
-	exprstate->evalfunc = ExecEvalScalarVarFast;
+	if (IsA(variable, GroupedVar))
+	{
+		Assert(variable->varno == OUTER_VAR);
+
+		/* Skip the checking on future executions of node */
+		exprstate->evalfunc = ExecEvalScalarGroupedVarFast;
+
+		if (!bms_is_member(attnum, econtext->grouped_cols))
+		{
+			*isNull = true;
+			return (Datum) 0;
+		}
+	}
+	else
+	{
+		/* Skip the checking on future executions of node */
+		exprstate->evalfunc = ExecEvalScalarVarFast;
+	}
 
 	/* Fetch the value from the slot */
 	return slot_getattr(slot, attnum, isNull);
@@ -684,6 +707,31 @@ ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
 	return slot_getattr(slot, attnum, isNull);
 }
 
+static Datum
+ExecEvalScalarGroupedVarFast(ExprState *exprstate, ExprContext *econtext,
+							 bool *isNull, ExprDoneCond *isDone)
+{
+	GroupedVar *variable = (GroupedVar *) exprstate->expr;
+	TupleTableSlot *slot;
+	AttrNumber	attnum;
+
+	if (isDone)
+		*isDone = ExprSingleResult;
+
+	slot = econtext->ecxt_outertuple;
+
+	attnum = variable->varattno;
+
+	if (!bms_is_member(attnum, econtext->grouped_cols))
+	{
+		*isNull = true;
+		return (Datum) 0;
+	}
+
+	/* Fetch the value from the slot */
+	return slot_getattr(slot, attnum, isNull);
+}
+
 /* ----------------------------------------------------------------
  *		ExecEvalWholeRowVar
  *
@@ -3016,6 +3064,44 @@ ExecEvalCaseTestExpr(ExprState *exprstate,
 	return econtext->caseValue_datum;
 }
 
+/*
+ * ExecEvalGroupingFuncExpr
+ *
+ * Return a bitmask with a bit for each (unevaluated) argument expression
+ * (rightmost arg is least significant bit).
+ *
+ * A bit is set if the corresponding expression is NOT part of the set of
+ * grouping expressions in the current grouping set.
+ */
+
+static Datum
+ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
+						 ExprContext *econtext,
+						 bool *isNull,
+						 ExprDoneCond *isDone)
+{
+	int result = 0;
+	int attnum = 0;
+	ListCell *lc;
+
+	if (isDone)
+		*isDone = ExprSingleResult;
+
+	*isNull = false;
+
+	foreach(lc, (gstate->clauses))
+	{
+		attnum = lfirst_int(lc);
+
+		result = result << 1;
+
+		if (!bms_is_member(attnum, econtext->grouped_cols))
+			result = result | 1;
+	}
+
+	return (Datum) result;
+}
+
 /* ----------------------------------------------------------------
  *		ExecEvalArray - ARRAY[] expressions
  * ----------------------------------------------------------------
@@ -4418,6 +4504,11 @@ ExecInitExpr(Expr *node, PlanState *parent)
 				state->evalfunc = ExecEvalScalarVar;
 			}
 			break;
+		case T_GroupedVar:
+			Assert(((Var *) node)->varattno != InvalidAttrNumber);
+			state = (ExprState *) makeNode(ExprState);
+			state->evalfunc = ExecEvalScalarVar;
+			break;
 		case T_Const:
 			state = (ExprState *) makeNode(ExprState);
 			state->evalfunc = ExecEvalConst;
@@ -4486,6 +4577,27 @@ ExecInitExpr(Expr *node, PlanState *parent)
 				state = (ExprState *) astate;
 			}
 			break;
+		case T_GroupingFunc:
+			{
+				GroupingFunc *grp_node = (GroupingFunc *) node;
+				GroupingFuncExprState *grp_state = makeNode(GroupingFuncExprState);
+				Agg		   *agg = NULL;
+
+				if (!parent || !IsA(parent->plan, Agg))
+					elog(ERROR, "parent of GROUPING is not an Agg node");
+
+				agg = (Agg *) (parent->plan);
+
+				if (agg->groupingSets)
+					grp_state->clauses = grp_node->cols;
+				else
+					grp_state->clauses = NIL;
+
+				state = (ExprState *) grp_state;
+				state->evalfunc = (ExprStateEvalFunc) ExecEvalGroupingFuncExpr;
+			}
+			break;
 		case T_WindowFunc:
 			{
 				WindowFunc *wfunc = (WindowFunc *) node;
diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c
index 022041b..8709b68 100644
--- a/src/backend/executor/execUtils.c
+++ b/src/backend/executor/execUtils.c
@@ -151,6 +151,7 @@ CreateExecutorState(void)
 	estate->es_epqTupleSet = NULL;
 	estate->es_epqScanDone = NULL;
 
+	estate->agg_chain_head = NULL;
 	/*
 	 * Return the executor state structure
 	 */
@@ -651,9 +652,10 @@ get_last_attnums(Node *node, ProjectionInfo *projInfo)
 	/*
 	 * Don't examine the arguments or filters of Aggrefs or WindowFuncs,
 	 * because those do not represent expressions to be evaluated within the
-	 * overall targetlist's econtext.
+	 * overall targetlist's econtext.  GroupingFunc arguments are never
+	 * evaluated at all.
 	 */
-	if (IsA(node, Aggref))
+	if (IsA(node, Aggref) || IsA(node, GroupingFunc))
 		return false;
 	if (IsA(node, WindowFunc))
 		return false;
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index 9ff0eff..213c15c 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -45,15 +45,19 @@
  *	  needed to allow resolution of a polymorphic aggregate's result type.
  *
  *	  We compute aggregate input expressions and run the transition functions
- *	  in a temporary econtext (aggstate->tmpcontext).  This is reset at
- *	  least once per input tuple, so when the transvalue datatype is
+ *	  in a temporary econtext (aggstate->tmpcontext).  This is reset at least
+ *	  once per input tuple, so when the transvalue datatype is
  *	  pass-by-reference, we have to be careful to copy it into a longer-lived
- *	  memory context, and free the prior value to avoid memory leakage.
- *	  We store transvalues in the memory context aggstate->aggcontext,
- *	  which is also used for the hashtable structures in AGG_HASHED mode.
- *	  The node's regular econtext (aggstate->ss.ps.ps_ExprContext)
- *	  is used to run finalize functions and compute the output tuple;
- *	  this context can be reset once per output tuple.
+ *	  memory context, and free the prior value to avoid memory leakage.  We
+ *	  store transvalues in another set of econtexts, aggstate->aggcontexts (one
+ *	  per grouping set, see below), which are also used for the hashtable
+ *	  structures in AGG_HASHED mode.  These econtexts are rescanned, not just
+ *	  reset, at group boundaries so that aggregate transition functions can
+ *	  register shutdown callbacks via AggRegisterCallback.
+ *
+ *	  The node's regular econtext (aggstate->ss.ps.ps_ExprContext) is used to
+ *	  run finalize functions and compute the output tuple; this context can be
+ *	  reset once per output tuple.
  *
  *	  The executor's AggState node is passed as the fmgr "context" value in
  *	  all transfunc and finalfunc calls.  It is not recommended that the
@@ -84,6 +88,48 @@
  *	  need some fallback logic to use this, since there's no Aggref node
  *	  for a window function.)
  *
+ *	  Grouping sets:
+ *
+ *	  A list of grouping sets which is structurally equivalent to a ROLLUP
+ *	  clause (e.g. (a,b,c), (a,b), (a)) can be processed in a single pass over
+ *	  ordered data.  We do this by keeping a separate set of transition values
+ *	  for each grouping set being concurrently processed; for each input tuple
+ *	  we update them all, and on group boundaries we reset some initial subset
+ *	  of the states (the list of grouping sets is ordered from most specific to
+ *	  least specific).  One AGG_SORTED node thus handles any number of grouping
+ *	  sets as long as they share a sort order.
+ *
+ *	  To handle multiple grouping sets that _don't_ share a sort order, we use
+ *	  a different strategy.  An AGG_CHAINED node receives rows in sorted order
+ *	  and returns them unchanged, but computes transition values for its own
+ *	  list of grouping sets.  At group boundaries, rather than returning the
+ *	  aggregated row (which is incompatible with the input rows), it writes it
+ *	  to a side-channel in the form of a tuplestore.  Thus, a number of
+ *	  AGG_CHAINED nodes are associated with a single AGG_SORTED node (the
+ *	  "chain head"), which creates the side channel and, when it has returned
+ *	  all of its own data, returns the tuples from the tuplestore to its own
+ *	  caller.
+ *
+ *	  (Because the AGG_CHAINED node does not project aggregate values into the
+ *	  main executor path, its targetlist and qual are dummy, and it gets the
+ *	  real aggregate targetlist and qual from the chain head node.)
+ *
+ *	  In order to avoid excess memory consumption from a chain of alternating
+ *	  Sort and AGG_CHAINED nodes, we reset each child Sort node preemptively,
+ *	  allowing us to cap the memory usage for all the sorts in the chain at
+ *	  twice the usage for a single node.
+ *
+ *	  From the perspective of aggregate transition and final functions, the
+ *	  only issue regarding grouping sets is this: a single call site (flinfo)
+ *	  of an aggregate function may be used for updating several different
+ *	  transition values in turn. So the function must not cache in the flinfo
+ *	  anything which logically belongs as part of the transition value (most
+ *	  importantly, the memory context in which the transition value exists).
+ *	  The support API functions (AggCheckCallContext, AggRegisterCallback) are
+ *	  sensitive to the grouping set for which the aggregate function is
+ *	  currently being called.
+ *
+ *	  TODO: AGG_HASHED doesn't support multiple grouping sets yet.
  *
  * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
@@ -241,9 +287,11 @@ typedef struct AggStatePerAggData
 	 * then at completion of the input tuple group, we scan the sorted values,
 	 * eliminate duplicates if needed, and run the transition function on the
 	 * rest.
+	 *
+	 * We need a separate tuplesort for each grouping set.
 	 */
 
-	Tuplesortstate *sortstate;	/* sort object, if DISTINCT or ORDER BY */
+	Tuplesortstate **sortstates;	/* sort objects, if DISTINCT or ORDER BY */
 
 	/*
 	 * This field is a pre-initialized FunctionCallInfo struct used for
@@ -304,7 +352,8 @@ typedef struct AggHashEntryData
 
 static void initialize_aggregates(AggState *aggstate,
 					  AggStatePerAgg peragg,
-					  AggStatePerGroup pergroup);
+					  AggStatePerGroup pergroup,
+					  int numReset);
 static void advance_transition_function(AggState *aggstate,
 							AggStatePerAgg peraggstate,
 							AggStatePerGroup pergroupstate);
@@ -325,6 +374,7 @@ static void build_hash_table(AggState *aggstate);
 static AggHashEntry lookup_hash_entry(AggState *aggstate,
 				  TupleTableSlot *inputslot);
 static TupleTableSlot *agg_retrieve_direct(AggState *aggstate);
+static TupleTableSlot *agg_retrieve_chained(AggState *aggstate);
 static void agg_fill_hash_table(AggState *aggstate);
 static TupleTableSlot *agg_retrieve_hash_table(AggState *aggstate);
 static Datum GetAggInitVal(Datum textInitVal, Oid transtype);
@@ -333,90 +383,109 @@ static Datum GetAggInitVal(Datum textInitVal, Oid transtype);
 /*
  * Initialize all aggregates for a new group of input values.
  *
+ * If there are multiple grouping sets, we initialize only the first numReset
+ * of them (the grouping sets are ordered so that the most specific one, which
+ * is reset most often, is first). As a convenience, if numReset is < 1, we
+ * reinitialize all sets.
+ *
  * When called, CurrentMemoryContext should be the per-query context.
  */
 static void
 initialize_aggregates(AggState *aggstate,
 					  AggStatePerAgg peragg,
-					  AggStatePerGroup pergroup)
+					  AggStatePerGroup pergroup,
+					  int numReset)
 {
 	int			aggno;
+	int         numGroupingSets = Max(aggstate->numsets, 1);
+	int         setno = 0;
+
+	if (numReset < 1)
+		numReset = numGroupingSets;
 
 	for (aggno = 0; aggno < aggstate->numaggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &peragg[aggno];
-		AggStatePerGroup pergroupstate = &pergroup[aggno];
 
 		/*
 		 * Start a fresh sort operation for each DISTINCT/ORDER BY aggregate.
 		 */
 		if (peraggstate->numSortCols > 0)
 		{
-			/*
-			 * In case of rescan, maybe there could be an uncompleted sort
-			 * operation?  Clean it up if so.
-			 */
-			if (peraggstate->sortstate)
-				tuplesort_end(peraggstate->sortstate);
+			for (setno = 0; setno < numReset; setno++)
+			{
+				/*
+				 * In case of rescan, maybe there could be an uncompleted sort
+				 * operation?  Clean it up if so.
+				 */
+				if (peraggstate->sortstates[setno])
+					tuplesort_end(peraggstate->sortstates[setno]);
 
-			/*
-			 * We use a plain Datum sorter when there's a single input column;
-			 * otherwise sort the full tuple.  (See comments for
-			 * process_ordered_aggregate_single.)
-			 *
-			 * In the future, we should consider forcing the
-			 * tuplesort_begin_heap() case when the abbreviated key
-			 * optimization can thereby be used, even when numInputs is 1.
-			 */
-			peraggstate->sortstate =
-				(peraggstate->numInputs == 1) ?
-				tuplesort_begin_datum(peraggstate->evaldesc->attrs[0]->atttypid,
-									  peraggstate->sortOperators[0],
-									  peraggstate->sortCollations[0],
-									  peraggstate->sortNullsFirst[0],
-									  work_mem, false) :
-				tuplesort_begin_heap(peraggstate->evaldesc,
-									 peraggstate->numSortCols,
-									 peraggstate->sortColIdx,
-									 peraggstate->sortOperators,
-									 peraggstate->sortCollations,
-									 peraggstate->sortNullsFirst,
-									 work_mem, false);
+				/*
+				 * We use a plain Datum sorter when there's a single input column;
+				 * otherwise sort the full tuple.  (See comments for
+				 * process_ordered_aggregate_single.)
+				 *
+				 * In the future, we should consider forcing the
+				 * tuplesort_begin_heap() case when the abbreviated key
+				 * optimization can thereby be used, even when numInputs is 1.
+				 */
+				peraggstate->sortstates[setno] =
+					(peraggstate->numInputs == 1) ?
+					tuplesort_begin_datum(peraggstate->evaldesc->attrs[0]->atttypid,
+										  peraggstate->sortOperators[0],
+										  peraggstate->sortCollations[0],
+										  peraggstate->sortNullsFirst[0],
+										  work_mem, false) :
+					tuplesort_begin_heap(peraggstate->evaldesc,
+										 peraggstate->numSortCols,
+										 peraggstate->sortColIdx,
+										 peraggstate->sortOperators,
+										 peraggstate->sortCollations,
+										 peraggstate->sortNullsFirst,
+										 work_mem, false);
+			}
 		}
 
-		/*
-		 * (Re)set transValue to the initial value.
-		 *
-		 * Note that when the initial value is pass-by-ref, we must copy it
-		 * (into the aggcontext) since we will pfree the transValue later.
-		 */
-		if (peraggstate->initValueIsNull)
-			pergroupstate->transValue = peraggstate->initValue;
-		else
+		for (setno = 0; setno < numReset; setno++)
 		{
-			MemoryContext oldContext;
+			AggStatePerGroup pergroupstate = &pergroup[aggno + (setno * (aggstate->numaggs))];
 
-			oldContext = MemoryContextSwitchTo(aggstate->aggcontext);
-			pergroupstate->transValue = datumCopy(peraggstate->initValue,
-												  peraggstate->transtypeByVal,
-												  peraggstate->transtypeLen);
-			MemoryContextSwitchTo(oldContext);
+			/*
+			 * (Re)set transValue to the initial value.
+			 *
+			 * Note that when the initial value is pass-by-ref, we must copy it
+			 * (into the aggcontext) since we will pfree the transValue later.
+			 */
+			if (peraggstate->initValueIsNull)
+				pergroupstate->transValue = peraggstate->initValue;
+			else
+			{
+				MemoryContext oldContext;
+
+				oldContext = MemoryContextSwitchTo(aggstate->aggcontexts[setno]->ecxt_per_tuple_memory);
+				pergroupstate->transValue = datumCopy(peraggstate->initValue,
+													  peraggstate->transtypeByVal,
+													  peraggstate->transtypeLen);
+				MemoryContextSwitchTo(oldContext);
+			}
+			pergroupstate->transValueIsNull = peraggstate->initValueIsNull;
+
+			/*
+			 * If the initial value for the transition state doesn't exist in the
+			 * pg_aggregate table then we will let the first non-NULL value
+			 * returned from the outer procNode become the initial value. (This is
+			 * useful for aggregates like max() and min().) The noTransValue flag
+			 * signals that we still need to do this.
+			 */
+			pergroupstate->noTransValue = peraggstate->initValueIsNull;
 		}
-		pergroupstate->transValueIsNull = peraggstate->initValueIsNull;
-
-		/*
-		 * If the initial value for the transition state doesn't exist in the
-		 * pg_aggregate table then we will let the first non-NULL value
-		 * returned from the outer procNode become the initial value. (This is
-		 * useful for aggregates like max() and min().) The noTransValue flag
-		 * signals that we still need to do this.
-		 */
-		pergroupstate->noTransValue = peraggstate->initValueIsNull;
 	}
 }
 
 /*
- * Given new input value(s), advance the transition function of an aggregate.
+ * Given new input value(s), advance the transition function of one aggregate
+ * within one grouping set only (already set in aggstate->current_set).
  *
  * The new values (and null flags) have been preloaded into argument positions
  * 1 and up in peraggstate->transfn_fcinfo, so that we needn't copy them again
@@ -459,7 +528,7 @@ advance_transition_function(AggState *aggstate,
 			 * We must copy the datum into aggcontext if it is pass-by-ref. We
 			 * do not need to pfree the old transValue, since it's NULL.
 			 */
-			oldContext = MemoryContextSwitchTo(aggstate->aggcontext);
+			oldContext = MemoryContextSwitchTo(aggstate->aggcontexts[aggstate->current_set]->ecxt_per_tuple_memory);
 			pergroupstate->transValue = datumCopy(fcinfo->arg[1],
 												  peraggstate->transtypeByVal,
 												  peraggstate->transtypeLen);
@@ -507,7 +576,7 @@ advance_transition_function(AggState *aggstate,
 	{
 		if (!fcinfo->isnull)
 		{
-			MemoryContextSwitchTo(aggstate->aggcontext);
+			MemoryContextSwitchTo(aggstate->aggcontexts[aggstate->current_set]->ecxt_per_tuple_memory);
 			newVal = datumCopy(newVal,
 							   peraggstate->transtypeByVal,
 							   peraggstate->transtypeLen);
@@ -534,11 +603,13 @@ static void
 advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 {
 	int			aggno;
+	int			setno = 0;
+	int			numGroupingSets = Max(aggstate->numsets, 1);
+	int			numAggs = aggstate->numaggs;
 
-	for (aggno = 0; aggno < aggstate->numaggs; aggno++)
+	for (aggno = 0; aggno < numAggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &aggstate->peragg[aggno];
-		AggStatePerGroup pergroupstate = &pergroup[aggno];
 		ExprState  *filter = peraggstate->aggrefstate->aggfilter;
 		int			numTransInputs = peraggstate->numTransInputs;
 		int			i;
@@ -582,13 +653,16 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 					continue;
 			}
 
-			/* OK, put the tuple into the tuplesort object */
-			if (peraggstate->numInputs == 1)
-				tuplesort_putdatum(peraggstate->sortstate,
-								   slot->tts_values[0],
-								   slot->tts_isnull[0]);
-			else
-				tuplesort_puttupleslot(peraggstate->sortstate, slot);
+			for (setno = 0; setno < numGroupingSets; setno++)
+			{
+				/* OK, put the tuple into the tuplesort object */
+				if (peraggstate->numInputs == 1)
+					tuplesort_putdatum(peraggstate->sortstates[setno],
+									   slot->tts_values[0],
+									   slot->tts_isnull[0]);
+				else
+					tuplesort_puttupleslot(peraggstate->sortstates[setno], slot);
+			}
 		}
 		else
 		{
@@ -604,7 +678,14 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 				fcinfo->argnull[i + 1] = slot->tts_isnull[i];
 			}
 
-			advance_transition_function(aggstate, peraggstate, pergroupstate);
+			for (setno = 0; setno < numGroupingSets; setno++)
+			{
+				AggStatePerGroup pergroupstate = &pergroup[aggno + (setno * numAggs)];
+
+				aggstate->current_set = setno;
+
+				advance_transition_function(aggstate, peraggstate, pergroupstate);
+			}
 		}
 	}
 }
@@ -627,6 +708,9 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
  * is around 300% faster.  (The speedup for by-reference types is less
  * but still noticeable.)
  *
+ * This function handles only one grouping set (already set in
+ * aggstate->current_set).
+ *
  * When called, CurrentMemoryContext should be the per-query context.
  */
 static void
@@ -646,7 +730,7 @@ process_ordered_aggregate_single(AggState *aggstate,
 
 	Assert(peraggstate->numDistinctCols < 2);
 
-	tuplesort_performsort(peraggstate->sortstate);
+	tuplesort_performsort(peraggstate->sortstates[aggstate->current_set]);
 
 	/* Load the column into argument 1 (arg 0 will be transition value) */
 	newVal = fcinfo->arg + 1;
@@ -658,7 +742,7 @@ process_ordered_aggregate_single(AggState *aggstate,
 	 * pfree them when they are no longer needed.
 	 */
 
-	while (tuplesort_getdatum(peraggstate->sortstate, true,
+	while (tuplesort_getdatum(peraggstate->sortstates[aggstate->current_set], true,
 							  newVal, isNull))
 	{
 		/*
@@ -702,8 +786,8 @@ process_ordered_aggregate_single(AggState *aggstate,
 	if (!oldIsNull && !peraggstate->inputtypeByVal)
 		pfree(DatumGetPointer(oldVal));
 
-	tuplesort_end(peraggstate->sortstate);
-	peraggstate->sortstate = NULL;
+	tuplesort_end(peraggstate->sortstates[aggstate->current_set]);
+	peraggstate->sortstates[aggstate->current_set] = NULL;
 }
 
 /*
@@ -713,6 +797,9 @@ process_ordered_aggregate_single(AggState *aggstate,
  * sort, read out the values in sorted order, and run the transition
  * function on each value (applying DISTINCT if appropriate).
  *
+ * This function handles only one grouping set (already set in
+ * aggstate->current_set).
+ *
  * When called, CurrentMemoryContext should be the per-query context.
  */
 static void
@@ -729,13 +816,13 @@ process_ordered_aggregate_multi(AggState *aggstate,
 	bool		haveOldValue = false;
 	int			i;
 
-	tuplesort_performsort(peraggstate->sortstate);
+	tuplesort_performsort(peraggstate->sortstates[aggstate->current_set]);
 
 	ExecClearTuple(slot1);
 	if (slot2)
 		ExecClearTuple(slot2);
 
-	while (tuplesort_gettupleslot(peraggstate->sortstate, true, slot1))
+	while (tuplesort_gettupleslot(peraggstate->sortstates[aggstate->current_set], true, slot1))
 	{
 		/*
 		 * Extract the first numTransInputs columns as datums to pass to the
@@ -783,13 +870,16 @@ process_ordered_aggregate_multi(AggState *aggstate,
 	if (slot2)
 		ExecClearTuple(slot2);
 
-	tuplesort_end(peraggstate->sortstate);
-	peraggstate->sortstate = NULL;
+	tuplesort_end(peraggstate->sortstates[aggstate->current_set]);
+	peraggstate->sortstates[aggstate->current_set] = NULL;
 }
 
 /*
  * Compute the final value of one aggregate.
  *
+ * This function handles only one grouping set (already set in
+ * aggstate->current_set).
+ *
  * The finalfunction will be run, and the result delivered, in the
  * output-tuple context; caller's CurrentMemoryContext does not matter.
  */
@@ -836,7 +926,7 @@ finalize_aggregate(AggState *aggstate,
 		/* set up aggstate->curperagg for AggGetAggref() */
 		aggstate->curperagg = peraggstate;
 
-		InitFunctionCallInfoData(fcinfo, &(peraggstate->finalfn),
+		InitFunctionCallInfoData(fcinfo, &peraggstate->finalfn,
 								 numFinalArgs,
 								 peraggstate->aggCollation,
 								 (void *) aggstate, NULL);
@@ -920,7 +1010,8 @@ find_unaggregated_cols_walker(Node *node, Bitmapset **colnos)
 		*colnos = bms_add_member(*colnos, var->varattno);
 		return false;
 	}
-	if (IsA(node, Aggref))		/* do not descend into aggregate exprs */
+	if (IsA(node, Aggref) || IsA(node, GroupingFunc))
+		/* do not descend into aggregate exprs */
 		return false;
 	return expression_tree_walker(node, find_unaggregated_cols_walker,
 								  (void *) colnos);
@@ -950,7 +1041,7 @@ build_hash_table(AggState *aggstate)
 											  aggstate->hashfunctions,
 											  node->numGroups,
 											  entrysize,
-											  aggstate->aggcontext,
+											  aggstate->aggcontexts[0]->ecxt_per_tuple_memory,
 											  tmpmem);
 }
 
@@ -1061,7 +1152,7 @@ lookup_hash_entry(AggState *aggstate, TupleTableSlot *inputslot)
 	if (isnew)
 	{
 		/* initialize aggregates for new tuple group */
-		initialize_aggregates(aggstate, aggstate->peragg, entry->pergroup);
+		initialize_aggregates(aggstate, aggstate->peragg, entry->pergroup, 0);
 	}
 
 	return entry;
@@ -1083,6 +1174,8 @@ lookup_hash_entry(AggState *aggstate, TupleTableSlot *inputslot)
 TupleTableSlot *
 ExecAgg(AggState *node)
 {
+	TupleTableSlot *result;
+
 	/*
 	 * Check to see if we're still projecting out tuples from a previous agg
 	 * tuple (because there is a function-returning-set in the projection
@@ -1090,7 +1183,6 @@ ExecAgg(AggState *node)
 	 */
 	if (node->ss.ps.ps_TupFromTlist)
 	{
-		TupleTableSlot *result;
 		ExprDoneCond isDone;
 
 		result = ExecProject(node->ss.ps.ps_ProjInfo, &isDone);
@@ -1101,22 +1193,48 @@ ExecAgg(AggState *node)
 	}
 
 	/*
-	 * Exit if nothing left to do.  (We must do the ps_TupFromTlist check
-	 * first, because in some cases agg_done gets set before we emit the final
-	 * aggregate tuple, and we have to finish running SRFs for it.)
+	 * We must do the ps_TupFromTlist check above before testing agg_done,
+	 * because in some cases agg_done gets set before we emit the final
+	 * aggregate tuple, and we have to finish running SRFs for it.
 	 */
-	if (node->agg_done)
-		return NULL;
+	if (!node->agg_done)
+	{
+		/* Dispatch based on strategy */
+		switch (((Agg *) node->ss.ps.plan)->aggstrategy)
+		{
+			case AGG_HASHED:
+				if (!node->table_filled)
+					agg_fill_hash_table(node);
+				result = agg_retrieve_hash_table(node);
+				break;
+			case AGG_CHAINED:
+				result = agg_retrieve_chained(node);
+				break;
+			default:
+				result = agg_retrieve_direct(node);
+				break;
+		}
+
+		if (!TupIsNull(result))
+			return result;
+	}
 
-	/* Dispatch based on strategy */
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+	/*
+	 * We've completed all locally computed projections; now we drain the side
+	 * channel of projections from chained nodes, if any.
+	 */
+	if (!node->chain_done)
 	{
-		if (!node->table_filled)
-			agg_fill_hash_table(node);
-		return agg_retrieve_hash_table(node);
+		Assert(node->chain_tuplestore);
+		result = node->ss.ps.ps_ResultTupleSlot;
+		ExecClearTuple(result);
+		if (tuplestore_gettupleslot(node->chain_tuplestore,
+									true, false, result))
+			return result;
+		node->chain_done = true;
 	}
-	else
-		return agg_retrieve_direct(node);
+
+	return NULL;
 }
 
 /*
@@ -1136,6 +1254,12 @@ agg_retrieve_direct(AggState *aggstate)
 	TupleTableSlot *outerslot;
 	TupleTableSlot *firstSlot;
 	int			aggno;
+	bool		hasGroupingSets = aggstate->numsets > 0;
+	int			numGroupingSets = Max(aggstate->numsets, 1);
+	int			currentSet = 0;
+	int			nextSetSize = 0;
+	int			numReset = 1;
+	int			i;
 
 	/*
 	 * get state info from node
@@ -1154,39 +1278,20 @@ agg_retrieve_direct(AggState *aggstate)
 	/*
 	 * We loop retrieving groups until we find one matching
 	 * aggstate->ss.ps.qual
+	 *
+	 * For grouping sets, we have the invariant that aggstate->projected_set is
+	 * either -1 (initial call) or the index (starting from 0) in gset_lengths
+	 * for the group we just completed (either by projecting a row or by
+	 * discarding it in the qual).
 	 */
 	while (!aggstate->agg_done)
 	{
 		/*
-		 * If we don't already have the first tuple of the new group, fetch it
-		 * from the outer plan.
-		 */
-		if (aggstate->grp_firstTuple == NULL)
-		{
-			outerslot = ExecProcNode(outerPlan);
-			if (!TupIsNull(outerslot))
-			{
-				/*
-				 * Make a copy of the first input tuple; we will use this for
-				 * comparisons (in group mode) and for projection.
-				 */
-				aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
-			}
-			else
-			{
-				/* outer plan produced no tuples at all */
-				aggstate->agg_done = true;
-				/* If we are grouping, we should produce no tuples too */
-				if (node->aggstrategy != AGG_PLAIN)
-					return NULL;
-			}
-		}
-
-		/*
 		 * Clear the per-output-tuple context for each group, as well as
 		 * aggcontext (which contains any pass-by-ref transvalues of the old
-		 * group).  We also clear any child contexts of the aggcontext; some
-		 * aggregate functions store working state in such contexts.
+		 * group).  Some aggregate functions store working state in child
+		 * contexts; those now get reset automatically without us needing to
+		 * do anything special.
 		 *
 		 * We use ReScanExprContext not just ResetExprContext because we want
 		 * any registered shutdown callbacks to be called.  That allows
@@ -1195,90 +1300,222 @@ agg_retrieve_direct(AggState *aggstate)
 		 */
 		ReScanExprContext(econtext);
 
-		MemoryContextResetAndDeleteChildren(aggstate->aggcontext);
+		/*
+		 * Determine how many grouping sets need to be reset at this boundary.
+		 */
+		if (aggstate->projected_set >= 0 && aggstate->projected_set < numGroupingSets)
+			numReset = aggstate->projected_set + 1;
+		else
+			numReset = numGroupingSets;
+
+		for (i = 0; i < numReset; i++)
+		{
+			ReScanExprContext(aggstate->aggcontexts[i]);
+		}
+
+		/* Check if input is complete and there are no more groups to project. */
+		if (aggstate->input_done
+			&& aggstate->projected_set >= (numGroupingSets - 1))
+		{
+			aggstate->agg_done = true;
+			break;
+		}
 
 		/*
-		 * Initialize working state for a new input tuple group
+		 * Get the number of grouping columns in the next grouping set after
+		 * the last projected one (if any).  This is how many columns we must
+		 * compare to detect whether we've also crossed that set's boundary.
 		 */
-		initialize_aggregates(aggstate, peragg, pergroup);
+		if (aggstate->projected_set >= 0 && aggstate->projected_set < (numGroupingSets - 1))
+			nextSetSize = aggstate->gset_lengths[aggstate->projected_set + 1];
+		else
+			nextSetSize = 0;
 
-		if (aggstate->grp_firstTuple != NULL)
+		/*-
+		 * If a subgroup for the current grouping set is present, project it.
+		 *
+		 * We have a new group if:
+		 *  - we're out of input but haven't projected all grouping sets
+		 *    (checked above)
+		 * OR
+		 *    - we already projected a row that wasn't from the last grouping
+		 *      set
+		 *    AND
+		 *    - the next grouping set has at least one grouping column (since
+		 *      empty grouping sets project only once input is exhausted)
+		 *    AND
+		 *    - the previous and pending rows differ on the grouping columns
+		 *      of the next grouping set
+		 */
+		if (aggstate->input_done
+			|| (node->aggstrategy == AGG_SORTED
+				&& aggstate->projected_set != -1
+				&& aggstate->projected_set < (numGroupingSets - 1)
+				&& nextSetSize > 0
+				&& !execTuplesMatch(econtext->ecxt_outertuple,
+									tmpcontext->ecxt_outertuple,
+									nextSetSize,
+									node->grpColIdx,
+									aggstate->eqfunctions,
+									tmpcontext->ecxt_per_tuple_memory)))
+		{
+			aggstate->projected_set += 1;
+
+			Assert(aggstate->projected_set < numGroupingSets);
+			Assert(nextSetSize > 0 || aggstate->input_done);
+		}
+		else
 		{
 			/*
-			 * Store the copied first input tuple in the tuple table slot
-			 * reserved for it.  The tuple will be deleted when it is cleared
-			 * from the slot.
+			 * We no longer care what group we just projected; the next
+			 * projection will always be the first (or only) grouping set
+			 * (unless the input proves to be empty).
 			 */
-			ExecStoreTuple(aggstate->grp_firstTuple,
-						   firstSlot,
-						   InvalidBuffer,
-						   true);
-			aggstate->grp_firstTuple = NULL;	/* don't keep two pointers */
-
-			/* set up for first advance_aggregates call */
-			tmpcontext->ecxt_outertuple = firstSlot;
+			aggstate->projected_set = 0;
 
 			/*
-			 * Process each outer-plan tuple, and then fetch the next one,
-			 * until we exhaust the outer plan or cross a group boundary.
+			 * If we don't already have the first tuple of the new group, fetch
+			 * it from the outer plan.
 			 */
-			for (;;)
+			if (aggstate->grp_firstTuple == NULL)
 			{
-				advance_aggregates(aggstate, pergroup);
-
-				/* Reset per-input-tuple context after each tuple */
-				ResetExprContext(tmpcontext);
-
 				outerslot = ExecProcNode(outerPlan);
-				if (TupIsNull(outerslot))
+				if (!TupIsNull(outerslot))
 				{
-					/* no more outer-plan tuples available */
-					aggstate->agg_done = true;
-					break;
+					/*
+					 * Make a copy of the first input tuple; we will use this for
+					 * comparisons (in group mode) and for projection.
+					 */
+					aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
 				}
-				/* set up for next advance_aggregates call */
-				tmpcontext->ecxt_outertuple = outerslot;
-
-				/*
-				 * If we are grouping, check whether we've crossed a group
-				 * boundary.
-				 */
-				if (node->aggstrategy == AGG_SORTED)
+				else
 				{
-					if (!execTuplesMatch(firstSlot,
-										 outerslot,
-										 node->numCols, node->grpColIdx,
-										 aggstate->eqfunctions,
-										 tmpcontext->ecxt_per_tuple_memory))
+					/* outer plan produced no tuples at all */
+					if (hasGroupingSets)
 					{
 						/*
-						 * Save the first input tuple of the next group.
+						 * If there was no input at all, we need to project
+						 * rows only if there are grouping sets of size 0.
+						 * Note that this implies that there can't be any
+						 * references to ungrouped Vars, which would otherwise
+						 * cause issues with the empty output slot.
 						 */
-						aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
-						break;
+						aggstate->input_done = true;
+
+						while (aggstate->gset_lengths[aggstate->projected_set] > 0)
+						{
+							aggstate->projected_set += 1;
+							if (aggstate->projected_set >= numGroupingSets)
+							{
+								aggstate->agg_done = true;
+								return NULL;
+							}
+						}
+					}
+					else
+					{
+						aggstate->agg_done = true;
+						/* If we are grouping, we should produce no tuples too */
+						if (node->aggstrategy != AGG_PLAIN)
+							return NULL;
+					}
+				}
+			}
+
+			/*
+			 * Initialize working state for a new input tuple group.
+			 */
+			initialize_aggregates(aggstate, peragg, pergroup, numReset);
+
+			if (aggstate->grp_firstTuple != NULL)
+			{
+				/*
+				 * Store the copied first input tuple in the tuple table slot
+				 * reserved for it.  The tuple will be deleted when it is cleared
+				 * from the slot.
+				 */
+				ExecStoreTuple(aggstate->grp_firstTuple,
+							   firstSlot,
+							   InvalidBuffer,
+							   true);
+				aggstate->grp_firstTuple = NULL;	/* don't keep two pointers */
+
+				/* set up for first advance_aggregates call */
+				tmpcontext->ecxt_outertuple = firstSlot;
+
+				/*
+				 * Process each outer-plan tuple, and then fetch the next one,
+				 * until we exhaust the outer plan or cross a group boundary.
+				 */
+				for (;;)
+				{
+					advance_aggregates(aggstate, pergroup);
+
+					/* Reset per-input-tuple context after each tuple */
+					ResetExprContext(tmpcontext);
+
+					outerslot = ExecProcNode(outerPlan);
+					if (TupIsNull(outerslot))
+					{
+						/* no more outer-plan tuples available */
+						if (hasGroupingSets)
+						{
+							aggstate->input_done = true;
+							break;
+						}
+						else
+						{
+							aggstate->agg_done = true;
+							break;
+						}
+					}
+					/* set up for next advance_aggregates call */
+					tmpcontext->ecxt_outertuple = outerslot;
+
+					/*
+					 * If we are grouping, check whether we've crossed a group
+					 * boundary.
+					 */
+					if (node->aggstrategy == AGG_SORTED)
+					{
+						if (!execTuplesMatch(firstSlot,
+											 outerslot,
+											 node->numCols,
+											 node->grpColIdx,
+											 aggstate->eqfunctions,
+											 tmpcontext->ecxt_per_tuple_memory))
+						{
+							aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
+							break;
+						}
 					}
 				}
 			}
+
+			/*
+			 * Use the representative input tuple for any references to
+			 * non-aggregated input columns in aggregate direct args, the node
+			 * qual, and the tlist.  (If we are not grouping, and there are no
+			 * input rows at all, we will come here with an empty firstSlot ...
+			 * but if not grouping, there can't be any references to
+			 * non-aggregated input columns, so no problem.)
+			 */
+			econtext->ecxt_outertuple = firstSlot;
 		}
 
-		/*
-		 * Use the representative input tuple for any references to
-		 * non-aggregated input columns in aggregate direct args, the node
-		 * qual, and the tlist.  (If we are not grouping, and there are no
-		 * input rows at all, we will come here with an empty firstSlot ...
-		 * but if not grouping, there can't be any references to
-		 * non-aggregated input columns, so no problem.)
-		 */
-		econtext->ecxt_outertuple = firstSlot;
+		Assert(aggstate->projected_set >= 0);
+
+		aggstate->current_set = currentSet = aggstate->projected_set;
+
+		if (hasGroupingSets)
+			econtext->grouped_cols = aggstate->grouped_cols[currentSet];
 
-		/*
-		 * Done scanning input tuple group. Finalize each aggregate
-		 * calculation, and stash results in the per-output-tuple context.
-		 */
 		for (aggno = 0; aggno < aggstate->numaggs; aggno++)
 		{
 			AggStatePerAgg peraggstate = &peragg[aggno];
-			AggStatePerGroup pergroupstate = &pergroup[aggno];
+			AggStatePerGroup pergroupstate;
+
+			pergroupstate = &pergroup[aggno + (currentSet * (aggstate->numaggs))];
 
 			if (peraggstate->numSortCols > 0)
 			{
@@ -1326,6 +1563,174 @@ agg_retrieve_direct(AggState *aggstate)
 	return NULL;
 }
 
+
+/*
+ * ExecAgg for chained case (pullthrough mode)
+ */
+static TupleTableSlot *
+agg_retrieve_chained(AggState *aggstate)
+{
+	Agg		   *node = (Agg *) aggstate->ss.ps.plan;
+	ExprContext *econtext = aggstate->ss.ps.ps_ExprContext;
+	ExprContext *tmpcontext = aggstate->tmpcontext;
+	Datum	   *aggvalues = econtext->ecxt_aggvalues;
+	bool	   *aggnulls = econtext->ecxt_aggnulls;
+	AggStatePerAgg peragg = aggstate->peragg;
+	AggStatePerGroup pergroup = aggstate->pergroup;
+	TupleTableSlot *outerslot;
+	TupleTableSlot *firstSlot = aggstate->ss.ss_ScanTupleSlot;
+	int			aggno;
+	int			numGroupingSets = Max(aggstate->numsets, 1);
+	int			currentSet = 0;
+
+	/*
+	 * The invariants here are:
+	 *
+	 *  - when called, we've already projected every result that might have
+	 *    been generated by previous rows, and if this is not the first row,
+	 *    then firstSlot holds the representative input row.
+	 *
+	 *  - we must pull the outer plan exactly once and return that tuple.  If
+	 *    the outer plan ends, we project whatever needs projecting.
+	 */
+
+	outerslot = ExecProcNode(outerPlanState(aggstate));
+
+	/*
+	 * If this is the first call and the input is empty, there's nothing to do.
+	 */
+
+	if (TupIsNull(firstSlot) && TupIsNull(outerslot))
+	{
+		aggstate->agg_done = true;
+		return outerslot;
+	}
+
+	/*
+	 * See if we need to project anything. (We don't need to worry about
+	 * grouping sets of size 0; the planner doesn't give us those.)
+	 */
+
+	econtext->ecxt_outertuple = firstSlot;
+
+	while (!TupIsNull(firstSlot)
+		   && (TupIsNull(outerslot)
+			   || !execTuplesMatch(firstSlot,
+								   outerslot,
+								   aggstate->gset_lengths[currentSet],
+								   node->grpColIdx,
+								   aggstate->eqfunctions,
+								   tmpcontext->ecxt_per_tuple_memory)))
+	{
+		aggstate->current_set = aggstate->projected_set = currentSet;
+
+		econtext->grouped_cols = aggstate->grouped_cols[currentSet];
+
+		for (aggno = 0; aggno < aggstate->numaggs; aggno++)
+		{
+			AggStatePerAgg peraggstate = &peragg[aggno];
+			AggStatePerGroup pergroupstate;
+
+			pergroupstate = &pergroup[aggno + (currentSet * (aggstate->numaggs))];
+
+			if (peraggstate->numSortCols > 0)
+			{
+				if (peraggstate->numInputs == 1)
+					process_ordered_aggregate_single(aggstate,
+													 peraggstate,
+													 pergroupstate);
+				else
+					process_ordered_aggregate_multi(aggstate,
+													peraggstate,
+													pergroupstate);
+			}
+
+			finalize_aggregate(aggstate, peraggstate, pergroupstate,
+							   &aggvalues[aggno], &aggnulls[aggno]);
+		}
+
+		/*
+		 * Check the qual (HAVING clause); if the group does not match, ignore
+		 * it.
+		 */
+		if (ExecQual(aggstate->ss.ps.qual, econtext, false))
+		{
+			/*
+			 * Form a projection tuple using the aggregate results
+			 * and the representative input tuple.
+			 */
+			TupleTableSlot *result;
+			ExprDoneCond isDone;
+
+			do
+			{
+				result = ExecProject(aggstate->ss.ps.ps_ProjInfo, &isDone);
+
+				if (isDone != ExprEndResult)
+				{
+					tuplestore_puttupleslot(aggstate->chain_tuplestore,
+											result);
+				}
+			}
+			while (isDone == ExprMultipleResult);
+		}
+		else
+			InstrCountFiltered1(aggstate, 1);
+
+		ReScanExprContext(tmpcontext);
+		ReScanExprContext(econtext);
+		ReScanExprContext(aggstate->aggcontexts[currentSet]);
+		if (++currentSet >= numGroupingSets)
+			break;
+	}
+
+	if (TupIsNull(outerslot))
+	{
+		aggstate->agg_done = true;
+
+		/*
+		 * We're out of input, so the calling node has all the data it needs
+		 * and (if it's a Sort) is about to sort it. We preemptively request a
+		 * rescan of our input plan here, so that Sort nodes containing data
+		 * that is no longer needed will free their memory.  The intention here
+		 * is to bound the peak memory requirement for the whole chain to
+		 * 2*work_mem if REWIND was not requested, or 3*work_mem if REWIND was
+		 * requested and we had to supply a Sort node for the original data
+		 * source plan.
+		 */
+
+		ExecReScan(outerPlanState(aggstate));
+
+		return NULL;
+	}
+
+	/*
+	 * If this is the first tuple, store it and initialize everything.
+	 * Otherwise re-init any aggregates we projected above.
+	 */
+
+	if (TupIsNull(firstSlot))
+	{
+		ExecCopySlot(firstSlot, outerslot);
+		initialize_aggregates(aggstate, peragg, pergroup, numGroupingSets);
+	}
+	else if (currentSet > 0)
+	{
+		ExecCopySlot(firstSlot, outerslot);
+		initialize_aggregates(aggstate, peragg, pergroup, currentSet);
+	}
+
+	tmpcontext->ecxt_outertuple = outerslot;
+
+	/* Actually accumulate the current tuple. */
+	advance_aggregates(aggstate, pergroup);
+
+	/* Reset per-input-tuple context after each tuple */
+	ResetExprContext(tmpcontext);
+
+	return outerslot;
+}
+
 /*
  * ExecAgg for hashed case: phase 1, read input and build hash table
  */
@@ -1493,12 +1898,17 @@ AggState *
 ExecInitAgg(Agg *node, EState *estate, int eflags)
 {
 	AggState   *aggstate;
+	AggState   *save_chain_head = NULL;
 	AggStatePerAgg peragg;
 	Plan	   *outerPlan;
 	ExprContext *econtext;
 	int			numaggs,
 				aggno;
 	ListCell   *l;
+	int			numGroupingSets = 1;
+	int			currentsortno = 0;
+	int			i = 0;
+	int			j = 0;
 
 	/* check for unsupported flags */
 	Assert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK)));
@@ -1512,40 +1922,78 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 
 	aggstate->aggs = NIL;
 	aggstate->numaggs = 0;
+	aggstate->numsets = 0;
 	aggstate->eqfunctions = NULL;
 	aggstate->hashfunctions = NULL;
+	aggstate->projected_set = -1;
+	aggstate->current_set = 0;
 	aggstate->peragg = NULL;
 	aggstate->curperagg = NULL;
 	aggstate->agg_done = false;
+	aggstate->input_done = false;
+	aggstate->chain_done = true;
 	aggstate->pergroup = NULL;
 	aggstate->grp_firstTuple = NULL;
 	aggstate->hashtable = NULL;
+	aggstate->chain_depth = 0;
+	aggstate->chain_rescan = 0;
+	aggstate->chain_eflags = eflags & EXEC_FLAG_REWIND;
+	aggstate->chain_top = false;
+	aggstate->chain_head = NULL;
+	aggstate->chain_tuplestore = NULL;
+
+	if (node->groupingSets)
+	{
+		Assert(node->aggstrategy != AGG_HASHED);
+
+		numGroupingSets = list_length(node->groupingSets);
+		aggstate->numsets = numGroupingSets;
+		aggstate->gset_lengths = palloc(numGroupingSets * sizeof(int));
+		aggstate->grouped_cols = palloc(numGroupingSets * sizeof(Bitmapset *));
+
+		i = 0;
+		foreach(l, node->groupingSets)
+		{
+			int current_length = list_length(lfirst(l));
+			Bitmapset *cols = NULL;
+
+			/* planner forces this to be correct */
+			for (j = 0; j < current_length; ++j)
+				cols = bms_add_member(cols, node->grpColIdx[j]);
+
+			aggstate->grouped_cols[i] = cols;
+			aggstate->gset_lengths[i] = current_length;
+			++i;
+		}
+	}
+
+	aggstate->aggcontexts = (ExprContext **) palloc0(sizeof(ExprContext *) * numGroupingSets);
 
 	/*
-	 * Create expression contexts.  We need two, one for per-input-tuple
-	 * processing and one for per-output-tuple processing.  We cheat a little
-	 * by using ExecAssignExprContext() to build both.
+	 * Create expression contexts.  We need three or more: one for
+	 * per-input-tuple processing, one for per-output-tuple processing, and
+	 * one for each grouping set.  The per-tuple memory context of the
+	 * per-grouping-set ExprContexts (aggcontexts) replaces the standalone
+	 * memory context formerly used to hold transition values.  We cheat a
+	 * little by using ExecAssignExprContext() to build all of them.
+	 *
+	 * NOTE: the details of what is stored in aggcontexts and what is stored in
+	 * the regular per-query memory context are driven by a simple decision: we
+	 * want to reset the aggcontext at group boundaries (if not hashing) and in
+	 * ExecReScanAgg to recover no-longer-wanted space.
 	 */
 	ExecAssignExprContext(estate, &aggstate->ss.ps);
 	aggstate->tmpcontext = aggstate->ss.ps.ps_ExprContext;
+
+	for (i = 0; i < numGroupingSets; ++i)
+	{
+		ExecAssignExprContext(estate, &aggstate->ss.ps);
+		aggstate->aggcontexts[i] = aggstate->ss.ps.ps_ExprContext;
+	}
+
 	ExecAssignExprContext(estate, &aggstate->ss.ps);
 
 	/*
-	 * We also need a long-lived memory context for holding hashtable data
-	 * structures and transition values.  NOTE: the details of what is stored
-	 * in aggcontext and what is stored in the regular per-query memory
-	 * context are driven by a simple decision: we want to reset the
-	 * aggcontext at group boundaries (if not hashing) and in ExecReScanAgg to
-	 * recover no-longer-wanted space.
-	 */
-	aggstate->aggcontext =
-		AllocSetContextCreate(CurrentMemoryContext,
-							  "AggContext",
-							  ALLOCSET_DEFAULT_MINSIZE,
-							  ALLOCSET_DEFAULT_INITSIZE,
-							  ALLOCSET_DEFAULT_MAXSIZE);
-
-	/*
 	 * tuple table initialization
 	 */
 	ExecInitScanTupleSlot(estate, &aggstate->ss);
@@ -1561,24 +2009,78 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	 * that is true, we don't need to worry about evaluating the aggs in any
 	 * particular order.
 	 */
-	aggstate->ss.ps.targetlist = (List *)
-		ExecInitExpr((Expr *) node->plan.targetlist,
-					 (PlanState *) aggstate);
-	aggstate->ss.ps.qual = (List *)
-		ExecInitExpr((Expr *) node->plan.qual,
-					 (PlanState *) aggstate);
+	if (node->aggstrategy == AGG_CHAINED)
+	{
+		AggState   *chain_head = estate->agg_chain_head;
+		Agg		   *chain_head_plan;
+
+		Assert(chain_head);
+
+		aggstate->chain_head = chain_head;
+		chain_head->chain_depth++;
+
+		chain_head_plan = (Agg *) chain_head->ss.ps.plan;
+
+		/*
+		 * If we reached the originally declared depth, we must be the "top"
+		 * (furthest from plan root) node in the chain.
+		 */
+		if (chain_head_plan->chain_depth == chain_head->chain_depth)
+			aggstate->chain_top = true;
+
+		/*
+		 * Snarf the real targetlist and qual from the chain head node
+		 */
+		aggstate->ss.ps.targetlist = (List *)
+			ExecInitExpr((Expr *) chain_head_plan->plan.targetlist,
+						 (PlanState *) aggstate);
+		aggstate->ss.ps.qual = (List *)
+			ExecInitExpr((Expr *) chain_head_plan->plan.qual,
+						 (PlanState *) aggstate);
+	}
+	else
+	{
+		aggstate->ss.ps.targetlist = (List *)
+			ExecInitExpr((Expr *) node->plan.targetlist,
+						 (PlanState *) aggstate);
+		aggstate->ss.ps.qual = (List *)
+			ExecInitExpr((Expr *) node->plan.qual,
+						 (PlanState *) aggstate);
+	}
+
+	if (node->chain_depth > 0)
+	{
+		save_chain_head = estate->agg_chain_head;
+		estate->agg_chain_head = aggstate;
+		aggstate->chain_tuplestore = tuplestore_begin_heap(false, false, work_mem);
+		aggstate->chain_done = false;
+	}
 
 	/*
-	 * initialize child nodes
+	 * Initialize child nodes.
 	 *
 	 * If we are doing a hashed aggregation then the child plan does not need
 	 * to handle REWIND efficiently; see ExecReScanAgg.
+	 *
+	 * If we have more than one associated ChainAggregate node, we turn off
+	 * REWIND here and restore it at the chain top, so that the intermediate
+	 * Sort nodes discard their data on rescan.  This puts an upper bound on
+	 * memory usage even with a long chain of sorts, at the cost of having
+	 * to re-sort on rewind; that cost is why we don't bother when there is
+	 * only one node, where no memory would be saved.
 	 */
-	if (node->aggstrategy == AGG_HASHED)
+	if (aggstate->chain_top)
+		eflags |= aggstate->chain_head->chain_eflags;
+	else if (node->aggstrategy == AGG_HASHED || node->chain_depth > 1)
 		eflags &= ~EXEC_FLAG_REWIND;
 	outerPlan = outerPlan(node);
 	outerPlanState(aggstate) = ExecInitNode(outerPlan, estate, eflags);
 
+	if (node->chain_depth > 0)
+	{
+		estate->agg_chain_head = save_chain_head;
+	}
+
 	/*
 	 * initialize source tuple type.
 	 */
@@ -1587,8 +2089,35 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	/*
 	 * Initialize result tuple type and projection info.
 	 */
-	ExecAssignResultTypeFromTL(&aggstate->ss.ps);
-	ExecAssignProjectionInfo(&aggstate->ss.ps, NULL);
+	if (node->aggstrategy == AGG_CHAINED)
+	{
+		PlanState  *head_ps = &aggstate->chain_head->ss.ps;
+		bool		hasoid;
+
+		/*
+		 * We must calculate this the same way that the chain head does,
+		 * regardless of intermediate nodes, for consistency
+		 * regardless of intermediate nodes, for consistency.
+		if (!ExecContextForcesOids(head_ps, &hasoid))
+			hasoid = false;
+
+		ExecAssignResultType(&aggstate->ss.ps, ExecGetScanType(&aggstate->ss));
+		ExecSetSlotDescriptor(aggstate->hashslot,
+							  ExecTypeFromTL(head_ps->plan->targetlist, hasoid));
+		aggstate->ss.ps.ps_ProjInfo =
+			ExecBuildProjectionInfo(aggstate->ss.ps.targetlist,
+									aggstate->ss.ps.ps_ExprContext,
+									aggstate->hashslot,
+									NULL);
+
+		aggstate->chain_tuplestore = aggstate->chain_head->chain_tuplestore;
+		Assert(aggstate->chain_tuplestore);
+	}
+	else
+	{
+		ExecAssignResultTypeFromTL(&aggstate->ss.ps);
+		ExecAssignProjectionInfo(&aggstate->ss.ps, NULL);
+	}
 
 	aggstate->ss.ps.ps_TupFromTlist = false;
 
@@ -1649,7 +2178,10 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	{
 		AggStatePerGroup pergroup;
 
-		pergroup = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData) * numaggs);
+		pergroup = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData)
+											  * numaggs
+											  * numGroupingSets);
+
 		aggstate->pergroup = pergroup;
 	}
 
@@ -1712,7 +2244,10 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 		/* Begin filling in the peraggstate data */
 		peraggstate->aggrefstate = aggrefstate;
 		peraggstate->aggref = aggref;
-		peraggstate->sortstate = NULL;
+		peraggstate->sortstates = (Tuplesortstate **) palloc0(sizeof(Tuplesortstate *) * numGroupingSets);
+
+		for (currentsortno = 0; currentsortno < numGroupingSets; currentsortno++)
+			peraggstate->sortstates[currentsortno] = NULL;
 
 		/* Fetch the pg_aggregate row */
 		aggTuple = SearchSysCache1(AGGFNOID,
@@ -2020,31 +2555,38 @@ ExecEndAgg(AggState *node)
 {
 	PlanState  *outerPlan;
 	int			aggno;
+	int			numGroupingSets = Max(node->numsets, 1);
+	int			setno;
 
 	/* Make sure we have closed any open tuplesorts */
 	for (aggno = 0; aggno < node->numaggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &node->peragg[aggno];
 
-		if (peraggstate->sortstate)
-			tuplesort_end(peraggstate->sortstate);
+		for (setno = 0; setno < numGroupingSets; setno++)
+		{
+			if (peraggstate->sortstates[setno])
+				tuplesort_end(peraggstate->sortstates[setno]);
+		}
 	}
 
 	/* And ensure any agg shutdown callbacks have been called */
-	ReScanExprContext(node->ss.ps.ps_ExprContext);
+	for (setno = 0; setno < numGroupingSets; setno++)
+		ReScanExprContext(node->aggcontexts[setno]);
+
+	if (node->chain_tuplestore && node->chain_depth > 0)
+		tuplestore_end(node->chain_tuplestore);
 
 	/*
-	 * Free both the expr contexts.
+	 * We don't actually free any ExprContexts here (see comment in
+	 * ExecFreeExprContext); just unlinking the output one from the plan node
+	 * suffices.
 	 */
 	ExecFreeExprContext(&node->ss.ps);
-	node->ss.ps.ps_ExprContext = node->tmpcontext;
-	ExecFreeExprContext(&node->ss.ps);
 
 	/* clean up tuple table */
 	ExecClearTuple(node->ss.ss_ScanTupleSlot);
 
-	MemoryContextDelete(node->aggcontext);
-
 	outerPlan = outerPlanState(node);
 	ExecEndNode(outerPlan);
 }
@@ -2053,13 +2595,16 @@ void
 ExecReScanAgg(AggState *node)
 {
 	ExprContext *econtext = node->ss.ps.ps_ExprContext;
+	Agg		   *aggnode = (Agg *) node->ss.ps.plan;
 	int			aggno;
+	int			numGroupingSets = Max(node->numsets, 1);
+	int			setno;
 
 	node->agg_done = false;
 
 	node->ss.ps.ps_TupFromTlist = false;
 
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+	if (aggnode->aggstrategy == AGG_HASHED)
 	{
 		/*
 		 * In the hashed case, if we haven't yet built the hash table then we
@@ -2085,14 +2630,34 @@ ExecReScanAgg(AggState *node)
 	/* Make sure we have closed any open tuplesorts */
 	for (aggno = 0; aggno < node->numaggs; aggno++)
 	{
-		AggStatePerAgg peraggstate = &node->peragg[aggno];
+		for (setno = 0; setno < numGroupingSets; setno++)
+		{
+			AggStatePerAgg peraggstate = &node->peragg[aggno];
 
-		if (peraggstate->sortstate)
-			tuplesort_end(peraggstate->sortstate);
-		peraggstate->sortstate = NULL;
+			if (peraggstate->sortstates[setno])
+			{
+				tuplesort_end(peraggstate->sortstates[setno]);
+				peraggstate->sortstates[setno] = NULL;
+			}
+		}
 	}
 
-	/* We don't need to ReScanExprContext here; ExecReScan already did it */
+	/*
+	 * We don't need to call ReScanExprContext on the output context here;
+	 * ExecReScan already did it. But we do need to reset our per-grouping-set
+	 * contexts, which may have transvalues stored in them. (We use rescan
+	 * rather than just reset because transfns may have registered callbacks
+	 * that need to be run now.)
+	 *
+	 * Note that with AGG_HASHED, the hash table is allocated in a sub-context
+	 * of the aggcontext. This used to be an issue, but now, resetting a
+	 * context automatically deletes sub-contexts too.
+	 */
+
+	for (setno = 0; setno < numGroupingSets; setno++)
+	{
+		ReScanExprContext(node->aggcontexts[setno]);
+	}
 
 	/* Release first tuple of group, if we have made a copy */
 	if (node->grp_firstTuple != NULL)
@@ -2100,21 +2665,13 @@ ExecReScanAgg(AggState *node)
 		heap_freetuple(node->grp_firstTuple);
 		node->grp_firstTuple = NULL;
 	}
+	ExecClearTuple(node->ss.ss_ScanTupleSlot);
 
 	/* Forget current agg values */
 	MemSet(econtext->ecxt_aggvalues, 0, sizeof(Datum) * node->numaggs);
 	MemSet(econtext->ecxt_aggnulls, 0, sizeof(bool) * node->numaggs);
 
-	/*
-	 * Release all temp storage. Note that with AGG_HASHED, the hash table is
-	 * allocated in a sub-context of the aggcontext. We're going to rebuild
-	 * the hash table from scratch, so we need to use
-	 * MemoryContextResetAndDeleteChildren() to avoid leaking the old hash
-	 * table's memory context header.
-	 */
-	MemoryContextResetAndDeleteChildren(node->aggcontext);
-
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+	if (aggnode->aggstrategy == AGG_HASHED)
 	{
 		/* Rebuild an empty hash table */
 		build_hash_table(node);
@@ -2126,15 +2683,54 @@ ExecReScanAgg(AggState *node)
 		 * Reset the per-group state (in particular, mark transvalues null)
 		 */
 		MemSet(node->pergroup, 0,
-			   sizeof(AggStatePerGroupData) * node->numaggs);
+			   sizeof(AggStatePerGroupData) * node->numaggs * numGroupingSets);
+
+		node->input_done = false;
 	}
 
 	/*
-	 * if chgParam of subnode is not null then plan will be re-scanned by
-	 * first ExecProcNode.
+	 * If we're in a chain, let the chain head know that we rescanned.
+	 * (This count is meaningless when the rescan happens as a result of
+	 * chgParam, but the chain head only consults it when rescanning
+	 * explicitly with chgParam empty.)
+	 */
+
+	if (aggnode->aggstrategy == AGG_CHAINED)
+		node->chain_head->chain_rescan++;
+
+	/*
+	 * If we're a chain head, we reset the tuplestore if parameters changed,
+	 * and let subplans repopulate it.
+	 *
+	 * If we're a chain head and the subplan parameters did NOT change, then
+	 * whether we need to reset the tuplestore depends on whether anything
+	 * (specifically the Sort nodes) protects the child ChainAggs from rescan.
+	 * Since this is hard to know in advance, we have the ChainAggs signal us
+	 * as to whether the reset is needed.  Since we're preempting the rescan
+	 * whether the reset is needed.  Since we're preempting the rescan
+	 * the rescan; the others may have already been reset.
 	 */
-	if (node->ss.ps.lefttree->chgParam == NULL)
+	if (aggnode->chain_depth > 0)
+	{
+		if (node->ss.ps.lefttree->chgParam)
+			tuplestore_clear(node->chain_tuplestore);
+		else
+		{
+			node->chain_rescan = 0;
+
+			ExecReScan(node->ss.ps.lefttree);
+
+			if (node->chain_rescan > 0)
+				tuplestore_clear(node->chain_tuplestore);
+			else
+				tuplestore_rescan(node->chain_tuplestore);
+		}
+		node->chain_done = false;
+	}
+	else if (node->ss.ps.lefttree->chgParam == NULL)
+	{
 		ExecReScan(node->ss.ps.lefttree);
+	}
 }
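The tuplestore-reset policy in the chain-head branch of ExecReScanAgg can be condensed into a small decision function. This is a standalone sketch with hypothetical names (`TsAction` and `chain_head_rescan_action` do not exist in the patch); it only encodes the branch structure shown above:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical encoding of the chain head's rescan decision. */
typedef enum { TS_CLEAR, TS_REWIND } TsAction;

static TsAction
chain_head_rescan_action(bool subplan_params_changed, int chain_rescan_count)
{
	/* Changed parameters: clear the tuplestore; subplans repopulate it. */
	if (subplan_params_changed)
		return TS_CLEAR;

	/*
	 * Otherwise the child ChainAggs reported (via chain_rescan) whether any
	 * of them was actually reached by the explicit rescan; if so, the stored
	 * output is stale and must be rebuilt, else we can simply rewind it.
	 */
	return (chain_rescan_count > 0) ? TS_CLEAR : TS_REWIND;
}
```

A sketch under stated assumptions, not the patch's code; the real logic also resets chain_done and issues the ExecReScan call itself.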
 
 
@@ -2154,8 +2750,11 @@ ExecReScanAgg(AggState *node)
  * values could conceivably appear in future.)
  *
  * If aggcontext isn't NULL, the function also stores at *aggcontext the
- * identity of the memory context that aggregate transition values are
- * being stored in.
+ * identity of the memory context that aggregate transition values are being
+ * stored in.  Note that the same aggregate call site (flinfo) may be called
+ * interleaved on different transition values in different contexts, so it's
+ * not kosher to cache aggcontext under fn_extra.  It is, however, kosher to
+ * cache it in the transvalue itself (for internal-type transvalues).
  */
 int
 AggCheckCallContext(FunctionCallInfo fcinfo, MemoryContext *aggcontext)
@@ -2163,7 +2762,11 @@ AggCheckCallContext(FunctionCallInfo fcinfo, MemoryContext *aggcontext)
 	if (fcinfo->context && IsA(fcinfo->context, AggState))
 	{
 		if (aggcontext)
-			*aggcontext = ((AggState *) fcinfo->context)->aggcontext;
+		{
+			AggState   *aggstate = (AggState *) fcinfo->context;
+			ExprContext *cxt = aggstate->aggcontexts[aggstate->current_set];
+			*aggcontext = cxt->ecxt_per_tuple_memory;
+		}
 		return AGG_CONTEXT_AGGREGATE;
 	}
 	if (fcinfo->context && IsA(fcinfo->context, WindowAggState))
@@ -2247,8 +2850,9 @@ AggRegisterCallback(FunctionCallInfo fcinfo,
 	if (fcinfo->context && IsA(fcinfo->context, AggState))
 	{
 		AggState   *aggstate = (AggState *) fcinfo->context;
+		ExprContext *cxt = aggstate->aggcontexts[aggstate->current_set];
 
-		RegisterExprContextCallback(aggstate->ss.ps.ps_ExprContext, func, arg);
+		RegisterExprContextCallback(cxt, func, arg);
 
 		return;
 	}
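ExecInitAgg above sizes the pergroup array as numaggs × numGroupingSets entries, and AggCheckCallContext/AggRegisterCallback now select a context via aggstate->current_set. A standalone sketch of the flat two-dimensional layout this implies; the `pergroup_entry` accessor and `PerGroupData` stand-in are hypothetical (the patch's actual indexing code is not shown in this excerpt):

```c
#include <assert.h>
#include <string.h>

/* Simplified stand-in for AggStatePerGroupData. */
typedef struct PerGroupData
{
	long		transValue;
	_Bool		noTransValue;	/* no input seen yet for this (set, agg) */
} PerGroupData;

/*
 * Hypothetical accessor: one flat, zeroed array holds numaggs transition
 * states per grouping set, so each grouping set accumulates independently.
 */
static PerGroupData *
pergroup_entry(PerGroupData *pergroup, int numaggs, int setno, int aggno)
{
	return &pergroup[setno * numaggs + aggno];
}
```

Advancing the state for one (set, agg) pair leaves every other slot untouched, which is what lets a single input pass feed several grouping sets at once.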
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 9fe8008..5e95fc08 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -804,6 +804,7 @@ _copyAgg(const Agg *from)
 	CopyPlanFields((const Plan *) from, (Plan *) newnode);
 
 	COPY_SCALAR_FIELD(aggstrategy);
+	COPY_SCALAR_FIELD(chain_depth);
 	COPY_SCALAR_FIELD(numCols);
 	if (from->numCols > 0)
 	{
@@ -811,6 +812,7 @@ _copyAgg(const Agg *from)
 		COPY_POINTER_FIELD(grpOperators, from->numCols * sizeof(Oid));
 	}
 	COPY_SCALAR_FIELD(numGroups);
+	COPY_NODE_FIELD(groupingSets);
 
 	return newnode;
 }
@@ -1097,6 +1099,27 @@ _copyVar(const Var *from)
 }
 
 /*
+ * _copyGroupedVar
+ */
+static GroupedVar *
+_copyGroupedVar(const GroupedVar *from)
+{
+	GroupedVar		   *newnode = makeNode(GroupedVar);
+
+	COPY_SCALAR_FIELD(varno);
+	COPY_SCALAR_FIELD(varattno);
+	COPY_SCALAR_FIELD(vartype);
+	COPY_SCALAR_FIELD(vartypmod);
+	COPY_SCALAR_FIELD(varcollid);
+	COPY_SCALAR_FIELD(varlevelsup);
+	COPY_SCALAR_FIELD(varnoold);
+	COPY_SCALAR_FIELD(varoattno);
+	COPY_LOCATION_FIELD(location);
+
+	return newnode;
+}
+
+/*
  * _copyConst
  */
 static Const *
@@ -1179,6 +1202,23 @@ _copyAggref(const Aggref *from)
 }
 
 /*
+ * _copyGroupingFunc
+ */
+static GroupingFunc *
+_copyGroupingFunc(const GroupingFunc *from)
+{
+	GroupingFunc	   *newnode = makeNode(GroupingFunc);
+
+	COPY_NODE_FIELD(args);
+	COPY_NODE_FIELD(refs);
+	COPY_NODE_FIELD(cols);
+	COPY_SCALAR_FIELD(agglevelsup);
+	COPY_LOCATION_FIELD(location);
+
+	return newnode;
+}
+
+/*
  * _copyWindowFunc
  */
 static WindowFunc *
@@ -2080,6 +2120,18 @@ _copySortGroupClause(const SortGroupClause *from)
 	return newnode;
 }
 
+static GroupingSet *
+_copyGroupingSet(const GroupingSet *from)
+{
+	GroupingSet		   *newnode = makeNode(GroupingSet);
+
+	COPY_SCALAR_FIELD(kind);
+	COPY_NODE_FIELD(content);
+	COPY_LOCATION_FIELD(location);
+
+	return newnode;
+}
+
 static WindowClause *
 _copyWindowClause(const WindowClause *from)
 {
@@ -2530,6 +2582,7 @@ _copyQuery(const Query *from)
 	COPY_NODE_FIELD(withCheckOptions);
 	COPY_NODE_FIELD(returningList);
 	COPY_NODE_FIELD(groupClause);
+	COPY_NODE_FIELD(groupingSets);
 	COPY_NODE_FIELD(havingQual);
 	COPY_NODE_FIELD(windowClause);
 	COPY_NODE_FIELD(distinctClause);
@@ -4147,6 +4200,9 @@ copyObject(const void *from)
 		case T_Var:
 			retval = _copyVar(from);
 			break;
+		case T_GroupedVar:
+			retval = _copyGroupedVar(from);
+			break;
 		case T_Const:
 			retval = _copyConst(from);
 			break;
@@ -4156,6 +4212,9 @@ copyObject(const void *from)
 		case T_Aggref:
 			retval = _copyAggref(from);
 			break;
+		case T_GroupingFunc:
+			retval = _copyGroupingFunc(from);
+			break;
 		case T_WindowFunc:
 			retval = _copyWindowFunc(from);
 			break;
@@ -4716,6 +4775,9 @@ copyObject(const void *from)
 		case T_SortGroupClause:
 			retval = _copySortGroupClause(from);
 			break;
+		case T_GroupingSet:
+			retval = _copyGroupingSet(from);
+			break;
 		case T_WindowClause:
 			retval = _copyWindowClause(from);
 			break;
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index fe509b0..5938ebc 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -153,6 +153,22 @@ _equalVar(const Var *a, const Var *b)
 }
 
 static bool
+_equalGroupedVar(const GroupedVar *a, const GroupedVar *b)
+{
+	COMPARE_SCALAR_FIELD(varno);
+	COMPARE_SCALAR_FIELD(varattno);
+	COMPARE_SCALAR_FIELD(vartype);
+	COMPARE_SCALAR_FIELD(vartypmod);
+	COMPARE_SCALAR_FIELD(varcollid);
+	COMPARE_SCALAR_FIELD(varlevelsup);
+	COMPARE_SCALAR_FIELD(varnoold);
+	COMPARE_SCALAR_FIELD(varoattno);
+	COMPARE_LOCATION_FIELD(location);
+
+	return true;
+}
+
+static bool
 _equalConst(const Const *a, const Const *b)
 {
 	COMPARE_SCALAR_FIELD(consttype);
@@ -208,6 +224,21 @@ _equalAggref(const Aggref *a, const Aggref *b)
 }
 
 static bool
+_equalGroupingFunc(const GroupingFunc *a, const GroupingFunc *b)
+{
+	COMPARE_NODE_FIELD(args);
+
+	/*
+	 * We must not compare the refs or cols fields.
+	 */
+
+	COMPARE_SCALAR_FIELD(agglevelsup);
+	COMPARE_LOCATION_FIELD(location);
+
+	return true;
+}
+
+static bool
 _equalWindowFunc(const WindowFunc *a, const WindowFunc *b)
 {
 	COMPARE_SCALAR_FIELD(winfnoid);
@@ -867,6 +898,7 @@ _equalQuery(const Query *a, const Query *b)
 	COMPARE_NODE_FIELD(withCheckOptions);
 	COMPARE_NODE_FIELD(returningList);
 	COMPARE_NODE_FIELD(groupClause);
+	COMPARE_NODE_FIELD(groupingSets);
 	COMPARE_NODE_FIELD(havingQual);
 	COMPARE_NODE_FIELD(windowClause);
 	COMPARE_NODE_FIELD(distinctClause);
@@ -2391,6 +2423,16 @@ _equalSortGroupClause(const SortGroupClause *a, const SortGroupClause *b)
 }
 
 static bool
+_equalGroupingSet(const GroupingSet *a, const GroupingSet *b)
+{
+	COMPARE_SCALAR_FIELD(kind);
+	COMPARE_NODE_FIELD(content);
+	COMPARE_LOCATION_FIELD(location);
+
+	return true;
+}
+
+static bool
 _equalWindowClause(const WindowClause *a, const WindowClause *b)
 {
 	COMPARE_STRING_FIELD(name);
@@ -2585,6 +2627,9 @@ equal(const void *a, const void *b)
 		case T_Var:
 			retval = _equalVar(a, b);
 			break;
+		case T_GroupedVar:
+			retval = _equalGroupedVar(a, b);
+			break;
 		case T_Const:
 			retval = _equalConst(a, b);
 			break;
@@ -2594,6 +2639,9 @@ equal(const void *a, const void *b)
 		case T_Aggref:
 			retval = _equalAggref(a, b);
 			break;
+		case T_GroupingFunc:
+			retval = _equalGroupingFunc(a, b);
+			break;
 		case T_WindowFunc:
 			retval = _equalWindowFunc(a, b);
 			break;
@@ -3141,6 +3189,9 @@ equal(const void *a, const void *b)
 		case T_SortGroupClause:
 			retval = _equalSortGroupClause(a, b);
 			break;
+		case T_GroupingSet:
+			retval = _equalGroupingSet(a, b);
+			break;
 		case T_WindowClause:
 			retval = _equalWindowClause(a, b);
 			break;
diff --git a/src/backend/nodes/list.c b/src/backend/nodes/list.c
index 94cab47..a6737514 100644
--- a/src/backend/nodes/list.c
+++ b/src/backend/nodes/list.c
@@ -823,6 +823,32 @@ list_intersection(const List *list1, const List *list2)
 }
 
 /*
+ * As list_intersection but operates on lists of integers.
+ */
+List *
+list_intersection_int(const List *list1, const List *list2)
+{
+	List	   *result;
+	const ListCell *cell;
+
+	if (list1 == NIL || list2 == NIL)
+		return NIL;
+
+	Assert(IsIntegerList(list1));
+	Assert(IsIntegerList(list2));
+
+	result = NIL;
+	foreach(cell, list1)
+	{
+		if (list_member_int(list2, lfirst_int(cell)))
+			result = lappend_int(result, lfirst_int(cell));
+	}
+
+	check_list_invariants(result);
+	return result;
+}
+
+/*
  * Return a list that contains all the cells in list1 that are not in
  * list2. The returned list is freshly allocated via palloc(), but the
  * cells themselves point to the same objects as the cells of the
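The new list_intersection_int() keeps each element of list1, in order and with duplicates preserved, that appears anywhere in list2. A standalone sketch of those semantics using plain arrays in place of integer Lists (names here are illustrative, not pg_list API):

```c
#include <assert.h>
#include <stddef.h>

/* Does value v occur anywhere in xs[0..n-1]?  (Analogue of list_member_int.) */
static int
member_int(const int *xs, size_t n, int v)
{
	for (size_t i = 0; i < n; i++)
		if (xs[i] == v)
			return 1;
	return 0;
}

/*
 * Emit, in list1 order and with duplicates kept, every element of list1
 * that is also present in list2.  Returns the number of elements written.
 */
static size_t
intersection_int(const int *l1, size_t n1,
				 const int *l2, size_t n2,
				 int *out)
{
	size_t		n = 0;

	for (size_t i = 0; i < n1; i++)
		if (member_int(l2, n2, l1[i]))
			out[n++] = l1[i];
	return n;
}
```

Like list_intersection, this makes the membership test O(n1 × n2), which is fine for the short grouping-column lists involved here.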
diff --git a/src/backend/nodes/makefuncs.c b/src/backend/nodes/makefuncs.c
index 6fdf44d..a9b58eb 100644
--- a/src/backend/nodes/makefuncs.c
+++ b/src/backend/nodes/makefuncs.c
@@ -554,3 +554,18 @@ makeFuncCall(List *name, List *args, int location)
 	n->location = location;
 	return n;
 }
+
+/*
+ * makeGroupingSet
+ * Create a GroupingSet node with the given kind, content list, and location.
+ */
+GroupingSet *
+makeGroupingSet(GroupingSetKind kind, List *content, int location)
+{
+	GroupingSet	   *n = makeNode(GroupingSet);
+
+	n->kind = kind;
+	n->content = content;
+	n->location = location;
+	return n;
+}
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index d6f1f5b..4caf559 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -45,6 +45,9 @@ exprType(const Node *expr)
 		case T_Var:
 			type = ((const Var *) expr)->vartype;
 			break;
+		case T_GroupedVar:
+			type = ((const GroupedVar *) expr)->vartype;
+			break;
 		case T_Const:
 			type = ((const Const *) expr)->consttype;
 			break;
@@ -54,6 +57,9 @@ exprType(const Node *expr)
 		case T_Aggref:
 			type = ((const Aggref *) expr)->aggtype;
 			break;
+		case T_GroupingFunc:
+			type = INT4OID;
+			break;
 		case T_WindowFunc:
 			type = ((const WindowFunc *) expr)->wintype;
 			break;
@@ -261,6 +267,8 @@ exprTypmod(const Node *expr)
 	{
 		case T_Var:
 			return ((const Var *) expr)->vartypmod;
+		case T_GroupedVar:
+			return ((const GroupedVar *) expr)->vartypmod;
 		case T_Const:
 			return ((const Const *) expr)->consttypmod;
 		case T_Param:
@@ -734,6 +742,9 @@ exprCollation(const Node *expr)
 		case T_Var:
 			coll = ((const Var *) expr)->varcollid;
 			break;
+		case T_GroupedVar:
+			coll = ((const GroupedVar *) expr)->varcollid;
+			break;
 		case T_Const:
 			coll = ((const Const *) expr)->constcollid;
 			break;
@@ -743,6 +754,9 @@ exprCollation(const Node *expr)
 		case T_Aggref:
 			coll = ((const Aggref *) expr)->aggcollid;
 			break;
+		case T_GroupingFunc:
+			coll = InvalidOid;
+			break;
 		case T_WindowFunc:
 			coll = ((const WindowFunc *) expr)->wincollid;
 			break;
@@ -967,6 +981,9 @@ exprSetCollation(Node *expr, Oid collation)
 		case T_Var:
 			((Var *) expr)->varcollid = collation;
 			break;
+		case T_GroupedVar:
+			((GroupedVar *) expr)->varcollid = collation;
+			break;
 		case T_Const:
 			((Const *) expr)->constcollid = collation;
 			break;
@@ -976,6 +993,9 @@ exprSetCollation(Node *expr, Oid collation)
 		case T_Aggref:
 			((Aggref *) expr)->aggcollid = collation;
 			break;
+		case T_GroupingFunc:
+			Assert(!OidIsValid(collation));
+			break;
 		case T_WindowFunc:
 			((WindowFunc *) expr)->wincollid = collation;
 			break;
@@ -1182,6 +1202,9 @@ exprLocation(const Node *expr)
 		case T_Var:
 			loc = ((const Var *) expr)->location;
 			break;
+		case T_GroupedVar:
+			loc = ((const GroupedVar *) expr)->location;
+			break;
 		case T_Const:
 			loc = ((const Const *) expr)->location;
 			break;
@@ -1192,6 +1215,9 @@ exprLocation(const Node *expr)
 			/* function name should always be the first thing */
 			loc = ((const Aggref *) expr)->location;
 			break;
+		case T_GroupingFunc:
+			loc = ((const GroupingFunc *) expr)->location;
+			break;
 		case T_WindowFunc:
 			/* function name should always be the first thing */
 			loc = ((const WindowFunc *) expr)->location;
@@ -1481,6 +1507,9 @@ exprLocation(const Node *expr)
 			/* XMLSERIALIZE keyword should always be the first thing */
 			loc = ((const XmlSerialize *) expr)->location;
 			break;
+		case T_GroupingSet:
+			loc = ((const GroupingSet *) expr)->location;
+			break;
 		case T_WithClause:
 			loc = ((const WithClause *) expr)->location;
 			break;
@@ -1632,6 +1661,7 @@ expression_tree_walker(Node *node,
 	switch (nodeTag(node))
 	{
 		case T_Var:
+		case T_GroupedVar:
 		case T_Const:
 		case T_Param:
 		case T_CoerceToDomainValue:
@@ -1665,6 +1695,15 @@ expression_tree_walker(Node *node,
 					return true;
 			}
 			break;
+		case T_GroupingFunc:
+			{
+				GroupingFunc *grouping = (GroupingFunc *) node;
+
+				if (expression_tree_walker((Node *) grouping->args,
+										   walker, context))
+					return true;
+			}
+			break;
 		case T_WindowFunc:
 			{
 				WindowFunc *expr = (WindowFunc *) node;
@@ -2154,6 +2193,15 @@ expression_tree_mutator(Node *node,
 				return (Node *) newnode;
 			}
 			break;
+		case T_GroupedVar:
+			{
+				GroupedVar *groupedvar = (GroupedVar *) node;
+				GroupedVar *newnode;
+
+				FLATCOPY(newnode, groupedvar, GroupedVar);
+				return (Node *) newnode;
+			}
+			break;
 		case T_Const:
 			{
 				Const	   *oldnode = (Const *) node;
@@ -2195,6 +2243,29 @@ expression_tree_mutator(Node *node,
 				return (Node *) newnode;
 			}
 			break;
+		case T_GroupingFunc:
+			{
+				GroupingFunc   *grouping = (GroupingFunc *) node;
+				GroupingFunc   *newnode;
+
+				FLATCOPY(newnode, grouping, GroupingFunc);
+				MUTATE(newnode->args, grouping->args, List *);
+
+				/*
+				 * We assume here that mutating the arguments does not change
+				 * the semantics, i.e. that the arguments are not mutated in a
+				 * way that makes them semantically different from their
+				 * previously matching expressions in the GROUP BY clause.
+				 *
+				 * If a mutator somehow wanted to do this, it would have to
+				 * handle the refs and cols lists itself as appropriate.
+				 */
+				newnode->refs = list_copy(grouping->refs);
+				newnode->cols = list_copy(grouping->cols);
+
+				return (Node *) newnode;
+			}
+			break;
 		case T_WindowFunc:
 			{
 				WindowFunc *wfunc = (WindowFunc *) node;
@@ -2880,6 +2951,8 @@ raw_expression_tree_walker(Node *node,
 			break;
 		case T_RangeVar:
 			return walker(((RangeVar *) node)->alias, context);
+		case T_GroupingFunc:
+			return walker(((GroupingFunc *) node)->args, context);
 		case T_SubLink:
 			{
 				SubLink    *sublink = (SubLink *) node;
@@ -3203,6 +3276,8 @@ raw_expression_tree_walker(Node *node,
 				/* for now, constraints are ignored */
 			}
 			break;
+		case T_GroupingSet:
+			return walker(((GroupingSet *) node)->content, context);
 		case T_LockingClause:
 			return walker(((LockingClause *) node)->lockedRels, context);
 		case T_XmlSerialize:
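The exprType() change above reports INT4OID for GroupingFunc because, per the spec, GROUPING(e1, …, en) evaluates to an integer bitmask: reading the arguments left to right, a bit is set when the corresponding expression is not grouped in the current grouping set. A standalone sketch of that evaluation (the function name is illustrative, not the executor's):

```c
#include <assert.h>

/*
 * Compute the GROUPING() bitmask for nargs arguments.  arg_is_grouped[i] is
 * nonzero when argument i is one of the current grouping set's columns; the
 * leftmost argument contributes the most significant bit.
 */
static int
grouping_value(const int *arg_is_grouped, int nargs)
{
	int			result = 0;

	for (int i = 0; i < nargs; i++)
	{
		result <<= 1;
		if (!arg_is_grouped[i])
			result |= 1;
	}
	return result;
}
```

So for GROUPING(a, b), the row produced by grouping set (a) yields 1, the grand total row (empty set) yields 3, and the fully grouped rows yield 0.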
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 775f482..7a6667a 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -648,6 +648,7 @@ _outAgg(StringInfo str, const Agg *node)
 	_outPlanInfo(str, (const Plan *) node);
 
 	WRITE_ENUM_FIELD(aggstrategy, AggStrategy);
+	WRITE_INT_FIELD(chain_depth);
 	WRITE_INT_FIELD(numCols);
 
 	appendStringInfoString(str, " :grpColIdx");
@@ -659,6 +660,8 @@ _outAgg(StringInfo str, const Agg *node)
 		appendStringInfo(str, " %u", node->grpOperators[i]);
 
 	WRITE_LONG_FIELD(numGroups);
+
+	WRITE_NODE_FIELD(groupingSets);
 }
 
 static void
@@ -928,6 +931,22 @@ _outVar(StringInfo str, const Var *node)
 }
 
 static void
+_outGroupedVar(StringInfo str, const GroupedVar *node)
+{
+	WRITE_NODE_TYPE("GROUPEDVAR");
+
+	WRITE_UINT_FIELD(varno);
+	WRITE_INT_FIELD(varattno);
+	WRITE_OID_FIELD(vartype);
+	WRITE_INT_FIELD(vartypmod);
+	WRITE_OID_FIELD(varcollid);
+	WRITE_UINT_FIELD(varlevelsup);
+	WRITE_UINT_FIELD(varnoold);
+	WRITE_INT_FIELD(varoattno);
+	WRITE_LOCATION_FIELD(location);
+}
+
+static void
 _outConst(StringInfo str, const Const *node)
 {
 	WRITE_NODE_TYPE("CONST");
@@ -982,6 +1001,18 @@ _outAggref(StringInfo str, const Aggref *node)
 }
 
 static void
+_outGroupingFunc(StringInfo str, const GroupingFunc *node)
+{
+	WRITE_NODE_TYPE("GROUPINGFUNC");
+
+	WRITE_NODE_FIELD(args);
+	WRITE_NODE_FIELD(refs);
+	WRITE_NODE_FIELD(cols);
+	WRITE_INT_FIELD(agglevelsup);
+	WRITE_LOCATION_FIELD(location);
+}
+
+static void
 _outWindowFunc(StringInfo str, const WindowFunc *node)
 {
 	WRITE_NODE_TYPE("WINDOWFUNC");
@@ -2310,6 +2341,7 @@ _outQuery(StringInfo str, const Query *node)
 	WRITE_NODE_FIELD(withCheckOptions);
 	WRITE_NODE_FIELD(returningList);
 	WRITE_NODE_FIELD(groupClause);
+	WRITE_NODE_FIELD(groupingSets);
 	WRITE_NODE_FIELD(havingQual);
 	WRITE_NODE_FIELD(windowClause);
 	WRITE_NODE_FIELD(distinctClause);
@@ -2344,6 +2376,16 @@ _outSortGroupClause(StringInfo str, const SortGroupClause *node)
 }
 
 static void
+_outGroupingSet(StringInfo str, const GroupingSet *node)
+{
+	WRITE_NODE_TYPE("GROUPINGSET");
+
+	WRITE_ENUM_FIELD(kind, GroupingSetKind);
+	WRITE_NODE_FIELD(content);
+	WRITE_LOCATION_FIELD(location);
+}
+
+static void
 _outWindowClause(StringInfo str, const WindowClause *node)
 {
 	WRITE_NODE_TYPE("WINDOWCLAUSE");
@@ -2985,6 +3027,9 @@ _outNode(StringInfo str, const void *obj)
 			case T_Var:
 				_outVar(str, obj);
 				break;
+			case T_GroupedVar:
+				_outGroupedVar(str, obj);
+				break;
 			case T_Const:
 				_outConst(str, obj);
 				break;
@@ -2994,6 +3039,9 @@ _outNode(StringInfo str, const void *obj)
 			case T_Aggref:
 				_outAggref(str, obj);
 				break;
+			case T_GroupingFunc:
+				_outGroupingFunc(str, obj);
+				break;
 			case T_WindowFunc:
 				_outWindowFunc(str, obj);
 				break;
@@ -3251,6 +3299,9 @@ _outNode(StringInfo str, const void *obj)
 			case T_SortGroupClause:
 				_outSortGroupClause(str, obj);
 				break;
+			case T_GroupingSet:
+				_outGroupingSet(str, obj);
+				break;
 			case T_WindowClause:
 				_outWindowClause(str, obj);
 				break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 563209c..b35a9d3 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -216,6 +216,7 @@ _readQuery(void)
 	READ_NODE_FIELD(withCheckOptions);
 	READ_NODE_FIELD(returningList);
 	READ_NODE_FIELD(groupClause);
+	READ_NODE_FIELD(groupingSets);
 	READ_NODE_FIELD(havingQual);
 	READ_NODE_FIELD(windowClause);
 	READ_NODE_FIELD(distinctClause);
@@ -291,6 +292,21 @@ _readSortGroupClause(void)
 }
 
 /*
+ * _readGroupingSet
+ */
+static GroupingSet *
+_readGroupingSet(void)
+{
+	READ_LOCALS(GroupingSet);
+
+	READ_ENUM_FIELD(kind, GroupingSetKind);
+	READ_NODE_FIELD(content);
+	READ_LOCATION_FIELD(location);
+
+	READ_DONE();
+}
+
+/*
  * _readWindowClause
  */
 static WindowClause *
@@ -441,6 +457,27 @@ _readVar(void)
 }
 
 /*
+ * _readGroupedVar
+ */
+static GroupedVar *
+_readGroupedVar(void)
+{
+	READ_LOCALS(GroupedVar);
+
+	READ_UINT_FIELD(varno);
+	READ_INT_FIELD(varattno);
+	READ_OID_FIELD(vartype);
+	READ_INT_FIELD(vartypmod);
+	READ_OID_FIELD(varcollid);
+	READ_UINT_FIELD(varlevelsup);
+	READ_UINT_FIELD(varnoold);
+	READ_INT_FIELD(varoattno);
+	READ_LOCATION_FIELD(location);
+
+	READ_DONE();
+}
+
+/*
  * _readConst
  */
 static Const *
@@ -510,6 +547,23 @@ _readAggref(void)
 }
 
 /*
+ * _readGroupingFunc
+ */
+static GroupingFunc *
+_readGroupingFunc(void)
+{
+	READ_LOCALS(GroupingFunc);
+
+	READ_NODE_FIELD(args);
+	READ_NODE_FIELD(refs);
+	READ_NODE_FIELD(cols);
+	READ_INT_FIELD(agglevelsup);
+	READ_LOCATION_FIELD(location);
+
+	READ_DONE();
+}
+
+/*
  * _readWindowFunc
  */
 static WindowFunc *
@@ -1307,6 +1361,8 @@ parseNodeString(void)
 		return_value = _readWithCheckOption();
 	else if (MATCH("SORTGROUPCLAUSE", 15))
 		return_value = _readSortGroupClause();
+	else if (MATCH("GROUPINGSET", 11))
+		return_value = _readGroupingSet();
 	else if (MATCH("WINDOWCLAUSE", 12))
 		return_value = _readWindowClause();
 	else if (MATCH("ROWMARKCLAUSE", 13))
@@ -1323,12 +1379,16 @@ parseNodeString(void)
 		return_value = _readIntoClause();
 	else if (MATCH("VAR", 3))
 		return_value = _readVar();
+	else if (MATCH("GROUPEDVAR", 10))
+		return_value = _readGroupedVar();
 	else if (MATCH("CONST", 5))
 		return_value = _readConst();
 	else if (MATCH("PARAM", 5))
 		return_value = _readParam();
 	else if (MATCH("AGGREF", 6))
 		return_value = _readAggref();
+	else if (MATCH("GROUPINGFUNC", 12))
+		return_value = _readGroupingFunc();
 	else if (MATCH("WINDOWFUNC", 10))
 		return_value = _readWindowFunc();
 	else if (MATCH("ARRAYREF", 8))
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index 58d78e6..2c05f71 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -1241,6 +1241,7 @@ set_subquery_pathlist(PlannerInfo *root, RelOptInfo *rel,
 	 */
 	if (parse->hasAggs ||
 		parse->groupClause ||
+		parse->groupingSets ||
 		parse->havingQual ||
 		parse->distinctClause ||
 		parse->sortClause ||
@@ -2099,7 +2100,7 @@ subquery_push_qual(Query *subquery, RangeTblEntry *rte, Index rti, Node *qual)
 		 * subquery uses grouping or aggregation, put it in HAVING (since the
 		 * qual really refers to the group-result rows).
 		 */
-		if (subquery->hasAggs || subquery->groupClause || subquery->havingQual)
+		if (subquery->hasAggs || subquery->groupClause || subquery->groupingSets || subquery->havingQual)
 			subquery->havingQual = make_and_qual(subquery->havingQual, qual);
 		else
 			subquery->jointree->quals =
diff --git a/src/backend/optimizer/plan/analyzejoins.c b/src/backend/optimizer/plan/analyzejoins.c
index 11d3933..fa1de6a 100644
--- a/src/backend/optimizer/plan/analyzejoins.c
+++ b/src/backend/optimizer/plan/analyzejoins.c
@@ -581,6 +581,7 @@ query_supports_distinctness(Query *query)
 {
 	if (query->distinctClause != NIL ||
 		query->groupClause != NIL ||
+		query->groupingSets != NIL ||
 		query->hasAggs ||
 		query->havingQual ||
 		query->setOperations)
@@ -649,10 +650,10 @@ query_is_distinct_for(Query *query, List *colnos, List *opids)
 	}
 
 	/*
-	 * Similarly, GROUP BY guarantees uniqueness if all the grouped columns
-	 * appear in colnos and operator semantics match.
+	 * Similarly, GROUP BY without GROUPING SETS guarantees uniqueness if all
+	 * the grouped columns appear in colnos and operator semantics match.
 	 */
-	if (query->groupClause)
+	if (query->groupClause && !query->groupingSets)
 	{
 		foreach(l, query->groupClause)
 		{
@@ -668,6 +669,27 @@ query_is_distinct_for(Query *query, List *colnos, List *opids)
 		if (l == NULL)			/* had matches for all? */
 			return true;
 	}
+	else if (query->groupingSets)
+	{
+		/*
+		 * If we have grouping sets with expressions, we probably
+		 * don't have uniqueness and analysis would be hard. Punt.
+		 */
+		if (query->groupClause)
+			return false;
+
+		/*
+		 * If we have no groupClause (therefore no grouping expressions),
+		 * we might have one or many empty grouping sets. If there's just
+		 * one, then we're returning only one row and are certainly unique.
+		 * Otherwise, we are certainly not unique.
+		 */
+		if (list_length(query->groupingSets) == 1
+			&& ((GroupingSet *)linitial(query->groupingSets))->kind == GROUPING_SET_EMPTY)
+			return true;
+		else
+			return false;
+	}
 	else
 	{
 		/*
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index cb69c03..7b2e390 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -1029,6 +1029,8 @@ create_unique_plan(PlannerInfo *root, UniquePath *best_path)
 								 numGroupCols,
 								 groupColIdx,
 								 groupOperators,
+								 NIL,
+								 NULL,
 								 numGroups,
 								 subplan);
 	}
@@ -4360,6 +4362,7 @@ Agg *
 make_agg(PlannerInfo *root, List *tlist, List *qual,
 		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
 		 int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators,
+		 List *groupingSets, int *chain_depth_p,
 		 long numGroups,
 		 Plan *lefttree)
 {
@@ -4369,6 +4372,7 @@ make_agg(PlannerInfo *root, List *tlist, List *qual,
 	QualCost	qual_cost;
 
 	node->aggstrategy = aggstrategy;
+	node->chain_depth = chain_depth_p ? *chain_depth_p : 0;
 	node->numCols = numGroupCols;
 	node->grpColIdx = grpColIdx;
 	node->grpOperators = grpOperators;
@@ -4389,10 +4393,12 @@ make_agg(PlannerInfo *root, List *tlist, List *qual,
 	 * group otherwise.
 	 */
 	if (aggstrategy == AGG_PLAIN)
-		plan->plan_rows = 1;
+		plan->plan_rows = groupingSets ? list_length(groupingSets) : 1;
 	else
 		plan->plan_rows = numGroups;
 
+	node->groupingSets = groupingSets;
+
 	/*
 	 * We also need to account for the cost of evaluation of the qual (ie, the
 	 * HAVING clause) and the tlist.  Note that cost_qual_eval doesn't charge
@@ -4411,8 +4417,21 @@ make_agg(PlannerInfo *root, List *tlist, List *qual,
 	}
 	add_tlist_costs_to_plan(root, plan, tlist);
 
-	plan->qual = qual;
-	plan->targetlist = tlist;
+	if (aggstrategy == AGG_CHAINED)
+	{
+		Assert(!chain_depth_p);
+		plan->plan_rows = lefttree->plan_rows;
+		plan->plan_width = lefttree->plan_width;
+
+		/* supplied tlist is ignored, this is dummy */
+		plan->targetlist = lefttree->targetlist;
+		plan->qual = NULL;
+	}
+	else
+	{
+		plan->qual = qual;
+		plan->targetlist = tlist;
+	}
 	plan->lefttree = lefttree;
 	plan->righttree = NULL;
 
diff --git a/src/backend/optimizer/plan/planagg.c b/src/backend/optimizer/plan/planagg.c
index af772a2..f0e9c05 100644
--- a/src/backend/optimizer/plan/planagg.c
+++ b/src/backend/optimizer/plan/planagg.c
@@ -96,7 +96,7 @@ preprocess_minmax_aggregates(PlannerInfo *root, List *tlist)
 	 * performs assorted processing related to these features between calling
 	 * preprocess_minmax_aggregates and optimize_minmax_aggregates.)
 	 */
-	if (parse->groupClause || parse->hasWindowFuncs)
+	if (parse->groupClause || list_length(parse->groupingSets) > 1 || parse->hasWindowFuncs)
 		return;
 
 	/*
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index b02a107..52740fd 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -16,12 +16,14 @@
 #include "postgres.h"
 
 #include <limits.h>
+#include <math.h>
 
 #include "access/htup_details.h"
 #include "executor/executor.h"
 #include "executor/nodeAgg.h"
 #include "miscadmin.h"
 #include "nodes/makefuncs.h"
+#include "nodes/nodeFuncs.h"
 #ifdef OPTIMIZER_DEBUG
 #include "nodes/print.h"
 #endif
@@ -37,6 +39,7 @@
 #include "optimizer/tlist.h"
 #include "parser/analyze.h"
 #include "parser/parsetree.h"
+#include "parser/parse_agg.h"
 #include "rewrite/rewriteManip.h"
 #include "utils/rel.h"
 #include "utils/selfuncs.h"
@@ -65,6 +68,7 @@ typedef struct
 {
 	List	   *tlist;			/* preprocessed query targetlist */
 	List	   *activeWindows;	/* active windows, if any */
+	List	   *groupClause;	/* overrides parse->groupClause */
 } standard_qp_extra;
 
 /* Local functions */
@@ -77,7 +81,9 @@ static double preprocess_limit(PlannerInfo *root,
 				 double tuple_fraction,
 				 int64 *offset_est, int64 *count_est);
 static bool limit_needed(Query *parse);
-static void preprocess_groupclause(PlannerInfo *root);
+static List *preprocess_groupclause(PlannerInfo *root, List *force);
+static List *extract_rollup_sets(List *groupingSets);
+static List *reorder_grouping_sets(List *groupingSets, List *sortclause);
 static void standard_qp_callback(PlannerInfo *root, void *extra);
 static bool choose_hashed_grouping(PlannerInfo *root,
 					   double tuple_fraction, double limit_tuples,
@@ -317,6 +323,8 @@ subquery_planner(PlannerGlobal *glob, Query *parse,
 	root->append_rel_list = NIL;
 	root->rowMarks = NIL;
 	root->hasInheritedTarget = false;
+	root->groupColIdx = NULL;
+	root->grouping_map = NULL;
 
 	root->hasRecursion = hasRecursion;
 	if (hasRecursion)
@@ -533,7 +541,8 @@ subquery_planner(PlannerGlobal *glob, Query *parse,
 
 		if (contain_agg_clause(havingclause) ||
 			contain_volatile_functions(havingclause) ||
-			contain_subplans(havingclause))
+			contain_subplans(havingclause) ||
+			parse->groupingSets)
 		{
 			/* keep it in HAVING */
 			newHaving = lappend(newHaving, havingclause);
@@ -1193,11 +1202,6 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		List	   *sub_tlist;
 		AttrNumber *groupColIdx = NULL;
 		bool		need_tlist_eval = true;
-		standard_qp_extra qp_extra;
-		RelOptInfo *final_rel;
-		Path	   *cheapest_path;
-		Path	   *sorted_path;
-		Path	   *best_path;
 		long		numGroups = 0;
 		AggClauseCosts agg_costs;
 		int			numGroupCols;
@@ -1206,15 +1210,90 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		bool		use_hashed_grouping = false;
 		WindowFuncLists *wflists = NULL;
 		List	   *activeWindows = NIL;
+		int			maxref = 0;
+		List	   *refmaps = NIL;
+		List	   *rollup_lists = NIL;
+		List	   *rollup_groupclauses = NIL;
+		standard_qp_extra qp_extra;
+		RelOptInfo *final_rel;
+		Path	   *cheapest_path;
+		Path	   *sorted_path;
+		Path	   *best_path;
 
 		MemSet(&agg_costs, 0, sizeof(AggClauseCosts));
 
 		/* A recursive query should always have setOperations */
 		Assert(!root->hasRecursion);
 
-		/* Preprocess GROUP BY clause, if any */
+		/* Preprocess grouping sets, if any */
+		if (parse->groupingSets)
+			parse->groupingSets = expand_grouping_sets(parse->groupingSets, -1);
+
 		if (parse->groupClause)
-			preprocess_groupclause(root);
+		{
+			ListCell   *lc;
+
+			foreach(lc, parse->groupClause)
+			{
+				SortGroupClause *gc = lfirst(lc);
+				if (gc->tleSortGroupRef > maxref)
+					maxref = gc->tleSortGroupRef;
+			}
+		}
+
+		if (parse->groupingSets)
+		{
+			ListCell   *lc;
+			ListCell   *lc2;
+			ListCell   *lc_set;
+			List	   *sets = extract_rollup_sets(parse->groupingSets);
+
+			foreach(lc_set, sets)
+			{
+				List   *current_sets = reorder_grouping_sets(lfirst(lc_set),
+													(list_length(sets) == 1
+													 ? parse->sortClause
+													 : NIL));
+				List   *groupclause = preprocess_groupclause(root, linitial(current_sets));
+				int		ref = 0;
+				int	   *refmap;
+
+				/*
+				 * Now that we've pinned down an order for the groupClause for this
+				 * list of grouping sets, remap the entries in the grouping sets
+				 * from sortgrouprefs to plain indices into the groupClause.
+				 */
+
+				refmap = palloc0(sizeof(int) * (maxref + 1));
+
+				foreach(lc, groupclause)
+				{
+					SortGroupClause *gc = lfirst(lc);
+					refmap[gc->tleSortGroupRef] = ++ref;
+				}
+
+				foreach(lc, current_sets)
+				{
+					foreach(lc2, (List *) lfirst(lc))
+					{
+						Assert(refmap[lfirst_int(lc2)] > 0);
+						lfirst_int(lc2) = refmap[lfirst_int(lc2)] - 1;
+					}
+				}
+
+				rollup_lists = lcons(current_sets, rollup_lists);
+				rollup_groupclauses = lcons(groupclause, rollup_groupclauses);
+				refmaps = lcons(refmap, refmaps);
+			}
+		}
+		else
+		{
+			/* Preprocess GROUP BY clause, if any */
+			if (parse->groupClause)
+				parse->groupClause = preprocess_groupclause(root, NIL);
+			rollup_groupclauses = list_make1(parse->groupClause);
+		}
+
 		numGroupCols = list_length(parse->groupClause);
 
 		/* Preprocess targetlist */
@@ -1287,6 +1366,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		 * grouping/aggregation operations.
 		 */
 		if (parse->groupClause ||
+			parse->groupingSets ||
 			parse->distinctClause ||
 			parse->hasAggs ||
 			parse->hasWindowFuncs ||
@@ -1298,6 +1378,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		/* Set up data needed by standard_qp_callback */
 		qp_extra.tlist = tlist;
 		qp_extra.activeWindows = activeWindows;
+		qp_extra.groupClause = linitial(rollup_groupclauses);
 
 		/*
 		 * Generate the best unsorted and presorted paths for this Query (but
@@ -1324,15 +1405,46 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		 * to describe the fraction of the underlying un-aggregated tuples
 		 * that will be fetched.
 		 */
+
 		dNumGroups = 1;			/* in case not grouping */
 
 		if (parse->groupClause)
 		{
 			List	   *groupExprs;
 
-			groupExprs = get_sortgrouplist_exprs(parse->groupClause,
-												 parse->targetList);
-			dNumGroups = estimate_num_groups(root, groupExprs, path_rows);
+			if (parse->groupingSets)
+			{
+				ListCell   *lc,
+						   *lc2;
+
+				dNumGroups = 0;
+
+				forboth(lc, rollup_groupclauses, lc2, rollup_lists)
+				{
+					ListCell   *lc3;
+
+					groupExprs = get_sortgrouplist_exprs(lfirst(lc),
+														 parse->targetList);
+
+					foreach(lc3, lfirst(lc2))
+					{
+						List   *gset = lfirst(lc3);
+
+						dNumGroups += estimate_num_groups(root,
+														  groupExprs,
+														  path_rows,
+														  &gset);
+					}
+				}
+			}
+			else
+			{
+				groupExprs = get_sortgrouplist_exprs(parse->groupClause,
+													 parse->targetList);
+
+				dNumGroups = estimate_num_groups(root, groupExprs, path_rows,
+												 NULL);
+			}
 
 			/*
 			 * In GROUP BY mode, an absolute LIMIT is relative to the number
@@ -1343,6 +1455,9 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 			if (tuple_fraction >= 1.0)
 				tuple_fraction /= dNumGroups;
 
+			if (list_length(rollup_lists) > 1)
+				tuple_fraction = 0.0;
+
 			/*
 			 * If both GROUP BY and ORDER BY are specified, we will need two
 			 * levels of sort --- and, therefore, certainly need to read all
@@ -1358,14 +1473,17 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 									   root->group_pathkeys))
 				tuple_fraction = 0.0;
 		}
-		else if (parse->hasAggs || root->hasHavingQual)
+		else if (parse->hasAggs || root->hasHavingQual || parse->groupingSets)
 		{
 			/*
 			 * Ungrouped aggregate will certainly want to read all the tuples,
-			 * and it will deliver a single result row (so leave dNumGroups
-			 * set to 1).
+			 * and it will deliver a single result row per grouping set (or
+			 * just one row if no grouping sets were explicitly given, in
+			 * which case dNumGroups is left as-is).
 			 */
 			tuple_fraction = 0.0;
+			if (parse->groupingSets)
+				dNumGroups = list_length(parse->groupingSets);
 		}
 		else if (parse->distinctClause)
 		{
@@ -1380,7 +1498,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 
 			distinctExprs = get_sortgrouplist_exprs(parse->distinctClause,
 													parse->targetList);
-			dNumGroups = estimate_num_groups(root, distinctExprs, path_rows);
+			dNumGroups = estimate_num_groups(root, distinctExprs, path_rows, NULL);
 
 			/*
 			 * Adjust tuple_fraction the same way as for GROUP BY, too.
@@ -1463,13 +1581,24 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		{
 			/*
 			 * If grouping, decide whether to use sorted or hashed grouping.
+			 * If grouping sets are present, we can currently do only sorted
+			 * grouping.
 			 */
-			use_hashed_grouping =
-				choose_hashed_grouping(root,
-									   tuple_fraction, limit_tuples,
-									   path_rows, path_width,
-									   cheapest_path, sorted_path,
-									   dNumGroups, &agg_costs);
+
+			if (parse->groupingSets)
+			{
+				use_hashed_grouping = false;
+			}
+			else
+			{
+				use_hashed_grouping =
+					choose_hashed_grouping(root,
+										   tuple_fraction, limit_tuples,
+										   path_rows, path_width,
+										   cheapest_path, sorted_path,
+										   dNumGroups, &agg_costs);
+			}
+
 			/* Also convert # groups to long int --- but 'ware overflow! */
 			numGroups = (long) Min(dNumGroups, (double) LONG_MAX);
 		}
@@ -1535,7 +1664,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 
 			/* Detect if we'll need an explicit sort for grouping */
 			if (parse->groupClause && !use_hashed_grouping &&
-			  !pathkeys_contained_in(root->group_pathkeys, current_pathkeys))
+				!pathkeys_contained_in(root->group_pathkeys, current_pathkeys))
 			{
 				need_sort_for_grouping = true;
 
@@ -1610,52 +1739,118 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 												&agg_costs,
 												numGroupCols,
 												groupColIdx,
-									extract_grouping_ops(parse->groupClause),
+												extract_grouping_ops(parse->groupClause),
+												NIL,
+												NULL,
 												numGroups,
 												result_plan);
 				/* Hashed aggregation produces randomly-ordered results */
 				current_pathkeys = NIL;
 			}
-			else if (parse->hasAggs)
+			else if (parse->hasAggs || (parse->groupingSets && parse->groupClause))
 			{
-				/* Plain aggregate plan --- sort if needed */
-				AggStrategy aggstrategy;
+				int			chain_depth = 0;
 
-				if (parse->groupClause)
+				/*
+				 * If we need multiple grouping nodes, start stacking them up;
+				 * all except the last are chained.
+				 */
+
+				do
 				{
-					if (need_sort_for_grouping)
+					List	   *groupClause = linitial(rollup_groupclauses);
+					List	   *gsets = rollup_lists ? linitial(rollup_lists) : NIL;
+					int		   *refmap = refmaps ? linitial(refmaps) : NULL;
+					AttrNumber *new_grpColIdx = groupColIdx;
+					ListCell   *lc;
+					int			i;
+					AggStrategy aggstrategy = AGG_CHAINED;
+
+					if (groupClause)
+					{
+						if (gsets)
+						{
+							Assert(refmap);
+
+							/*
+							 * We need to remap groupColIdx, which has the column
+							 * indices for every clause in parse->groupClause
+							 * indexed by list position, to a local version for
+							 * this node which lists only the clauses included in
+							 * groupClause by position in that list. The refmap for
+							 * this node (indexed by sortgroupref) contains 0 for
+							 * clauses not present in this node's groupClause.
+							 */
+
+							new_grpColIdx = palloc0(sizeof(AttrNumber) * list_length(linitial(gsets)));
+
+							i = 0;
+							foreach(lc, parse->groupClause)
+							{
+								int j = refmap[((SortGroupClause *)lfirst(lc))->tleSortGroupRef];
+								if (j > 0)
+									new_grpColIdx[j - 1] = groupColIdx[i];
+								++i;
+							}
+						}
+
+						if (need_sort_for_grouping)
+						{
+							result_plan = (Plan *)
+								make_sort_from_groupcols(root,
+														 groupClause,
+														 new_grpColIdx,
+														 result_plan);
+						}
+						else
+							need_sort_for_grouping = true;
+
+						if (list_length(rollup_groupclauses) == 1)
+						{
+							aggstrategy = AGG_SORTED;
+
+							/*
+							 * If there aren't any other chained aggregates, then
+							 * we didn't disturb the originally required input
+							 * sort order.
+							 */
+							if (chain_depth == 0)
+								current_pathkeys = root->group_pathkeys;
+						}
+						else
+							current_pathkeys = NIL;
+					}
+					else
 					{
-						result_plan = (Plan *)
-							make_sort_from_groupcols(root,
-													 parse->groupClause,
-													 groupColIdx,
-													 result_plan);
-						current_pathkeys = root->group_pathkeys;
+						aggstrategy = AGG_PLAIN;
+						current_pathkeys = NIL;
 					}
-					aggstrategy = AGG_SORTED;
 
-					/*
-					 * The AGG node will not change the sort ordering of its
-					 * groups, so current_pathkeys describes the result too.
-					 */
-				}
-				else
-				{
-					aggstrategy = AGG_PLAIN;
-					/* Result will be only one row anyway; no sort order */
-					current_pathkeys = NIL;
-				}
+					result_plan = (Plan *) make_agg(root,
+													tlist,
+													(List *) parse->havingQual,
+													aggstrategy,
+													&agg_costs,
+													gsets ? list_length(linitial(gsets)) : numGroupCols,
+													new_grpColIdx,
+													extract_grouping_ops(groupClause),
+													gsets,
+													(aggstrategy != AGG_CHAINED) ? &chain_depth : NULL,
+													numGroups,
+													result_plan);
+
+					chain_depth += 1;
 
-				result_plan = (Plan *) make_agg(root,
-												tlist,
-												(List *) parse->havingQual,
-												aggstrategy,
-												&agg_costs,
-												numGroupCols,
-												groupColIdx,
-									extract_grouping_ops(parse->groupClause),
-												numGroups,
-												result_plan);
+					if (refmap)
+						pfree(refmap);
+					if (rollup_lists)
+						rollup_lists = list_delete_first(rollup_lists);
+					if (refmaps)
+						refmaps = list_delete_first(refmaps);
+
+					rollup_groupclauses = list_delete_first(rollup_groupclauses);
+				}
+				while (rollup_groupclauses);
 			}
 			else if (parse->groupClause)
 			{
@@ -1686,27 +1881,66 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 												  result_plan);
 				/* The Group node won't change sort ordering */
 			}
-			else if (root->hasHavingQual)
+			else if (root->hasHavingQual || parse->groupingSets)
 			{
+				int		nrows = list_length(parse->groupingSets);
+
 				/*
-				 * No aggregates, and no GROUP BY, but we have a HAVING qual.
+				 * No aggregates, and no GROUP BY, but we have a HAVING qual or
+				 * grouping sets (which by elimination of cases above must
+				 * consist solely of empty grouping sets, since otherwise
+				 * groupClause will be non-empty).
+				 *
 				 * This is a degenerate case in which we are supposed to emit
-				 * either 0 or 1 row depending on whether HAVING succeeds.
-				 * Furthermore, there cannot be any variables in either HAVING
-				 * or the targetlist, so we actually do not need the FROM
-				 * table at all!  We can just throw away the plan-so-far and
-				 * generate a Result node.  This is a sufficiently unusual
-				 * corner case that it's not worth contorting the structure of
-				 * this routine to avoid having to generate the plan in the
-				 * first place.
+				 * either 0 or 1 row for each grouping set depending on whether
+				 * HAVING succeeds.  Furthermore, there cannot be any variables
+				 * in either HAVING or the targetlist, so we actually do not
+				 * need the FROM table at all!  We can just throw away the
+				 * plan-so-far and generate a Result node.  This is a
+				 * sufficiently unusual corner case that it's not worth
+				 * contorting the structure of this routine to avoid having to
+				 * generate the plan in the first place.
 				 */
 				result_plan = (Plan *) make_result(root,
 												   tlist,
 												   parse->havingQual,
 												   NULL);
+
+				/*
+				 * Doesn't seem worthwhile writing code to cons up a
+				 * generate_series or a values scan to emit multiple rows.
+				 * Instead just clone the result in an Append.
+				 */
+				if (nrows > 1)
+				{
+					List   *plans = list_make1(result_plan);
+
+					while (--nrows > 0)
+						plans = lappend(plans, copyObject(result_plan));
+
+					result_plan = (Plan *) make_append(plans, tlist);
+				}
 			}
 		}						/* end of non-minmax-aggregate case */
 
+		/* Record grouping_map based on final groupColIdx, for setrefs */
+
+		if (parse->groupingSets)
+		{
+			AttrNumber *grouping_map = palloc0(sizeof(AttrNumber) * (maxref + 1));
+			ListCell   *lc;
+			int			i = 0;
+
+			foreach(lc, parse->groupClause)
+			{
+				SortGroupClause *gc = lfirst(lc);
+				grouping_map[gc->tleSortGroupRef] = groupColIdx[i++];
+			}
+
+			root->groupColIdx = groupColIdx;
+			root->grouping_map = grouping_map;
+		}
+
 		/*
 		 * Since each window function could require a different sort order, we
 		 * stack up a WindowAgg node for each window, with sort steps between
@@ -1869,7 +2103,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		 * result was already mostly unique).  If not, use the number of
 		 * distinct-groups calculated previously.
 		 */
-		if (parse->groupClause || root->hasHavingQual || parse->hasAggs)
+		if (parse->groupClause || parse->groupingSets || root->hasHavingQual || parse->hasAggs)
 			dNumDistinctRows = result_plan->plan_rows;
 		else
 			dNumDistinctRows = dNumGroups;
@@ -1910,6 +2144,8 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 								 extract_grouping_cols(parse->distinctClause,
 													result_plan->targetlist),
 								 extract_grouping_ops(parse->distinctClause),
+											NIL,
+											NULL,
 											numDistinctRows,
 											result_plan);
 			/* Hashed aggregation produces randomly-ordered results */
@@ -2543,19 +2779,38 @@ limit_needed(Query *parse)
  *
  * Note: we need no comparable processing of the distinctClause because
  * the parser already enforced that that matches ORDER BY.
+ *
+ * For grouping sets, the order of items is instead forced to agree with that
+ * of the grouping set (and items not in the grouping set are skipped). The
+ * work of sorting the order of grouping set elements to match the ORDER BY if
+ * possible is done elsewhere.
  */
-static void
-preprocess_groupclause(PlannerInfo *root)
+static List *
+preprocess_groupclause(PlannerInfo *root, List *force)
 {
 	Query	   *parse = root->parse;
-	List	   *new_groupclause;
+	List	   *new_groupclause = NIL;
 	bool		partial_match;
 	ListCell   *sl;
 	ListCell   *gl;
 
+	/* For grouping sets, we need to force the ordering */
+	if (force)
+	{
+		foreach(sl, force)
+		{
+			Index ref = lfirst_int(sl);
+			SortGroupClause *cl = get_sortgroupref_clause(ref, parse->groupClause);
+
+			new_groupclause = lappend(new_groupclause, cl);
+		}
+
+		return new_groupclause;
+	}
+
 	/* If no ORDER BY, nothing useful to do here */
 	if (parse->sortClause == NIL)
-		return;
+		return parse->groupClause;
 
 	/*
 	 * Scan the ORDER BY clause and construct a list of matching GROUP BY
@@ -2563,7 +2818,6 @@ preprocess_groupclause(PlannerInfo *root)
 	 *
 	 * This code assumes that the sortClause contains no duplicate items.
 	 */
-	new_groupclause = NIL;
 	foreach(sl, parse->sortClause)
 	{
 		SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
@@ -2587,7 +2841,7 @@ preprocess_groupclause(PlannerInfo *root)
 
 	/* If no match at all, no point in reordering GROUP BY */
 	if (new_groupclause == NIL)
-		return;
+		return parse->groupClause;
 
 	/*
 	 * Add any remaining GROUP BY items to the new list, but only if we were
@@ -2604,15 +2858,446 @@ preprocess_groupclause(PlannerInfo *root)
 		if (list_member_ptr(new_groupclause, gc))
 			continue;			/* it matched an ORDER BY item */
 		if (partial_match)
-			return;				/* give up, no common sort possible */
+			return parse->groupClause;	/* give up, no common sort possible */
 		if (!OidIsValid(gc->sortop))
-			return;				/* give up, GROUP BY can't be sorted */
+			return parse->groupClause;	/* give up, GROUP BY can't be sorted */
 		new_groupclause = lappend(new_groupclause, gc);
 	}
 
 	/* Success --- install the rearranged GROUP BY list */
 	Assert(list_length(parse->groupClause) == list_length(new_groupclause));
-	parse->groupClause = new_groupclause;
+	return new_groupclause;
+}
+
+
+/*
+ * We want to produce the absolute minimum possible number of lists here to
+ * avoid excess sorts. Fortunately, there is an algorithm for this; the problem
+ * of finding the minimal partition of a poset into chains (which is what we
+ * need, taking the list of grouping sets as a poset ordered by set inclusion)
+ * can be mapped to the problem of finding the maximum cardinality matching on
+ * a bipartite graph, which is solvable in polynomial time with a worst case of
+ * no worse than O(n^2.5) and usually much better. Since our N is at most 4096,
+ * we don't need to consider fallbacks to heuristic or approximate methods.
+ * (Planning time for a 12-d cube is under half a second on my modest system
+ * even with optimization off and assertions on.)
+ *
+ * We use the Hopcroft-Karp algorithm for the graph matching; it seems to work
+ * well enough for our purposes.  This implementation is based on pseudocode
+ * found at:
+ *
+ * http://en.wikipedia.org/w/index.php?title=Hopcroft%E2%80%93Karp_algorithm&oldid=593898016
+ *
+ * This implementation uses the same indices for elements of U and V (the two
+ * halves of the graph) because in our case they are always the same size, and
+ * we always know whether an index represents a u or a v. Index 0 is reserved
+ * for the NIL node.
+ */
+
+struct hk_state
+{
+	int			graph_size;		/* size of half the graph plus NIL node */
+	int			matching;
+	short	  **adjacency;		/* adjacency[u] = [n, v1,v2,v3,...,vn] */
+	short	   *pair_uv;		/* pair_uv[u] -> v */
+	short	   *pair_vu;		/* pair_vu[v] -> u */
+	float	   *distance;		/* distance[u], float so we can have +inf */
+	short	   *queue;			/* queue storage for breadth search */
+};
+
+static bool
+hk_breadth_search(struct hk_state *state)
+{
+	int			gsize = state->graph_size;
+	short	   *queue = state->queue;
+	float	   *distance = state->distance;
+	int			qhead = 0;		/* we never enqueue any node more than once */
+	int			qtail = 0;		/* so don't have to worry about wrapping */
+	int			u;
+
+	distance[0] = INFINITY;
+
+	for (u = 1; u < gsize; ++u)
+	{
+		if (state->pair_uv[u] == 0)
+		{
+			distance[u] = 0;
+			queue[qhead++] = u;
+		}
+		else
+			distance[u] = INFINITY;
+	}
+
+	while (qtail < qhead)
+	{
+		u = queue[qtail++];
+
+		if (distance[u] < distance[0])
+		{
+			short  *u_adj = state->adjacency[u];
+			int		i = u_adj ? u_adj[0] : 0;
+
+			for (; i > 0; --i)
+			{
+				int	u_next = state->pair_vu[u_adj[i]];
+
+				if (isinf(distance[u_next]))
+				{
+					distance[u_next] = 1 + distance[u];
+					queue[qhead++] = u_next;
+					Assert(qhead <= gsize+1);
+				}
+			}
+		}
+	}
+
+	return !isinf(distance[0]);
+}
+
+static bool
+hk_depth_search(struct hk_state *state, int u, int depth)
+{
+	float	   *distance = state->distance;
+	short	   *pair_uv = state->pair_uv;
+	short	   *pair_vu = state->pair_vu;
+	short	   *u_adj = state->adjacency[u];
+	int			i = u_adj ? u_adj[0] : 0;
+
+	if (u == 0)
+		return true;
+
+	if ((depth % 8) == 0)
+		check_stack_depth();
+
+	for (; i > 0; --i)
+	{
+		int		v = u_adj[i];
+
+		if (distance[pair_vu[v]] == distance[u] + 1)
+		{
+			if (hk_depth_search(state, pair_vu[v], depth+1))
+			{
+				pair_vu[v] = u;
+				pair_uv[u] = v;
+				return true;
+			}
+		}
+	}
+
+	distance[u] = INFINITY;
+	return false;
+}
+
+static struct hk_state *
+hk_match(int graph_size, short **adjacency)
+{
+	struct hk_state *state = palloc(sizeof(struct hk_state));
+
+	state->graph_size = graph_size;
+	state->matching = 0;
+	state->adjacency = adjacency;
+	state->pair_uv = palloc0(graph_size * sizeof(short));
+	state->pair_vu = palloc0(graph_size * sizeof(short));
+	state->distance = palloc(graph_size * sizeof(float));
+	state->queue = palloc((graph_size + 2) * sizeof(short));
+
+	while (hk_breadth_search(state))
+	{
+		int		u;
+
+		for (u = 1; u < graph_size; ++u)
+			if (state->pair_uv[u] == 0)
+				if (hk_depth_search(state, u, 1))
+					state->matching++;
+
+		CHECK_FOR_INTERRUPTS();		/* just in case */
+	}
+
+	return state;
+}
+
+static void
+hk_free(struct hk_state *state)
+{
+	/* adjacency matrix is treated as owned by the caller */
+	pfree(state->pair_uv);
+	pfree(state->pair_vu);
+	pfree(state->distance);
+	pfree(state->queue);
+	pfree(state);
+}
+
+/*
+ * Extract lists of grouping sets that can be implemented using a single
+ * rollup-type aggregate pass each. Returns a list of lists of grouping sets.
+ *
+ * Input must be sorted with smallest sets first. Result has each sublist
+ * sorted with smallest sets first.
+ */
+
+static List *
+extract_rollup_sets(List *groupingSets)
+{
+	int			num_sets_raw = list_length(groupingSets);
+	int			num_empty = 0;
+	int			num_sets = 0;		/* distinct sets */
+	int			num_chains = 0;
+	List	   *result = NIL;
+	List	  **results;
+	List	  **orig_sets;
+	Bitmapset **set_masks;
+	int		   *chains;
+	short	  **adjacency;
+	short	   *adjacency_buf;
+	struct hk_state *state;
+	int			i;
+	int			j;
+	int			j_size;
+	ListCell   *lc1 = list_head(groupingSets);
+	ListCell   *lc;
+
+	/*
+	 * Start by stripping out empty sets.  The algorithm doesn't require this,
+	 * but the planner currently needs all empty sets to be returned in the
+	 * first list, so we strip them here and add them back after.
+	 */
+
+	while (lc1 && lfirst(lc1) == NIL)
+	{
+		++num_empty;
+		lc1 = lnext(lc1);
+	}
+
+	/* bail out now if it turns out that all we had were empty sets. */
+
+	if (!lc1)
+		return list_make1(groupingSets);
+
+	/*
+	 * We don't strictly need to remove duplicate sets here, but if we
+	 * don't, they tend to become scattered through the result, which is
+	 * a bit confusing (and irritating if we ever decide to optimize them
+	 * out). So we remove them here and add them back after.
+	 *
+	 * For each non-duplicate set, we fill in the following:
+	 *
+	 * orig_sets[i] = list of the original set lists
+	 * set_masks[i] = bitmapset for testing inclusion
+	 * adjacency[i] = array [n, v1, v2, ... vn] of adjacency indices
+	 *
+	 * chains[i] will be the result group this set is assigned to.
+	 *
+	 * We index all of these from 1 rather than 0 because it is convenient
+	 * to leave 0 free for the NIL node in the graph algorithm.
+	 */
+
+	orig_sets = palloc0((num_sets_raw + 1) * sizeof(List*));
+	set_masks = palloc0((num_sets_raw + 1) * sizeof(Bitmapset *));
+	adjacency = palloc0((num_sets_raw + 1) * sizeof(short *));
+	adjacency_buf = palloc((num_sets_raw + 1) * sizeof(short));
+
+	j_size = 0;
+	j = 0;
+	i = 1;
+
+	for_each_cell(lc, lc1)
+	{
+		List	   *candidate = lfirst(lc);
+		Bitmapset  *candidate_set = NULL;
+		ListCell   *lc2;
+		int			dup_of = 0;
+
+		foreach(lc2, candidate)
+		{
+			candidate_set = bms_add_member(candidate_set, lfirst_int(lc2));
+		}
+
+		/* we can only be a dup if we're the same length as a previous set */
+		if (j_size == list_length(candidate))
+		{
+			int		k;
+			for (k = j; k < i; ++k)
+			{
+				if (bms_equal(set_masks[k], candidate_set))
+				{
+					dup_of = k;
+					break;
+				}
+			}
+		}
+		else if (j_size < list_length(candidate))
+		{
+			j_size = list_length(candidate);
+			j = i;
+		}
+
+		if (dup_of > 0)
+		{
+			orig_sets[dup_of] = lappend(orig_sets[dup_of], candidate);
+			bms_free(candidate_set);
+		}
+		else
+		{
+			int		k;
+			int		n_adj = 0;
+
+			orig_sets[i] = list_make1(candidate);
+			set_masks[i] = candidate_set;
+
+			/* fill in adjacency list; no need to compare equal-size sets */
+
+			for (k = j - 1; k > 0; --k)
+			{
+				if (bms_is_subset(set_masks[k], candidate_set))
+					adjacency_buf[++n_adj] = k;
+			}
+
+			if (n_adj > 0)
+			{
+				adjacency_buf[0] = n_adj;
+				adjacency[i] = palloc((n_adj + 1) * sizeof(short));
+				memcpy(adjacency[i], adjacency_buf, (n_adj + 1) * sizeof(short));
+			}
+			else
+				adjacency[i] = NULL;
+
+			++i;
+		}
+	}
+
+	num_sets = i - 1;
+
+	/*
+	 * Apply the matching algorithm to do the work.
+	 */
+
+	state = hk_match(num_sets + 1, adjacency);
+
+	/*
+	 * Now, the state->pair* fields have the info we need to assign sets to
+	 * chains. Two sets (u,v) belong to the same chain if pair_uv[u] = v or
+	 * pair_vu[v] = u (both will be true, but we check both so that we can do
+	 * it in one pass.)
+	 */
+
+	chains = palloc0((num_sets + 1) * sizeof(int));
+
+	for (i = 1; i <= num_sets; ++i)
+	{
+		int u = state->pair_vu[i];
+		int v = state->pair_uv[i];
+
+		if (u > 0 && u < i)
+			chains[i] = chains[u];
+		else if (v > 0 && v < i)
+			chains[i] = chains[v];
+		else
+			chains[i] = ++num_chains;
+	}
+
+	/* build result lists. */
+
+	results = palloc0((num_chains + 1) * sizeof(List*));
+
+	for (i = 1; i <= num_sets; ++i)
+	{
+		int c = chains[i];
+
+		Assert(c > 0);
+
+		results[c] = list_concat(results[c], orig_sets[i]);
+	}
+
+	/* push any empty sets back on the first list. */
+
+	while (num_empty-- > 0)
+		results[1] = lcons(NIL, results[1]);
+
+	/* make result list */
+
+	for (i = 1; i <= num_chains; ++i)
+		result = lappend(result, results[i]);
+
+	/*
+	 * Free all the things.
+	 *
+	 * (This is over-fussy for small sets but for large sets we could have tied
+	 * up a nontrivial amount of memory.)
+	 */
+
+	hk_free(state);
+	pfree(results);
+	pfree(chains);
+	for (i = 1; i <= num_sets; ++i)
+		if (adjacency[i])
+			pfree(adjacency[i]);
+	pfree(adjacency);
+	pfree(adjacency_buf);
+	pfree(orig_sets);
+	for (i = 1; i <= num_sets; ++i)
+		bms_free(set_masks[i]);
+	pfree(set_masks);
+
+	return result;
+}
+
+/*
+ * Reorder the elements of a list of grouping sets such that they have correct
+ * prefix relationships.
+ *
+ * The input must be ordered with smallest sets first; the result is returned
+ * with largest sets first.
+ *
+ * If we're passed in a sortclause, we follow its order of columns to the
+ * extent possible, to minimize the chance that we add unnecessary sorts.
+ * (We're trying here to ensure that GROUPING SETS ((a,b,c),(c)) ORDER BY c,b,a
+ * gets implemented in one pass.)
+ */
+static List *
+reorder_grouping_sets(List *groupingsets, List *sortclause)
+{
+	ListCell   *lc;
+	ListCell   *lc2;
+	List	   *previous = NIL;
+	List	   *result = NIL;
+
+	foreach(lc, groupingsets)
+	{
+		List   *candidate = lfirst(lc);
+		List   *new_elems = list_difference_int(candidate, previous);
+
+		if (list_length(new_elems) > 0)
+		{
+			while (list_length(sortclause) > list_length(previous))
+			{
+				SortGroupClause *sc = list_nth(sortclause, list_length(previous));
+				int ref = sc->tleSortGroupRef;
+				if (list_member_int(new_elems, ref))
+				{
+					previous = lappend_int(previous, ref);
+					new_elems = list_delete_int(new_elems, ref);
+				}
+				else
+				{
+					/* diverged from the sortclause; give up on it */
+					sortclause = NIL;
+					break;
+				}
+			}
+
+			foreach(lc2, new_elems)
+			{
+				previous = lappend_int(previous, lfirst_int(lc2));
+			}
+		}
+
+		result = lcons(list_copy(previous), result);
+		list_free(new_elems);
+	}
+
+	list_free(previous);
+
+	return result;
 }
 
 /*
@@ -2631,11 +3316,11 @@ standard_qp_callback(PlannerInfo *root, void *extra)
 	 * sortClause is certainly sort-able, but GROUP BY and DISTINCT might not
 	 * be, in which case we just leave their pathkeys empty.
 	 */
-	if (parse->groupClause &&
-		grouping_is_sortable(parse->groupClause))
+	if (qp_extra->groupClause &&
+		grouping_is_sortable(qp_extra->groupClause))
 		root->group_pathkeys =
 			make_pathkeys_for_sortclauses(root,
-										  parse->groupClause,
+										  qp_extra->groupClause,
 										  tlist);
 	else
 		root->group_pathkeys = NIL;
@@ -3060,7 +3745,7 @@ make_subplanTargetList(PlannerInfo *root,
 	 * If we're not grouping or aggregating, there's nothing to do here;
 	 * query_planner should receive the unmodified target list.
 	 */
-	if (!parse->hasAggs && !parse->groupClause && !root->hasHavingQual &&
+	if (!parse->hasAggs && !parse->groupClause && !parse->groupingSets && !root->hasHavingQual &&
 		!parse->hasWindowFuncs)
 	{
 		*need_tlist_eval = true;
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index ec828cd..11c9e82 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -67,6 +67,12 @@ typedef struct
 	int			rtoffset;
 } fix_upper_expr_context;
 
+typedef struct
+{
+	PlannerInfo *root;
+	Bitmapset   *groupedcols;
+} set_group_vars_context;
+
 /*
  * Check if a Const node is a regclass value.  We accept plain OID too,
  * since a regclass Const will get folded to that type if it's an argument
@@ -133,6 +139,8 @@ static List *set_returning_clause_references(PlannerInfo *root,
 static bool fix_opfuncids_walker(Node *node, void *context);
 static bool extract_query_dependencies_walker(Node *node,
 								  PlannerInfo *context);
+static void set_group_vars(PlannerInfo *root, Agg *agg);
+static Node *set_group_vars_mutator(Node *node, set_group_vars_context *context);
 
 
 /*****************************************************************************
@@ -660,6 +668,17 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
 			}
 			break;
 		case T_Agg:
+			if (((Agg *) plan)->aggstrategy == AGG_CHAINED)
+			{
+				/* chained agg does not evaluate tlist */
+				set_dummy_tlist_references(plan, rtoffset);
+			}
+			else
+			{
+				set_upper_references(root, plan, rtoffset);
+				set_group_vars(root, (Agg *) plan);
+			}
+			break;
 		case T_Group:
 			set_upper_references(root, plan, rtoffset);
 			break;
@@ -1074,6 +1093,7 @@ copyVar(Var *var)
  * We must look up operator opcode info for OpExpr and related nodes,
  * add OIDs from regclass Const nodes into root->glob->relationOids, and
  * add catalog TIDs for user-defined functions into root->glob->invalItems.
+ * We also fill in column index lists for GROUPING() expressions.
  *
  * We assume it's okay to update opcode info in-place.  So this could possibly
  * scribble on the planner's input data structures, but it's OK.
@@ -1137,6 +1157,31 @@ fix_expr_common(PlannerInfo *root, Node *node)
 				lappend_oid(root->glob->relationOids,
 							DatumGetObjectId(con->constvalue));
 	}
+	else if (IsA(node, GroupingFunc))
+	{
+		GroupingFunc *g = (GroupingFunc *) node;
+		AttrNumber *refmap = root->grouping_map;
+
+		/* If there are no grouping sets, we don't need this. */
+
+		Assert(refmap || g->cols == NIL);
+
+		if (refmap)
+		{
+			ListCell   *lc;
+			List	   *cols = NIL;
+
+			foreach(lc, g->refs)
+			{
+				cols = lappend_int(cols, refmap[lfirst_int(lc)]);
+			}
+
+			Assert(!g->cols || equal(cols, g->cols));
+
+			if (!g->cols)
+				g->cols = cols;
+		}
+	}
 }
 
 /*
@@ -1264,6 +1309,98 @@ fix_scan_expr_walker(Node *node, fix_scan_expr_context *context)
 								  (void *) context);
 }
 
+
+/*
+ * set_group_vars
+ *    Modify any Var references in the target list and quals of an Agg node
+ *    that computes grouping sets to use GroupedVar instead, which will
+ *    conditionally replace their values with nulls at runtime.
+ */
+static void
+set_group_vars(PlannerInfo *root, Agg *agg)
+{
+	set_group_vars_context context;
+	AttrNumber *groupColIdx = root->groupColIdx;
+	int			numCols = list_length(root->parse->groupClause);
+	int 		i;
+	Bitmapset  *cols = NULL;
+
+	if (!agg->groupingSets)
+		return;
+
+	if (!groupColIdx)
+	{
+		Assert(numCols == agg->numCols);
+		groupColIdx = agg->grpColIdx;
+	}
+
+	context.root = root;
+
+	for (i = 0; i < numCols; ++i)
+		cols = bms_add_member(cols, groupColIdx[i]);
+
+	context.groupedcols = cols;
+
+	agg->plan.targetlist = (List *) set_group_vars_mutator((Node *) agg->plan.targetlist,
+														   &context);
+	agg->plan.qual = (List *) set_group_vars_mutator((Node *) agg->plan.qual,
+													 &context);
+}
+
+static Node *
+set_group_vars_mutator(Node *node, set_group_vars_context *context)
+{
+	if (node == NULL)
+		return NULL;
+	if (IsA(node, Var))
+	{
+		Var *var = (Var *) node;
+
+		if (var->varno == OUTER_VAR
+			&& bms_is_member(var->varattno, context->groupedcols))
+		{
+			var = copyVar(var);
+			var->xpr.type = T_GroupedVar;
+		}
+
+		return (Node *) var;
+	}
+	else if (IsA(node, Aggref))
+	{
+		/*
+		 * Don't recurse into the arguments or filter of Aggrefs, since they
+		 * see the values prior to grouping.  But do recurse into direct args,
+		 * if any.
+		 */
+
+		if (((Aggref *)node)->aggdirectargs != NIL)
+		{
+			Aggref *newnode = palloc(sizeof(Aggref));
+
+			memcpy(newnode, node, sizeof(Aggref));
+
+			newnode->aggdirectargs
+				= (List *) expression_tree_mutator((Node *) newnode->aggdirectargs,
+												   set_group_vars_mutator,
+												   (void *) context);
+
+			return (Node *) newnode;
+		}
+
+		return node;
+	}
+	else if (IsA(node, GroupingFunc))
+	{
+		/*
+		 * GroupingFuncs don't see the values at all.
+		 */
+		return node;
+	}
+	return expression_tree_mutator(node, set_group_vars_mutator,
+								   (void *) context);
+}
+
+
 /*
  * set_join_references
  *	  Modify the target list and quals of a join node to reference its
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index 5a1d539..afecea4 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -79,7 +79,8 @@ static Node *process_sublinks_mutator(Node *node,
 static Bitmapset *finalize_plan(PlannerInfo *root,
 			  Plan *plan,
 			  Bitmapset *valid_params,
-			  Bitmapset *scan_params);
+			  Bitmapset *scan_params,
+			  Agg *agg_chain_head);
 static bool finalize_primnode(Node *node, finalize_primnode_context *context);
 
 
@@ -336,6 +337,48 @@ replace_outer_agg(PlannerInfo *root, Aggref *agg)
 }
 
 /*
+ * Generate a Param node to replace the given GroupingFunc expression which is
+ * expected to have agglevelsup > 0 (ie, it is not local).
+ */
+static Param *
+replace_outer_grouping(PlannerInfo *root, GroupingFunc *grp)
+{
+	Param	   *retval;
+	PlannerParamItem *pitem;
+	Index		levelsup;
+
+	Assert(grp->agglevelsup > 0 && grp->agglevelsup < root->query_level);
+
+	/* Find the query level the GroupingFunc belongs to */
+	for (levelsup = grp->agglevelsup; levelsup > 0; levelsup--)
+		root = root->parent_root;
+
+	/*
+	 * It does not seem worthwhile to try to de-duplicate outer GROUPING
+	 * expressions.  Just make a new slot every time.
+	 */
+	grp = (GroupingFunc *) copyObject(grp);
+	IncrementVarSublevelsUp((Node *) grp, -((int) grp->agglevelsup), 0);
+	Assert(grp->agglevelsup == 0);
+
+	pitem = makeNode(PlannerParamItem);
+	pitem->item = (Node *) grp;
+	pitem->paramId = root->glob->nParamExec++;
+
+	root->plan_params = lappend(root->plan_params, pitem);
+
+	retval = makeNode(Param);
+	retval->paramkind = PARAM_EXEC;
+	retval->paramid = pitem->paramId;
+	retval->paramtype = exprType((Node *) grp);
+	retval->paramtypmod = -1;
+	retval->paramcollid = InvalidOid;
+	retval->location = grp->location;
+
+	return retval;
+}
+
+/*
  * Generate a new Param node that will not conflict with any other.
  *
  * This is used to create Params representing subplan outputs.
@@ -1490,14 +1533,16 @@ simplify_EXISTS_query(PlannerInfo *root, Query *query)
 {
 	/*
 	 * We don't try to simplify at all if the query uses set operations,
-	 * aggregates, modifying CTEs, HAVING, OFFSET, or FOR UPDATE/SHARE; none
-	 * of these seem likely in normal usage and their possible effects are
-	 * complex.  (Note: we could ignore an "OFFSET 0" clause, but that
-	 * traditionally is used as an optimization fence, so we don't.)
+	 * aggregates, grouping sets, modifying CTEs, HAVING, OFFSET, or FOR
+	 * UPDATE/SHARE; none of these seem likely in normal usage and their
+	 * possible effects are complex.  (Note: we could ignore an "OFFSET 0"
+	 * clause, but that traditionally is used as an optimization fence, so we
+	 * don't.)
 	 */
 	if (query->commandType != CMD_SELECT ||
 		query->setOperations ||
 		query->hasAggs ||
+		query->groupingSets ||
 		query->hasWindowFuncs ||
 		query->hasModifyingCTE ||
 		query->havingQual ||
@@ -1847,6 +1892,11 @@ replace_correlation_vars_mutator(Node *node, PlannerInfo *root)
 		if (((Aggref *) node)->agglevelsup > 0)
 			return (Node *) replace_outer_agg(root, (Aggref *) node);
 	}
+	if (IsA(node, GroupingFunc))
+	{
+		if (((GroupingFunc *) node)->agglevelsup > 0)
+			return (Node *) replace_outer_grouping(root, (GroupingFunc *) node);
+	}
 	return expression_tree_mutator(node,
 								   replace_correlation_vars_mutator,
 								   (void *) root);
@@ -2077,7 +2127,7 @@ SS_finalize_plan(PlannerInfo *root, Plan *plan, bool attach_initplans)
 	/*
 	 * Now recurse through plan tree.
 	 */
-	(void) finalize_plan(root, plan, valid_params, NULL);
+	(void) finalize_plan(root, plan, valid_params, NULL, NULL);
 
 	bms_free(valid_params);
 
@@ -2128,7 +2178,7 @@ SS_finalize_plan(PlannerInfo *root, Plan *plan, bool attach_initplans)
  */
 static Bitmapset *
 finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
-			  Bitmapset *scan_params)
+			  Bitmapset *scan_params, Agg *agg_chain_head)
 {
 	finalize_primnode_context context;
 	int			locally_added_param;
@@ -2343,7 +2393,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2359,7 +2410,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2375,7 +2427,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2391,7 +2444,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2407,7 +2461,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2474,8 +2529,30 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 							  &context);
 			break;
 
-		case T_Hash:
 		case T_Agg:
+			{
+				Agg	   *agg = (Agg *) plan;
+
+				if (agg->aggstrategy == AGG_CHAINED)
+				{
+					Assert(agg_chain_head);
+
+					/*
+					 * Our real tlist and qual are the ones in the chain head,
+					 * not the local ones, which are just dummies used for
+					 * passthrough.  Fortunately we can call finalize_primnode
+					 * more than once.
+					 */
+
+					finalize_primnode((Node *) agg_chain_head->plan.targetlist, &context);
+					finalize_primnode((Node *) agg_chain_head->plan.qual, &context);
+				}
+				else if (agg->chain_depth > 0)
+					agg_chain_head = agg;
+			}
+			break;
+
+		case T_Hash:
 		case T_Material:
 		case T_Sort:
 		case T_Unique:
@@ -2492,7 +2569,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 	child_params = finalize_plan(root,
 								 plan->lefttree,
 								 valid_params,
-								 scan_params);
+								 scan_params,
+								 agg_chain_head);
 	context.paramids = bms_add_members(context.paramids, child_params);
 
 	if (nestloop_params)
@@ -2501,7 +2579,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 		child_params = finalize_plan(root,
 									 plan->righttree,
 									 bms_union(nestloop_params, valid_params),
-									 scan_params);
+									 scan_params,
+									 agg_chain_head);
 		/* ... and they don't count as parameters used at my level */
 		child_params = bms_difference(child_params, nestloop_params);
 		bms_free(nestloop_params);
@@ -2512,7 +2591,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 		child_params = finalize_plan(root,
 									 plan->righttree,
 									 valid_params,
-									 scan_params);
+									 scan_params,
+									 agg_chain_head);
 	}
 	context.paramids = bms_add_members(context.paramids, child_params);
 
diff --git a/src/backend/optimizer/prep/prepjointree.c b/src/backend/optimizer/prep/prepjointree.c
index 8a0199b..00ae12c 100644
--- a/src/backend/optimizer/prep/prepjointree.c
+++ b/src/backend/optimizer/prep/prepjointree.c
@@ -1297,6 +1297,7 @@ is_simple_subquery(Query *subquery, RangeTblEntry *rte,
 	if (subquery->hasAggs ||
 		subquery->hasWindowFuncs ||
 		subquery->groupClause ||
+		subquery->groupingSets ||
 		subquery->havingQual ||
 		subquery->sortClause ||
 		subquery->distinctClause ||
diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c
index b90fee3..d8a6391 100644
--- a/src/backend/optimizer/prep/prepunion.c
+++ b/src/backend/optimizer/prep/prepunion.c
@@ -268,13 +268,15 @@ recurse_set_operations(Node *setOp, PlannerInfo *root,
 		 */
 		if (pNumGroups)
 		{
-			if (subquery->groupClause || subquery->distinctClause ||
+			if (subquery->groupClause || subquery->groupingSets ||
+				subquery->distinctClause ||
 				subroot->hasHavingQual || subquery->hasAggs)
 				*pNumGroups = subplan->plan_rows;
 			else
 				*pNumGroups = estimate_num_groups(subroot,
 								get_tlist_exprs(subquery->targetList, false),
-												  subplan->plan_rows);
+												  subplan->plan_rows,
+												  NULL);
 		}
 
 		/*
@@ -771,6 +773,8 @@ make_union_unique(SetOperationStmt *op, Plan *plan,
 								 extract_grouping_cols(groupList,
 													   plan->targetlist),
 								 extract_grouping_ops(groupList),
+								 NIL,
+								 NULL,
 								 numGroups,
 								 plan);
 		/* Hashed aggregation produces randomly-ordered results */
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 84d58ae..ccbc670 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -4307,6 +4307,7 @@ inline_function(Oid funcid, Oid result_type, Oid result_collid,
 		querytree->jointree->fromlist ||
 		querytree->jointree->quals ||
 		querytree->groupClause ||
+		querytree->groupingSets ||
 		querytree->havingQual ||
 		querytree->windowClause ||
 		querytree->distinctClause ||
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index 1395a21..e88f728 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1338,7 +1338,7 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
 	}
 
 	/* Estimate number of output rows */
-	pathnode->path.rows = estimate_num_groups(root, uniq_exprs, rel->rows);
+	pathnode->path.rows = estimate_num_groups(root, uniq_exprs, rel->rows, NULL);
 	numCols = list_length(uniq_exprs);
 
 	if (all_btree)
diff --git a/src/backend/optimizer/util/tlist.c b/src/backend/optimizer/util/tlist.c
index a1a504b..f702b8c 100644
--- a/src/backend/optimizer/util/tlist.c
+++ b/src/backend/optimizer/util/tlist.c
@@ -395,6 +395,28 @@ get_sortgrouplist_exprs(List *sgClauses, List *targetList)
  *****************************************************************************/
 
 /*
+ * get_sortgroupref_clause
+ *		Find the SortGroupClause matching the given SortGroupRef index,
+ *		and return it.
+ */
+SortGroupClause *
+get_sortgroupref_clause(Index sortref, List *clauses)
+{
+	ListCell   *l;
+
+	foreach(l, clauses)
+	{
+		SortGroupClause *cl = (SortGroupClause *) lfirst(l);
+
+		if (cl->tleSortGroupRef == sortref)
+			return cl;
+	}
+
+	elog(ERROR, "ORDER/GROUP BY expression not found in list");
+	return NULL;				/* keep compiler quiet */
+}
+
+/*
  * extract_grouping_ops - make an array of the equality operator OIDs
  *		for a SortGroupClause list
  */
diff --git a/src/backend/optimizer/util/var.c b/src/backend/optimizer/util/var.c
index 8f86432..0f25539 100644
--- a/src/backend/optimizer/util/var.c
+++ b/src/backend/optimizer/util/var.c
@@ -564,6 +564,30 @@ pull_var_clause_walker(Node *node, pull_var_clause_context *context)
 				break;
 		}
 	}
+	else if (IsA(node, GroupingFunc))
+	{
+		if (((GroupingFunc *) node)->agglevelsup != 0)
+			elog(ERROR, "upper-level GROUPING found where not expected");
+		switch (context->aggbehavior)
+		{
+			case PVC_REJECT_AGGREGATES:
+				elog(ERROR, "GROUPING found where not expected");
+				break;
+			case PVC_INCLUDE_AGGREGATES:
+				context->varlist = lappend(context->varlist, node);
+				/* we do NOT descend into the contained expression */
+				return false;
+			case PVC_RECURSE_AGGREGATES:
+				/*
+				 * We do NOT descend into the contained expression, even
+				 * if the caller asked for it, because we never actually
+				 * evaluate it; the result is driven entirely off the
+				 * associated GROUP BY clause, so we never need to extract
+				 * the actual Vars here.
+				 */
+				return false;
+		}
+	}
 	else if (IsA(node, PlaceHolderVar))
 	{
 		if (((PlaceHolderVar *) node)->phlevelsup != 0)
diff --git a/src/backend/parser/analyze.c b/src/backend/parser/analyze.c
index a68f2e8..fe93b87 100644
--- a/src/backend/parser/analyze.c
+++ b/src/backend/parser/analyze.c
@@ -964,6 +964,7 @@ transformSelectStmt(ParseState *pstate, SelectStmt *stmt)
 
 	qry->groupClause = transformGroupClause(pstate,
 											stmt->groupClause,
+											&qry->groupingSets,
 											&qry->targetList,
 											qry->sortClause,
 											EXPR_KIND_GROUP_BY,
@@ -1010,7 +1011,7 @@ transformSelectStmt(ParseState *pstate, SelectStmt *stmt)
 	qry->hasSubLinks = pstate->p_hasSubLinks;
 	qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
 	qry->hasAggs = pstate->p_hasAggs;
-	if (pstate->p_hasAggs || qry->groupClause || qry->havingQual)
+	if (pstate->p_hasAggs || qry->groupClause || qry->groupingSets || qry->havingQual)
 		parseCheckAggregates(pstate, qry);
 
 	foreach(l, stmt->lockingClause)
@@ -1470,7 +1471,7 @@ transformSetOperationStmt(ParseState *pstate, SelectStmt *stmt)
 	qry->hasSubLinks = pstate->p_hasSubLinks;
 	qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
 	qry->hasAggs = pstate->p_hasAggs;
-	if (pstate->p_hasAggs || qry->groupClause || qry->havingQual)
+	if (pstate->p_hasAggs || qry->groupClause || qry->groupingSets || qry->havingQual)
 		parseCheckAggregates(pstate, qry);
 
 	foreach(l, lockingClause)
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 581f7a1..f491979 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -366,6 +366,10 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				create_generic_options alter_generic_options
 				relation_expr_list dostmt_opt_list
 
+%type <list>	group_by_list
+%type <node>	group_by_item empty_grouping_set rollup_clause cube_clause
+%type <node>	grouping_sets_clause
+
 %type <list>	opt_fdw_options fdw_options
 %type <defelt>	fdw_option
 
@@ -431,7 +435,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <list>	ExclusionConstraintList ExclusionConstraintElem
 %type <list>	func_arg_list
 %type <node>	func_arg_expr
-%type <list>	row type_list array_expr_list
+%type <list>	row explicit_row implicit_row type_list array_expr_list
 %type <node>	case_expr case_arg when_clause case_default
 %type <list>	when_clause_list
 %type <ival>	sub_type
@@ -553,7 +557,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	CLUSTER COALESCE COLLATE COLLATION COLUMN COMMENT COMMENTS COMMIT
 	COMMITTED CONCURRENTLY CONFIGURATION CONNECTION CONSTRAINT CONSTRAINTS
 	CONTENT_P CONTINUE_P CONVERSION_P COPY COST CREATE
-	CROSS CSV CURRENT_P
+	CROSS CSV CUBE CURRENT_P
 	CURRENT_CATALOG CURRENT_DATE CURRENT_ROLE CURRENT_SCHEMA
 	CURRENT_TIME CURRENT_TIMESTAMP CURRENT_USER CURSOR CYCLE
 
@@ -568,7 +572,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	FALSE_P FAMILY FETCH FILTER FIRST_P FLOAT_P FOLLOWING FOR
 	FORCE FOREIGN FORWARD FREEZE FROM FULL FUNCTION FUNCTIONS
 
-	GLOBAL GRANT GRANTED GREATEST GROUP_P
+	GLOBAL GRANT GRANTED GREATEST GROUP_P GROUPING
 
 	HANDLER HAVING HEADER_P HOLD HOUR_P
 
@@ -602,11 +606,11 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 	RANGE READ REAL REASSIGN RECHECK RECURSIVE REF REFERENCES REFRESH REINDEX
 	RELATIVE_P RELEASE RENAME REPEATABLE REPLACE REPLICA
-	RESET RESTART RESTRICT RETURNING RETURNS REVOKE RIGHT ROLE ROLLBACK
+	RESET RESTART RESTRICT RETURNING RETURNS REVOKE RIGHT ROLE ROLLBACK ROLLUP
 	ROW ROWS RULE
 
 	SAVEPOINT SCHEMA SCROLL SEARCH SECOND_P SECURITY SELECT SEQUENCE SEQUENCES
-	SERIALIZABLE SERVER SESSION SESSION_USER SET SETOF SHARE
+	SERIALIZABLE SERVER SESSION SESSION_USER SET SETS SETOF SHARE
 	SHOW SIMILAR SIMPLE SKIP SMALLINT SNAPSHOT SOME STABLE STANDALONE_P START
 	STATEMENT STATISTICS STDIN STDOUT STORAGE STRICT_P STRIP_P SUBSTRING
 	SYMMETRIC SYSID SYSTEM_P
@@ -664,6 +668,11 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
  * and for NULL so that it can follow b_expr in ColQualList without creating
  * postfix-operator problems.
  *
+ * To support CUBE and ROLLUP in GROUP BY without reserving them, we give them
+ * an explicit precedence lower than '(', so that a rule with CUBE '(' will shift
+ * rather than reducing a conflicting rule that takes CUBE as a function name.
+ * Using the same precedence as IDENT seems right for the reasons given above.
+ *
  * The frame_bound productions UNBOUNDED PRECEDING and UNBOUNDED FOLLOWING
  * are even messier: since UNBOUNDED is an unreserved keyword (per spec!),
  * there is no principled way to distinguish these from the productions
@@ -674,7 +683,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
  * blame any funny behavior of UNBOUNDED on the SQL standard, though.
  */
 %nonassoc	UNBOUNDED		/* ideally should have same precedence as IDENT */
-%nonassoc	IDENT NULL_P PARTITION RANGE ROWS PRECEDING FOLLOWING
+%nonassoc	IDENT NULL_P PARTITION RANGE ROWS PRECEDING FOLLOWING CUBE ROLLUP
 %left		Op OPERATOR		/* multi-character ops and user-defined operators */
 %nonassoc	NOTNULL
 %nonassoc	ISNULL
@@ -10153,11 +10162,79 @@ first_or_next: FIRST_P								{ $$ = 0; }
 		;
 
 
+/*
+ * This syntax for group_clause tries to follow the spec quite closely.
+ * However, the spec allows only column references, not expressions,
+ * which introduces an ambiguity between implicit row constructors
+ * (a,b) and lists of column references.
+ *
+ * We handle this by using the a_expr production for what the spec calls
+ * <ordinary grouping set>, which in the spec represents either one column
+ * reference or a parenthesized list of column references. Then, we check the
+ * top node of the a_expr to see if it's an implicit RowExpr, and if so, just
+ * grab and use the list, discarding the node.  (This is done in parse
+ * analysis, not here.)
+ *
+ * (We abuse the row_format field of RowExpr to distinguish implicit and
+ * explicit row constructors; it's debatable whether anyone sanely wants to
+ * use them in a group clause, but if they have a reason to, we make it
+ * possible.)
+ *
+ * Each item in the group_clause list is either an expression tree or a
+ * GroupingSet node of some type.
+ */
+
 group_clause:
-			GROUP_P BY expr_list					{ $$ = $3; }
+			GROUP_P BY group_by_list				{ $$ = $3; }
 			| /*EMPTY*/								{ $$ = NIL; }
 		;
 
+group_by_list:
+			group_by_item							{ $$ = list_make1($1); }
+			| group_by_list ',' group_by_item		{ $$ = lappend($1,$3); }
+		;
+
+group_by_item:
+			a_expr									{ $$ = $1; }
+			| empty_grouping_set					{ $$ = $1; }
+			| cube_clause							{ $$ = $1; }
+			| rollup_clause							{ $$ = $1; }
+			| grouping_sets_clause					{ $$ = $1; }
+		;
+
+empty_grouping_set:
+			'(' ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_EMPTY, NIL, @1);
+				}
+		;
+
+/*
+ * These hacks rely on setting precedence of CUBE and ROLLUP below that of '(',
+ * so that they shift in these rules rather than reducing the conflicting
+ * unreserved_keyword rule.
+ */
+
+rollup_clause:
+			ROLLUP '(' expr_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_ROLLUP, $3, @1);
+				}
+		;
+
+cube_clause:
+			CUBE '(' expr_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_CUBE, $3, @1);
+				}
+		;
+
+grouping_sets_clause:
+			GROUPING SETS '(' group_by_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_SETS, $4, @1);
+				}
+		;
+
 having_clause:
 			HAVING a_expr							{ $$ = $2; }
 			| /*EMPTY*/								{ $$ = NULL; }
@@ -11742,15 +11819,33 @@ c_expr:		columnref								{ $$ = $1; }
 					n->location = @1;
 					$$ = (Node *)n;
 				}
-			| row
+			| explicit_row
 				{
 					RowExpr *r = makeNode(RowExpr);
 					r->args = $1;
 					r->row_typeid = InvalidOid;	/* not analyzed yet */
 					r->colnames = NIL;	/* to be filled in during analysis */
+					r->row_format = COERCE_EXPLICIT_CALL; /* abuse */
 					r->location = @1;
 					$$ = (Node *)r;
 				}
+			| implicit_row
+				{
+					RowExpr *r = makeNode(RowExpr);
+					r->args = $1;
+					r->row_typeid = InvalidOid;	/* not analyzed yet */
+					r->colnames = NIL;	/* to be filled in during analysis */
+					r->row_format = COERCE_IMPLICIT_CAST; /* abuse */
+					r->location = @1;
+					$$ = (Node *)r;
+				}
+			| GROUPING '(' expr_list ')'
+			  {
+				  GroupingFunc *g = makeNode(GroupingFunc);
+				  g->args = $3;
+				  g->location = @1;
+				  $$ = (Node *)g;
+			  }
 		;
 
 func_application: func_name '(' ')'
@@ -12500,6 +12595,13 @@ row:		ROW '(' expr_list ')'					{ $$ = $3; }
 			| '(' expr_list ',' a_expr ')'			{ $$ = lappend($2, $4); }
 		;
 
+explicit_row:	ROW '(' expr_list ')'				{ $$ = $3; }
+			| ROW '(' ')'							{ $$ = NIL; }
+		;
+
+implicit_row:	'(' expr_list ',' a_expr ')'		{ $$ = lappend($2, $4); }
+		;
+
 sub_type:	ANY										{ $$ = ANY_SUBLINK; }
 			| SOME									{ $$ = ANY_SUBLINK; }
 			| ALL									{ $$ = ALL_SUBLINK; }
@@ -13229,6 +13331,7 @@ unreserved_keyword:
 			| COPY
 			| COST
 			| CSV
+			| CUBE
 			| CURRENT_P
 			| CURSOR
 			| CYCLE
@@ -13377,6 +13480,7 @@ unreserved_keyword:
 			| REVOKE
 			| ROLE
 			| ROLLBACK
+			| ROLLUP
 			| ROWS
 			| RULE
 			| SAVEPOINT
@@ -13391,6 +13495,7 @@ unreserved_keyword:
 			| SERVER
 			| SESSION
 			| SET
+			| SETS
 			| SHARE
 			| SHOW
 			| SIMPLE
@@ -13474,6 +13579,7 @@ col_name_keyword:
 			| EXTRACT
 			| FLOAT_P
 			| GREATEST
+			| GROUPING
 			| INOUT
 			| INT_P
 			| INTEGER
diff --git a/src/backend/parser/parse_agg.c b/src/backend/parser/parse_agg.c
index 7b0e668..19391d0 100644
--- a/src/backend/parser/parse_agg.c
+++ b/src/backend/parser/parse_agg.c
@@ -42,7 +42,9 @@ typedef struct
 {
 	ParseState *pstate;
 	Query	   *qry;
+	PlannerInfo *root;
 	List	   *groupClauses;
+	List	   *groupClauseCommonVars;
 	bool		have_non_var_grouping;
 	List	  **func_grouped_rels;
 	int			sublevels_up;
@@ -56,11 +58,18 @@ static int check_agg_arguments(ParseState *pstate,
 static bool check_agg_arguments_walker(Node *node,
 						   check_agg_arguments_context *context);
 static void check_ungrouped_columns(Node *node, ParseState *pstate, Query *qry,
-						List *groupClauses, bool have_non_var_grouping,
+						List *groupClauses, List *groupClauseVars,
+						bool have_non_var_grouping,
 						List **func_grouped_rels);
 static bool check_ungrouped_columns_walker(Node *node,
 							   check_ungrouped_columns_context *context);
-
+static void finalize_grouping_exprs(Node *node, ParseState *pstate, Query *qry,
+									List *groupClauses, PlannerInfo *root,
+									bool have_non_var_grouping);
+static bool finalize_grouping_exprs_walker(Node *node,
+							   check_ungrouped_columns_context *context);
+static void check_agglevels_and_constraints(ParseState *pstate, Node *expr);
+static List *expand_groupingset_node(GroupingSet *gs);
 
 /*
  * transformAggregateCall -
@@ -96,10 +105,7 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 	List	   *tdistinct = NIL;
 	AttrNumber	attno = 1;
 	int			save_next_resno;
-	int			min_varlevel;
 	ListCell   *lc;
-	const char *err;
-	bool		errkind;
 
 	if (AGGKIND_IS_ORDERED_SET(agg->aggkind))
 	{
@@ -214,15 +220,96 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 	agg->aggorder = torder;
 	agg->aggdistinct = tdistinct;
 
+	check_agglevels_and_constraints(pstate, (Node *) agg);
+}
+
+/*
+ * transformGroupingFunc -
+ *		Transform a GROUPING expression
+ *
+ * GROUPING() behaves very like an aggregate.  Processing of levels and nesting
+ * is done as for aggregates.  We set p_hasAggs for these expressions too.
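+ *
+ * Per the spec, GROUPING() evaluates to an integer bitmask with one bit per
+ * argument, which is why the argument list is capped at 31 entries below:
+ * the result must fit in a signed int32.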
+ */
+Node *
+transformGroupingFunc(ParseState *pstate, GroupingFunc *p)
+{
+	ListCell   *lc;
+	List	   *args = p->args;
+	List	   *result_list = NIL;
+	GroupingFunc *result = makeNode(GroupingFunc);
+
+	if (list_length(args) > 31)
+		ereport(ERROR,
+				(errcode(ERRCODE_TOO_MANY_ARGUMENTS),
+				 errmsg("GROUPING must have fewer than 32 arguments"),
+				 parser_errposition(pstate, p->location)));
+
+	foreach(lc, args)
+	{
+		Node	   *current_result;
+
+		current_result = transformExpr(pstate, (Node *) lfirst(lc),
+									   pstate->p_expr_kind);
+
+		/* acceptability of expressions is checked later */
+
+		result_list = lappend(result_list, current_result);
+	}
+
+	result->args = result_list;
+	result->location = p->location;
+
+	check_agglevels_and_constraints(pstate, (Node *) result);
+
+	return (Node *) result;
+}
+
+/*
+ * Aggregate functions and grouping operations (which are combined in the spec
+ * as <set function specification>) are very similar with regard to level and
+ * nesting restrictions (though we allow a lot more things than the spec does).
+ * Centralise those restrictions here.
+ */
+static void
+check_agglevels_and_constraints(ParseState *pstate, Node *expr)
+{
+	List	   *directargs = NIL;
+	List	   *args = NIL;
+	Expr	   *filter = NULL;
+	int			min_varlevel;
+	int			location = -1;
+	Index	   *p_levelsup;
+	const char *err;
+	bool		errkind;
+	bool		isAgg = IsA(expr, Aggref);
+
+	if (isAgg)
+	{
+		Aggref *agg = (Aggref *) expr;
+
+		directargs = agg->aggdirectargs;
+		args = agg->args;
+		filter = agg->aggfilter;
+		location = agg->location;
+		p_levelsup = &agg->agglevelsup;
+	}
+	else
+	{
+		GroupingFunc *grp = (GroupingFunc *) expr;
+
+		args = grp->args;
+		location = grp->location;
+		p_levelsup = &grp->agglevelsup;
+	}
+
 	/*
 	 * Check the arguments to compute the aggregate's level and detect
 	 * improper nesting.
 	 */
 	min_varlevel = check_agg_arguments(pstate,
-									   agg->aggdirectargs,
-									   agg->args,
-									   agg->aggfilter);
-	agg->agglevelsup = min_varlevel;
+									   directargs,
+									   args,
+									   filter);
+
+	*p_levelsup = min_varlevel;
 
 	/* Mark the correct pstate level as having aggregates */
 	while (min_varlevel-- > 0)
@@ -247,20 +334,32 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			Assert(false);		/* can't happen */
 			break;
 		case EXPR_KIND_OTHER:
-			/* Accept aggregate here; caller must throw error if wanted */
+			/* Accept aggregate/grouping here; caller must throw error if wanted */
 			break;
 		case EXPR_KIND_JOIN_ON:
 		case EXPR_KIND_JOIN_USING:
-			err = _("aggregate functions are not allowed in JOIN conditions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in JOIN conditions");
+			else
+				err = _("grouping operations are not allowed in JOIN conditions");
+
 			break;
 		case EXPR_KIND_FROM_SUBSELECT:
 			/* Should only be possible in a LATERAL subquery */
 			Assert(pstate->p_lateral_active);
-			/* Aggregate scope rules make it worth being explicit here */
-			err = _("aggregate functions are not allowed in FROM clause of their own query level");
+			/* Aggregate/grouping scope rules make it worth being explicit here */
+			if (isAgg)
+				err = _("aggregate functions are not allowed in FROM clause of their own query level");
+			else
+				err = _("grouping operations are not allowed in FROM clause of their own query level");
+
 			break;
 		case EXPR_KIND_FROM_FUNCTION:
-			err = _("aggregate functions are not allowed in functions in FROM");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in functions in FROM");
+			else
+				err = _("grouping operations are not allowed in functions in FROM");
+
 			break;
 		case EXPR_KIND_WHERE:
 			errkind = true;
@@ -278,10 +377,18 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			/* okay */
 			break;
 		case EXPR_KIND_WINDOW_FRAME_RANGE:
-			err = _("aggregate functions are not allowed in window RANGE");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in window RANGE");
+			else
+				err = _("grouping operations are not allowed in window RANGE");
+
 			break;
 		case EXPR_KIND_WINDOW_FRAME_ROWS:
-			err = _("aggregate functions are not allowed in window ROWS");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in window ROWS");
+			else
+				err = _("grouping operations are not allowed in window ROWS");
+
 			break;
 		case EXPR_KIND_SELECT_TARGET:
 			/* okay */
@@ -312,26 +419,55 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			break;
 		case EXPR_KIND_CHECK_CONSTRAINT:
 		case EXPR_KIND_DOMAIN_CHECK:
-			err = _("aggregate functions are not allowed in check constraints");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in check constraints");
+			else
+				err = _("grouping operations are not allowed in check constraints");
+
 			break;
 		case EXPR_KIND_COLUMN_DEFAULT:
 		case EXPR_KIND_FUNCTION_DEFAULT:
-			err = _("aggregate functions are not allowed in DEFAULT expressions");
+
+			if (isAgg)
+				err = _("aggregate functions are not allowed in DEFAULT expressions");
+			else
+				err = _("grouping operations are not allowed in DEFAULT expressions");
+
 			break;
 		case EXPR_KIND_INDEX_EXPRESSION:
-			err = _("aggregate functions are not allowed in index expressions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in index expressions");
+			else
+				err = _("grouping operations are not allowed in index expressions");
+
 			break;
 		case EXPR_KIND_INDEX_PREDICATE:
-			err = _("aggregate functions are not allowed in index predicates");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in index predicates");
+			else
+				err = _("grouping operations are not allowed in index predicates");
+
 			break;
 		case EXPR_KIND_ALTER_COL_TRANSFORM:
-			err = _("aggregate functions are not allowed in transform expressions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in transform expressions");
+			else
+				err = _("grouping operations are not allowed in transform expressions");
+
 			break;
 		case EXPR_KIND_EXECUTE_PARAMETER:
-			err = _("aggregate functions are not allowed in EXECUTE parameters");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in EXECUTE parameters");
+			else
+				err = _("grouping operations are not allowed in EXECUTE parameters");
+
 			break;
 		case EXPR_KIND_TRIGGER_WHEN:
-			err = _("aggregate functions are not allowed in trigger WHEN conditions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in trigger WHEN conditions");
+			else
+				err = _("grouping operations are not allowed in trigger WHEN conditions");
+
 			break;
 
 			/*
@@ -342,18 +478,22 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			 * which is sane anyway.
 			 */
 	}
+
 	if (err)
 		ereport(ERROR,
 				(errcode(ERRCODE_GROUPING_ERROR),
 				 errmsg_internal("%s", err),
-				 parser_errposition(pstate, agg->location)));
+				 parser_errposition(pstate, location)));
+
 	if (errkind)
+	{
+		if (isAgg)
+			/* translator: %s is name of a SQL construct, eg GROUP BY */
+			err = _("aggregate functions are not allowed in %s");
+		else
+			/* translator: %s is name of a SQL construct, eg GROUP BY */
+			err = _("grouping operations are not allowed in %s");
+
 		ereport(ERROR,
 				(errcode(ERRCODE_GROUPING_ERROR),
-		/* translator: %s is name of a SQL construct, eg GROUP BY */
-				 errmsg("aggregate functions are not allowed in %s",
-						ParseExprKindName(pstate->p_expr_kind)),
-				 parser_errposition(pstate, agg->location)));
+				 errmsg_internal(err,
+								 ParseExprKindName(pstate->p_expr_kind)),
+				 parser_errposition(pstate, location)));
+	}
 }
 
 /*
@@ -507,6 +647,21 @@ check_agg_arguments_walker(Node *node,
 		/* no need to examine args of the inner aggregate */
 		return false;
 	}
+	if (IsA(node, GroupingFunc))
+	{
+		int			agglevelsup = ((GroupingFunc *) node)->agglevelsup;
+
+		/* convert levelsup to frame of reference of original query */
+		agglevelsup -= context->sublevels_up;
+		/* ignore local aggs of subqueries */
+		if (agglevelsup >= 0)
+		{
+			if (context->min_agglevel < 0 ||
+				context->min_agglevel > agglevelsup)
+				context->min_agglevel = agglevelsup;
+		}
+		/* Continue and descend into subtree */
+	}
 	/* We can throw error on sight for a window function */
 	if (IsA(node, WindowFunc))
 		ereport(ERROR,
@@ -527,6 +682,7 @@ check_agg_arguments_walker(Node *node,
 		context->sublevels_up--;
 		return result;
 	}
+
 	return expression_tree_walker(node,
 								  check_agg_arguments_walker,
 								  (void *) context);
@@ -770,17 +926,67 @@ transformWindowFuncCall(ParseState *pstate, WindowFunc *wfunc,
 void
 parseCheckAggregates(ParseState *pstate, Query *qry)
 {
+	List       *gset_common = NIL;
 	List	   *groupClauses = NIL;
+	List	   *groupClauseCommonVars = NIL;
 	bool		have_non_var_grouping;
 	List	   *func_grouped_rels = NIL;
 	ListCell   *l;
 	bool		hasJoinRTEs;
 	bool		hasSelfRefRTEs;
-	PlannerInfo *root;
+	PlannerInfo *root = NULL;
 	Node	   *clause;
 
 	/* This should only be called if we found aggregates or grouping */
-	Assert(pstate->p_hasAggs || qry->groupClause || qry->havingQual);
+	Assert(pstate->p_hasAggs || qry->groupClause || qry->havingQual || qry->groupingSets);
+
+	/*
+	 * If we have grouping sets, expand them and find the intersection of all
+	 * sets.
+	 */
+	if (qry->groupingSets)
+	{
+		/*
+		 * The limit of 4096 is arbitrary and exists simply to avoid resource
+		 * issues from pathological constructs.
+		 */
+		List *gsets = expand_grouping_sets(qry->groupingSets, 4096);
+
+		if (!gsets)
+			ereport(ERROR,
+					(errcode(ERRCODE_STATEMENT_TOO_COMPLEX),
+					 errmsg("too many grouping sets present (maximum 4096)"),
+					 parser_errposition(pstate,
+										qry->groupClause
+										? exprLocation((Node *) qry->groupClause)
+										: exprLocation((Node *) qry->groupingSets))));
+
+		/*
+		 * The intersection will often be empty, so help things along by
+		 * seeding the intersect with the smallest set.
+		 */
+		gset_common = linitial(gsets);
+
+		if (gset_common)
+		{
+			for_each_cell(l, lnext(list_head(gsets)))
+			{
+				gset_common = list_intersection_int(gset_common, lfirst(l));
+				if (!gset_common)
+					break;
+			}
+		}
+
+		/*
+		 * If there was only one grouping set in the expansion, AND if the
+		 * groupClause is non-empty (meaning that the grouping set is not empty
+		 * either), then we can ditch the grouping set and pretend we just had
+		 * a normal GROUP BY.
+		 */
+
+		if (list_length(gsets) == 1 && qry->groupClause)
+			qry->groupingSets = NIL;
+	}
 
 	/*
 	 * Scan the range table to see if there are JOIN or self-reference CTE
@@ -800,15 +1006,19 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 	/*
 	 * Build a list of the acceptable GROUP BY expressions for use by
 	 * check_ungrouped_columns().
+	 *
+	 * We get the TLE, not just the expr, because GROUPING wants to know
+	 * the sortgroupref.
 	 */
 	foreach(l, qry->groupClause)
 	{
 		SortGroupClause *grpcl = (SortGroupClause *) lfirst(l);
-		Node	   *expr;
+		TargetEntry	   *expr;
 
-		expr = get_sortgroupclause_expr(grpcl, qry->targetList);
+		expr = get_sortgroupclause_tle(grpcl, qry->targetList);
 		if (expr == NULL)
 			continue;			/* probably cannot happen */
+
 		groupClauses = lcons(expr, groupClauses);
 	}
 
@@ -830,21 +1040,28 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 		groupClauses = (List *) flatten_join_alias_vars(root,
 													  (Node *) groupClauses);
 	}
-	else
-		root = NULL;			/* keep compiler quiet */
 
 	/*
 	 * Detect whether any of the grouping expressions aren't simple Vars; if
 	 * they're all Vars then we don't have to work so hard in the recursive
 	 * scans.  (Note we have to flatten aliases before this.)
+	 *
+	 * Track Vars that are included in all grouping sets separately in
+	 * groupClauseCommonVars, since these are the only ones we can use to check
+	 * for functional dependencies.
 	 */
 	have_non_var_grouping = false;
 	foreach(l, groupClauses)
 	{
-		if (!IsA((Node *) lfirst(l), Var))
+		TargetEntry *tle = lfirst(l);
+
+		if (!IsA(tle->expr, Var))
 		{
 			have_non_var_grouping = true;
-			break;
+		}
+		else if (!qry->groupingSets
+				 || list_member_int(gset_common, tle->ressortgroupref))
+		{
+			groupClauseCommonVars = lappend(groupClauseCommonVars, tle->expr);
 		}
 	}
 
@@ -855,19 +1072,30 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 	 * this will also find ungrouped variables that came from ORDER BY and
 	 * WINDOW clauses.  For that matter, it's also going to examine the
 	 * grouping expressions themselves --- but they'll all pass the test ...
+	 *
+	 * We also finalize GROUPING expressions, but for that we need to traverse
+	 * the original (unflattened) clause in order to modify nodes.
 	 */
 	clause = (Node *) qry->targetList;
+	finalize_grouping_exprs(clause, pstate, qry,
+							groupClauses, root,
+							have_non_var_grouping);
 	if (hasJoinRTEs)
 		clause = flatten_join_alias_vars(root, clause);
 	check_ungrouped_columns(clause, pstate, qry,
-							groupClauses, have_non_var_grouping,
+							groupClauses, groupClauseCommonVars,
+							have_non_var_grouping,
 							&func_grouped_rels);
 
 	clause = (Node *) qry->havingQual;
+	finalize_grouping_exprs(clause, pstate, qry,
+							groupClauses, root,
+							have_non_var_grouping);
 	if (hasJoinRTEs)
 		clause = flatten_join_alias_vars(root, clause);
 	check_ungrouped_columns(clause, pstate, qry,
-							groupClauses, have_non_var_grouping,
+							groupClauses, groupClauseCommonVars,
+							have_non_var_grouping,
 							&func_grouped_rels);
 
 	/*
@@ -904,14 +1132,17 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
  */
 static void
 check_ungrouped_columns(Node *node, ParseState *pstate, Query *qry,
-						List *groupClauses, bool have_non_var_grouping,
+						List *groupClauses, List *groupClauseCommonVars,
+						bool have_non_var_grouping,
 						List **func_grouped_rels)
 {
 	check_ungrouped_columns_context context;
 
 	context.pstate = pstate;
 	context.qry = qry;
+	context.root = NULL;
 	context.groupClauses = groupClauses;
+	context.groupClauseCommonVars = groupClauseCommonVars;
 	context.have_non_var_grouping = have_non_var_grouping;
 	context.func_grouped_rels = func_grouped_rels;
 	context.sublevels_up = 0;
@@ -965,6 +1196,16 @@ check_ungrouped_columns_walker(Node *node,
 			return false;
 	}
 
+	if (IsA(node, GroupingFunc))
+	{
+		GroupingFunc *grp = (GroupingFunc *) node;
+
+		/* handled GroupingFunc separately, no need to recheck at this level */
+		if ((int) grp->agglevelsup >= context->sublevels_up)
+			return false;
+	}
+
 	/*
 	 * If we have any GROUP BY items that are not simple Vars, check to see if
 	 * subexpression as a whole matches any GROUP BY item. We need to do this
@@ -976,7 +1217,9 @@ check_ungrouped_columns_walker(Node *node,
 	{
 		foreach(gl, context->groupClauses)
 		{
-			if (equal(node, lfirst(gl)))
+			TargetEntry *tle = lfirst(gl);
+
+			if (equal(node, tle->expr))
 				return false;	/* acceptable, do not descend more */
 		}
 	}
@@ -1003,13 +1246,15 @@ check_ungrouped_columns_walker(Node *node,
 		{
 			foreach(gl, context->groupClauses)
 			{
-				Var		   *gvar = (Var *) lfirst(gl);
+				Var		   *gvar = (Var *) ((TargetEntry *) lfirst(gl))->expr;
 
 				if (IsA(gvar, Var) &&
 					gvar->varno == var->varno &&
 					gvar->varattno == var->varattno &&
 					gvar->varlevelsup == 0)
+				{
 					return false;		/* acceptable, we're okay */
+				}
 			}
 		}
 
@@ -1040,7 +1285,7 @@ check_ungrouped_columns_walker(Node *node,
 			if (check_functional_grouping(rte->relid,
 										  var->varno,
 										  0,
-										  context->groupClauses,
+										  context->groupClauseCommonVars,
 										  &context->qry->constraintDeps))
 			{
 				*context->func_grouped_rels =
@@ -1085,6 +1330,396 @@ check_ungrouped_columns_walker(Node *node,
 }
 
 /*
+ * finalize_grouping_exprs -
+ *	  Scan the given expression tree for GROUPING() and related calls,
+ *    and validate and process their arguments.
+ *
+ * This is split out from check_ungrouped_columns above because it needs
+ * to modify the nodes (which it does in-place, not via a mutator) while
+ * check_ungrouped_columns may see only a copy of the original thanks to
+ * flattening of join alias vars. So here, we flatten each individual
+ * GROUPING argument as we see it before comparing it.
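+ *
+ * For example, given SELECT GROUPING(a) ... GROUP BY a, the walker below
+ * matches the argument "a" against the group clauses and records its
+ * ressortgroupref in the GroupingFunc's refs list for later use.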
+ */
+static void
+finalize_grouping_exprs(Node *node, ParseState *pstate, Query *qry,
+						List *groupClauses, PlannerInfo *root,
+						bool have_non_var_grouping)
+{
+	check_ungrouped_columns_context context;
+
+	context.pstate = pstate;
+	context.qry = qry;
+	context.root = root;
+	context.groupClauses = groupClauses;
+	context.groupClauseCommonVars = NIL;
+	context.have_non_var_grouping = have_non_var_grouping;
+	context.func_grouped_rels = NULL;
+	context.sublevels_up = 0;
+	context.in_agg_direct_args = false;
+	finalize_grouping_exprs_walker(node, &context);
+}
+
+static bool
+finalize_grouping_exprs_walker(Node *node,
+							   check_ungrouped_columns_context *context)
+{
+	ListCell   *gl;
+
+	if (node == NULL)
+		return false;
+	if (IsA(node, Const) ||
+		IsA(node, Param))
+		return false;			/* constants are always acceptable */
+
+	if (IsA(node, Aggref))
+	{
+		Aggref	   *agg = (Aggref *) node;
+
+		if ((int) agg->agglevelsup == context->sublevels_up)
+		{
+			/*
+			 * If we find an aggregate call of the original level, do not
+			 * recurse into its normal arguments, ORDER BY arguments, or
+			 * filter; GROUPING exprs of this level are not allowed there. But
+			 * check direct arguments as though they weren't in an aggregate.
+			 */
+			bool		result;
+
+			Assert(!context->in_agg_direct_args);
+			context->in_agg_direct_args = true;
+			result = finalize_grouping_exprs_walker((Node *) agg->aggdirectargs,
+													context);
+			context->in_agg_direct_args = false;
+			return result;
+		}
+
+		/*
+		 * We can skip recursing into aggregates of higher levels altogether,
+		 * since they could not possibly contain exprs of concern to us (see
+		 * transformAggregateCall).  We do need to look at aggregates of lower
+		 * levels, however.
+		 */
+		if ((int) agg->agglevelsup > context->sublevels_up)
+			return false;
+	}
+
+	if (IsA(node, GroupingFunc))
+	{
+		GroupingFunc *grp = (GroupingFunc *) node;
+
+		/*
+		 * We only need to check GroupingFunc nodes at the exact level to which
+		 * they belong, since they cannot mix levels in arguments.
+		 */
+
+		if ((int) grp->agglevelsup == context->sublevels_up)
+		{
+			ListCell  *lc;
+			List 	  *ref_list = NIL;
+
+			foreach(lc, grp->args)
+			{
+				Node   *expr = lfirst(lc);
+				Index	ref = 0;
+
+				if (context->root)
+					expr = flatten_join_alias_vars(context->root, expr);
+
+				/*
+				 * Each expression must match a grouping entry at the current
+				 * query level. Unlike the general expression case, we don't
+				 * allow functional dependencies or outer references.
+				 */
+
+				if (IsA(expr, Var))
+				{
+					Var *var = (Var *) expr;
+
+					if (var->varlevelsup == context->sublevels_up)
+					{
+						foreach(gl, context->groupClauses)
+						{
+							TargetEntry *tle = lfirst(gl);
+							Var	  		*gvar = (Var *) tle->expr;
+
+							if (IsA(gvar, Var) &&
+								gvar->varno == var->varno &&
+								gvar->varattno == var->varattno &&
+								gvar->varlevelsup == 0)
+							{
+								ref = tle->ressortgroupref;
+								break;
+							}
+						}
+					}
+				}
+				else if (context->have_non_var_grouping
+						 && context->sublevels_up == 0)
+				{
+					foreach(gl, context->groupClauses)
+					{
+						TargetEntry *tle = lfirst(gl);
+
+						if (equal(expr, tle->expr))
+						{
+							ref = tle->ressortgroupref;
+							break;
+						}
+					}
+				}
+
+				if (ref == 0)
+					ereport(ERROR,
+							(errcode(ERRCODE_GROUPING_ERROR),
+							 errmsg("arguments to GROUPING must be grouping expressions of the associated query level"),
+							 parser_errposition(context->pstate,
+												exprLocation(expr))));
+
+				ref_list = lappend_int(ref_list, ref);
+			}
+
+			grp->refs = ref_list;
+		}
+
+		if ((int) grp->agglevelsup > context->sublevels_up)
+			return false;
+	}
+
+	if (IsA(node, Query))
+	{
+		/* Recurse into subselects */
+		bool		result;
+
+		context->sublevels_up++;
+		result = query_tree_walker((Query *) node,
+								   finalize_grouping_exprs_walker,
+								   (void *) context,
+								   0);
+		context->sublevels_up--;
+		return result;
+	}
+	return expression_tree_walker(node, finalize_grouping_exprs_walker,
+								  (void *) context);
+}
+
+
+/*
+ * Given a GroupingSet node, expand it and return a list of lists.
+ *
+ * For EMPTY nodes, return a list of one empty list.
+ *
+ * For SIMPLE nodes, return a list of one list, which is the node content.
+ *
+ * For CUBE and ROLLUP nodes, return a list of the expansions.
+ *
+ * For SET nodes, recursively expand contained CUBE and ROLLUP.
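+ *
+ * For example (as implemented below), ROLLUP(a,b) expands to the list
+ * [(a,b), (a), ()], while CUBE(a,b) expands to [(), (a), (b), (a,b)].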
+ */
+static List *
+expand_groupingset_node(GroupingSet *gs)
+{
+	List	   *result = NIL;
+
+	switch (gs->kind)
+	{
+		case GROUPING_SET_EMPTY:
+			result = list_make1(NIL);
+			break;
+
+		case GROUPING_SET_SIMPLE:
+			result = list_make1(gs->content);
+			break;
+
+		case GROUPING_SET_ROLLUP:
+			{
+				List	   *rollup_val = gs->content;
+				ListCell   *lc;
+				int			curgroup_size = list_length(gs->content);
+
+				while (curgroup_size > 0)
+				{
+					List   *current_result = NIL;
+					int		i = curgroup_size;
+
+					foreach(lc, rollup_val)
+					{
+						GroupingSet *gs_current = (GroupingSet *) lfirst(lc);
+
+						Assert(gs_current->kind == GROUPING_SET_SIMPLE);
+
+						current_result
+							= list_concat(current_result,
+										  list_copy(gs_current->content));
+
+						/* If we are done with making the current group, break */
+						if (--i == 0)
+							break;
+					}
+
+					result = lappend(result, current_result);
+					--curgroup_size;
+				}
+
+				result = lappend(result, NIL);
+			}
+			break;
+
+		case GROUPING_SET_CUBE:
+			{
+				List   *cube_list = gs->content;
+				int		number_bits = list_length(cube_list);
+				uint32	num_sets;
+				uint32	i;
+
+				/* parser should cap this much lower */
+				Assert(number_bits < 31);
+
+				num_sets = (1U << number_bits);
+
+				for (i = 0; i < num_sets; i++)
+				{
+					List *current_result = NIL;
+					ListCell *lc;
+					uint32 mask = 1U;
+
+					foreach(lc, cube_list)
+					{
+						GroupingSet *gs_current = (GroupingSet *) lfirst(lc);
+
+						Assert(gs_current->kind == GROUPING_SET_SIMPLE);
+
+						if (mask & i)
+						{
+							current_result
+								= list_concat(current_result,
+											  list_copy(gs_current->content));
+						}
+
+						mask <<= 1;
+					}
+
+					result = lappend(result, current_result);
+				}
+			}
+			break;
+
+		case GROUPING_SET_SETS:
+			{
+				ListCell   *lc;
+
+				foreach(lc, gs->content)
+				{
+					List *current_result = expand_groupingset_node(lfirst(lc));
+
+					result = list_concat(result, current_result);
+				}
+			}
+			break;
+	}
+
+	return result;
+}
+
+static int
+cmp_list_len_asc(const void *a, const void *b)
+{
+	int			la = list_length(*(List *const *) a);
+	int			lb = list_length(*(List *const *) b);
+	return (la > lb) ? 1 : (la == lb) ? 0 : -1;
+}
+
+/*
+ * Expand a groupingSets clause to a flat list of grouping sets.
+ * The returned list is sorted by length, shortest sets first.
+ *
+ * This is mainly for the planner, but we use it here too to do
+ * some consistency checks.
+ */
+
+List *
+expand_grouping_sets(List *groupingSets, int limit)
+{
+	List	   *expanded_groups = NIL;
+	List       *result = NIL;
+	double		numsets = 1;
+	ListCell   *lc;
+
+	if (groupingSets == NIL)
+		return NIL;
+
+	foreach(lc, groupingSets)
+	{
+		List *current_result = NIL;
+		GroupingSet *gs = lfirst(lc);
+
+		current_result = expand_groupingset_node(gs);
+
+		Assert(current_result != NIL);
+
+		numsets *= list_length(current_result);
+
+		if (limit >= 0 && numsets > limit)
+			return NIL;
+
+		expanded_groups = lappend(expanded_groups, current_result);
+	}
+
+	/*
+	 * Do a cartesian product between the sublists of expanded_groups.
+	 * While at it, remove any duplicate elements from individual grouping
+	 * sets (we must NOT change the number of sets though).
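+	 *
+	 * For example, GROUP BY a, CUBE(b,c) is the product of [(a)] and
+	 * [(), (b), (c), (b,c)], giving the four sets (a), (a,b), (a,c),
+	 * and (a,b,c), which are then sorted shortest first.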
+	 */
+
+	foreach(lc, (List *) linitial(expanded_groups))
+	{
+		result = lappend(result, list_union_int(NIL, (List *) lfirst(lc)));
+	}
+
+	for_each_cell(lc, lnext(list_head(expanded_groups)))
+	{
+		List	   *p = lfirst(lc);
+		List	   *new_result = NIL;
+		ListCell   *lc2;
+
+		foreach(lc2, result)
+		{
+			List	   *q = lfirst(lc2);
+			ListCell   *lc3;
+
+			foreach(lc3, p)
+			{
+				new_result = lappend(new_result,
+									 list_union_int(q, (List *) lfirst(lc3)));
+			}
+		}
+		result = new_result;
+	}
+
+	if (list_length(result) > 1)
+	{
+		int		result_len = list_length(result);
+		List  **buf = palloc(sizeof(List*) * result_len);
+		List  **ptr = buf;
+
+		foreach(lc, result)
+		{
+			*ptr++ = lfirst(lc);
+		}
+
+		qsort(buf, result_len, sizeof(List*), cmp_list_len_asc);
+
+		result = NIL;
+		ptr = buf;
+
+		while (result_len-- > 0)
+			result = lappend(result, *ptr++);
+
+		pfree(buf);
+	}
+
+	return result;
+}
+
+/*
  * get_aggregate_argtypes
  *	Identify the specific datatypes passed to an aggregate call.
  *
diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c
index 8d90b50..b965b64 100644
--- a/src/backend/parser/parse_clause.c
+++ b/src/backend/parser/parse_clause.c
@@ -36,6 +36,7 @@
 #include "utils/guc.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
+#include "miscadmin.h"
 
 
 /* Convenience macro for the most common makeNamespaceItem() case */
@@ -1664,40 +1665,182 @@ findTargetlistEntrySQL99(ParseState *pstate, Node *node, List **tlist,
 	return target_result;
 }
 
+
+/*
+ * Flatten out parenthesized sublists in grouping lists, and some cases
+ * of nested grouping sets.
+ *
+ * Inside a grouping set (ROLLUP, CUBE, or GROUPING SETS), we expect the
+ * content to be nested no more than 2 deep: i.e. ROLLUP((a,b),(c,d)) is
+ * ok, but ROLLUP((a,(b,c)),d) is flattened to ((a,b,c),d), which we then
+ * normalize to ((a,b,c),(d)).
+ *
+ * CUBE or ROLLUP can be nested inside GROUPING SETS (but not the reverse),
+ * and we leave that alone if we find it. But if we see GROUPING SETS inside
+ * GROUPING SETS, we can flatten and normalize as follows:
+ *   GROUPING SETS (a, (b,c), GROUPING SETS ((c,d),(e)), (f,g))
+ * becomes
+ *   GROUPING SETS ((a), (b,c), (c,d), (e), (f,g))
+ *
+ * This is per the spec's syntax transformations, but these are the only such
+ * transformations we do in parse analysis, so that queries retain the
+ * originally specified grouping set syntax for CUBE and ROLLUP as much as
+ * possible when deparsed. (Full expansion of the result into a list of
+ * grouping sets is left to the planner.)
+ *
+ * When we're done, the resulting list should contain only these possible
+ * elements:
+ *   - an expression
+ *   - a CUBE or ROLLUP with a list of expressions nested 2 deep
+ *   - a GROUPING SET containing any of:
+ *      - expression lists
+ *      - empty grouping sets
+ *      - CUBE or ROLLUP nodes with lists nested 2 deep
+ * The return is a new list, but doesn't deep-copy the old nodes except for
+ * GroupingSet nodes.
+ *
+ * As a side effect, flag whether the list has any GroupingSet nodes.
+ */
+
+static Node *
+flatten_grouping_sets(Node *expr, bool toplevel, bool *hasGroupingSets)
+{
+	/* just in case of pathological input */
+	check_stack_depth();
+
+	if (expr == (Node *) NIL)
+		return (Node *) NIL;
+
+	switch (expr->type)
+	{
+		case T_RowExpr:
+			{
+				RowExpr *r = (RowExpr *) expr;
+				if (r->row_format == COERCE_IMPLICIT_CAST)
+					return flatten_grouping_sets((Node *) r->args,
+												 false, NULL);
+			}
+			break;
+		case T_GroupingSet:
+			{
+				GroupingSet *gset = (GroupingSet *) expr;
+				ListCell   *l2;
+				List	   *result_set = NIL;
+
+				if (hasGroupingSets)
+					*hasGroupingSets = true;
+
+				/*
+				 * At the top level, we skip over all empty grouping sets; the
+				 * caller can supply the canonical GROUP BY () if nothing is left.
+				 */
+
+				if (toplevel && gset->kind == GROUPING_SET_EMPTY)
+					return (Node *) NIL;
+
+				foreach(l2, gset->content)
+				{
+					Node   *n2 = flatten_grouping_sets(lfirst(l2), false, NULL);
+
+					result_set = lappend(result_set, n2);
+				}
+
+				/*
+				 * At top level, keep the grouping set node; but if we're in a nested
+				 * grouping set, then we need to concat the flattened result into the
+				 * outer list if it's simply nested.
+				 */
+
+				if (toplevel || (gset->kind != GROUPING_SET_SETS))
+				{
+					return (Node *) makeGroupingSet(gset->kind, result_set,
+													gset->location);
+				}
+				else
+					return (Node *) result_set;
+			}
+		case T_List:
+			{
+				List	   *result = NIL;
+				ListCell   *l;
+
+				foreach(l, (List *) expr)
+				{
+					Node	   *n = flatten_grouping_sets(lfirst(l), toplevel,
+														  hasGroupingSets);
+
+					if (n != (Node *) NIL)
+					{
+						if (IsA(n, List))
+							result = list_concat(result, (List *) n);
+						else
+							result = lappend(result, n);
+					}
+				}
+
+				return (Node *) result;
+			}
+		default:
+			break;
+	}
+
+	return expr;
+}
+
 /*
- * transformGroupClause -
- *	  transform a GROUP BY clause
+ * Transform a single expression within a GROUP BY clause or grouping set.
+ *
+ * The expression is added to the targetlist if not already present, and to the
+ * flatresult list (which will become the groupClause) if not already present
+ * there.  The sortClause is consulted for operator and sort order hints.
  *
- * GROUP BY items will be added to the targetlist (as resjunk columns)
- * if not already present, so the targetlist must be passed by reference.
+ * Returns the ressortgroupref of the expression.
  *
- * This is also used for window PARTITION BY clauses (which act almost the
- * same, but are always interpreted per SQL99 rules).
+ * flatresult	reference to flat list of SortGroupClause nodes
+ * seen_local	bitmapset of sortgrouprefs already seen at the local level
+ * pstate		ParseState
+ * gexpr		node to transform
+ * targetlist	reference to TargetEntry list
+ * sortClause	ORDER BY clause (SortGroupClause nodes)
+ * exprKind		expression kind
+ * useSQL99		SQL99 rather than SQL92 syntax
+ * toplevel		false if within any grouping set
  */
-List *
-transformGroupClause(ParseState *pstate, List *grouplist,
-					 List **targetlist, List *sortClause,
-					 ParseExprKind exprKind, bool useSQL99)
+static Index
+transformGroupClauseExpr(List **flatresult, Bitmapset *seen_local,
+						 ParseState *pstate, Node *gexpr,
+						 List **targetlist, List *sortClause,
+						 ParseExprKind exprKind, bool useSQL99, bool toplevel)
 {
-	List	   *result = NIL;
-	ListCell   *gl;
+	TargetEntry *tle;
+	bool		found = false;
 
-	foreach(gl, grouplist)
+	if (useSQL99)
+		tle = findTargetlistEntrySQL99(pstate, gexpr,
+									   targetlist, exprKind);
+	else
+		tle = findTargetlistEntrySQL92(pstate, gexpr,
+									   targetlist, exprKind);
+
+	if (tle->ressortgroupref > 0)
 	{
-		Node	   *gexpr = (Node *) lfirst(gl);
-		TargetEntry *tle;
-		bool		found = false;
-
-		if (useSQL99)
-			tle = findTargetlistEntrySQL99(pstate, gexpr,
-										   targetlist, exprKind);
-		else
-			tle = findTargetlistEntrySQL92(pstate, gexpr,
-										   targetlist, exprKind);
-
-		/* Eliminate duplicates (GROUP BY x, x) */
-		if (targetIsInSortList(tle, InvalidOid, result))
-			continue;
+		ListCell   *sl;
+
+		/*
+		 * Eliminate duplicates (GROUP BY x, x) but only at local level.
+		 * (Duplicates in grouping sets can affect the number of returned
+		 * rows, so can't be dropped indiscriminately.)
+		 *
+		 * Since we don't care about anything except the sortgroupref,
+		 * we can use a bitmapset rather than scanning lists.
+		 */
+		if (bms_is_member(tle->ressortgroupref, seen_local))
+			return 0;
+
+		/*
+		 * If we're already in the flat clause list, we don't need
+		 * to consider adding ourselves again.
+		 */
+		found = targetIsInSortList(tle, InvalidOid, *flatresult);
+		if (found)
+			return tle->ressortgroupref;
 
 		/*
 		 * If the GROUP BY tlist entry also appears in ORDER BY, copy operator
@@ -1709,35 +1852,308 @@ transformGroupClause(ParseState *pstate, List *grouplist,
 		 * sort step, and it allows the user to choose the equality semantics
 		 * used by GROUP BY, should she be working with a datatype that has
 		 * more than one equality operator.
+		 *
+		 * If we're in a grouping set, though, we force our requested ordering
+		 * to be NULLS LAST, because if we have any hope of using a sorted agg
+		 * for the job, we're going to be tacking on generated NULL values
+		 * after the corresponding groups. If the user demands nulls first,
+		 * another sort step is going to be inevitable, but that's the
+		 * planner's problem.
 		 */
-		if (tle->ressortgroupref > 0)
+
+		foreach(sl, sortClause)
 		{
-			ListCell   *sl;
+			SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
 
-			foreach(sl, sortClause)
+			if (sc->tleSortGroupRef == tle->ressortgroupref)
 			{
-				SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
+				SortGroupClause *grpc = copyObject(sc);
+				if (!toplevel)
+					grpc->nulls_first = false;
+				*flatresult = lappend(*flatresult, grpc);
+				found = true;
+				break;
+			}
+		}
+	}
+
+	/*
+	 * If no match in ORDER BY, just add it to the result using default
+	 * sort/group semantics.
+	 */
+	if (!found)
+		*flatresult = addTargetToGroupList(pstate, tle,
+										   *flatresult, *targetlist,
+										   exprLocation(gexpr),
+										   true);
+
+	/*
+	 * _something_ must have assigned us a sortgroupref by now...
+	 */
+
+	return tle->ressortgroupref;
+}
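To illustrate why the code above forces NULLS LAST inside grouping sets: a sorted-input rollup emits its generated NULL subtotal rows after the groups they summarize, so the output stays in the requested order only if NULLs sort last. A toy one-pass rollup (illustrative only; it assumes non-NULL grouping keys, which is exactly what lets the generated NULLs be tacked on at the end):

```python
def rollup_pass(rows):
    """One pass over (key, value) rows sorted by key: yield one
    (key, sum) row per group, then a (None, grand_total) subtotal."""
    out, total = [], 0
    cur_key, cur_sum = None, 0
    for key, val in rows:
        if cur_key is not None and key != cur_key:
            out.append((cur_key, cur_sum))   # close previous group
            cur_sum = 0
        cur_key = key
        cur_sum += val
        total += val
    if cur_key is not None:
        out.append((cur_key, cur_sum))
    out.append((None, total))   # generated NULL group, emitted last
    return out
```

If the user instead demands NULLS FIRST, the subtotal rows land in the wrong place and an extra sort step becomes unavoidable, as the comment notes.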
+
+/*
+ * Transform a list of expressions within a GROUP BY clause or grouping set.
+ *
+ * The list of expressions belongs to a single clause within which duplicates
+ * can be safely eliminated.
+ *
+ * Returns an integer list of ressortgroupref values.
+ *
+ * flatresult	reference to flat list of SortGroupClause nodes
+ * pstate		ParseState
+ * list			nodes to transform
+ * targetlist	reference to TargetEntry list
+ * sortClause	ORDER BY clause (SortGroupClause nodes)
+ * exprKind		expression kind
+ * useSQL99		SQL99 rather than SQL92 syntax
+ * toplevel		false if within any grouping set
+ */
+static List *
+transformGroupClauseList(List **flatresult,
+						 ParseState *pstate, List *list,
+						 List **targetlist, List *sortClause,
+						 ParseExprKind exprKind, bool useSQL99, bool toplevel)
+{
+	Bitmapset  *seen_local = NULL;
+	List	   *result = NIL;
+	ListCell   *gl;
+
+	foreach(gl, list)
+	{
+		Node        *gexpr = (Node *) lfirst(gl);
+
+		Index ref = transformGroupClauseExpr(flatresult,
+											 seen_local,
+											 pstate,
+											 gexpr,
+											 targetlist,
+											 sortClause,
+											 exprKind,
+											 useSQL99,
+											 toplevel);
+		if (ref > 0)
+		{
+			seen_local = bms_add_member(seen_local, ref);
+			result = lappend_int(result, ref);
+		}
+	}
+
+	return result;
+}
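The local-level duplicate rule above is the key difference from the old transformGroupClause behavior: within a single clause, GROUP BY x, x still collapses to x, but the same ressortgroupref appearing in different grouping sets must be kept, since it changes how many rows come out. A minimal sketch (the Python set stands in for the Bitmapset):

```python
def dedup_local(refs):
    """Drop duplicate sortgroup references within ONE clause,
    preserving first-seen order."""
    seen, out = set(), []
    for ref in refs:
        if ref not in seen:
            seen.add(ref)
            out.append(ref)
    return out

# Each grouping set gets its own fresh "seen" set, so duplicates
# across sets survive:
def dedup_per_set(grouping_sets):
    return [dedup_local(s) for s in grouping_sets]
```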
+
+/*
+ * Transform a grouping set and (recursively) its content.
+ *
+ * The grouping set might be a GROUPING SETS node with other grouping sets
+ * inside it, but SETS within SETS have already been flattened out before
+ * reaching here.
+ *
+ * Returns the transformed node, which now contains SIMPLE nodes with lists
+ * of ressortgrouprefs rather than expressions.
+ *
+ * flatresult	reference to flat list of SortGroupClause nodes
+ * pstate		ParseState
+ * gset			grouping set to transform
+ * targetlist	reference to TargetEntry list
+ * sortClause	ORDER BY clause (SortGroupClause nodes)
+ * exprKind		expression kind
+ * useSQL99		SQL99 rather than SQL92 syntax
+ * toplevel		false if within any grouping set
+ */
+static Node *
+transformGroupingSet(List **flatresult,
+					 ParseState *pstate, GroupingSet *gset,
+					 List **targetlist, List *sortClause,
+					 ParseExprKind exprKind, bool useSQL99, bool toplevel)
+{
+	ListCell   *gl;
+	List	   *content = NIL;
+
+	Assert(toplevel || gset->kind != GROUPING_SET_SETS);
+
+	foreach(gl, gset->content)
+	{
+		Node   *n = lfirst(gl);
+
+		if (IsA(n, List))
+		{
+			List *l = transformGroupClauseList(flatresult,
+											   pstate, (List *) n,
+											   targetlist, sortClause,
+											   exprKind, useSQL99, false);
+
+			content = lappend(content, makeGroupingSet(GROUPING_SET_SIMPLE,
+													   l,
+													   exprLocation(n)));
+		}
+		else if (IsA(n, GroupingSet))
+		{
+			GroupingSet *gset2 = (GroupingSet *) lfirst(gl);
+
+			content = lappend(content, transformGroupingSet(flatresult,
+															pstate, gset2,
+															targetlist, sortClause,
+															exprKind, useSQL99, false));
+		}
+		else
+		{
+			Index ref = transformGroupClauseExpr(flatresult,
+												 NULL,
+												 pstate,
+												 n,
+												 targetlist,
+												 sortClause,
+												 exprKind,
+												 useSQL99,
+												 false);
 
-				if (sc->tleSortGroupRef == tle->ressortgroupref)
-				{
-					result = lappend(result, copyObject(sc));
-					found = true;
+			content = lappend(content, makeGroupingSet(GROUPING_SET_SIMPLE,
+													   list_make1_int(ref),
+													   exprLocation(n)));
+		}
+	}
+
+	/* Arbitrarily cap the size of CUBE, which has exponential growth */
+	if (gset->kind == GROUPING_SET_CUBE)
+	{
+		if (list_length(content) > 12)
+			ereport(ERROR,
+					(errcode(ERRCODE_TOO_MANY_COLUMNS),
+					 errmsg("CUBE is limited to 12 elements"),
+					 parser_errposition(pstate, gset->location)));
+	}
+
+	return (Node *) makeGroupingSet(gset->kind, content, gset->location);
+}
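The cap above exists because CUBE(e1, ..., en) denotes every subset of its n elements, i.e. 2^n grouping sets; 12 elements already means 4096 sets. A quick sketch of the expansion:

```python
from itertools import chain, combinations

def cube_sets(cols):
    """All subsets of cols, largest first -- the grouping sets that
    CUBE(cols...) denotes, including the empty grand-total set."""
    return list(chain.from_iterable(
        combinations(cols, r) for r in range(len(cols), -1, -1)))
```

With the 12-element limit, len(cube_sets(...)) tops out at 2**12 = 4096.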
+
+
+/*
+ * transformGroupClause -
+ *	  transform a GROUP BY clause
+ *
+ * GROUP BY items will be added to the targetlist (as resjunk columns)
+ * if not already present, so the targetlist must be passed by reference.
+ *
+ * This is also used for window PARTITION BY clauses (which act almost the
+ * same, but are always interpreted per SQL99 rules).
+ *
+ * Grouping sets make this a lot more complex than it was. Our goal here is
+ * twofold: we make a flat list of SortGroupClause nodes referencing each
+ * distinct expression used for grouping, with those expressions added to the
+ * targetlist if needed. At the same time, we build the groupingSets tree,
+ * which stores only ressortgrouprefs as integer lists inside GroupingSet nodes
+ * (possibly nested, but limited in depth: a GROUPING_SET_SETS node can contain
+ * nested SIMPLE, CUBE or ROLLUP nodes, but not more sets - we flatten that
+ * out; while CUBE and ROLLUP can contain only SIMPLE nodes).
+ *
+ * We skip much of the hard work if there are no grouping sets.
+ *
+ * One subtlety is that the groupClause list can end up empty while the
+ * groupingSets list is not; this happens if there are only empty grouping
+ * sets, or an explicit GROUP BY (). This has the same effect as specifying
+ * aggregates or a HAVING clause with no GROUP BY; the output is one row per
+ * grouping set even if the input is empty.
+ *
+ * Returns the transformed (flat) groupClause.
+ *
+ * pstate		ParseState
+ * grouplist	clause to transform
+ * groupingSets	reference to list to contain the grouping set tree
+ * targetlist	reference to TargetEntry list
+ * sortClause	ORDER BY clause (SortGroupClause nodes)
+ * exprKind		expression kind
+ * useSQL99		SQL99 rather than SQL92 syntax
+ */
+List *
+transformGroupClause(ParseState *pstate, List *grouplist, List **groupingSets,
+					 List **targetlist, List *sortClause,
+					 ParseExprKind exprKind, bool useSQL99)
+{
+	List	   *result = NIL;
+	List	   *flat_grouplist;
+	List	   *gsets = NIL;
+	ListCell   *gl;
+	bool        hasGroupingSets = false;
+	Bitmapset  *seen_local = NULL;
+
+	/*
+	 * Recursively flatten implicit RowExprs. (Technically this is only
+	 * needed for GROUP BY, per the syntax rules for grouping sets, but
+	 * we do it anyway.)
+	 */
+	flat_grouplist = (List *) flatten_grouping_sets((Node *) grouplist,
+													true,
+													&hasGroupingSets);
+
+	/*
+	 * If the list is now empty, but hasGroupingSets is true, it's because
+	 * we elided redundant empty grouping sets. Restore a single empty
+	 * grouping set to leave a canonical form: GROUP BY ()
+	 */
+
+	if (flat_grouplist == NIL && hasGroupingSets)
+	{
+		flat_grouplist = list_make1(makeGroupingSet(GROUPING_SET_EMPTY,
+													NIL,
+													exprLocation((Node *) grouplist)));
+	}
+
+	foreach(gl, flat_grouplist)
+	{
+		Node        *gexpr = (Node *) lfirst(gl);
+
+		if (IsA(gexpr, GroupingSet))
+		{
+			GroupingSet *gset = (GroupingSet *) gexpr;
+
+			switch (gset->kind)
+			{
+				case GROUPING_SET_EMPTY:
+					gsets = lappend(gsets, gset);
+					break;
+				case GROUPING_SET_SIMPLE:
+					/* can't happen */
+					Assert(false);
+					break;
+				case GROUPING_SET_SETS:
+				case GROUPING_SET_CUBE:
+				case GROUPING_SET_ROLLUP:
+					gsets = lappend(gsets,
+									transformGroupingSet(&result,
+														 pstate, gset,
+														 targetlist, sortClause,
+														 exprKind, useSQL99, true));
 					break;
-				}
 			}
 		}
+		else
+		{
+			Index ref = transformGroupClauseExpr(&result, seen_local,
+												 pstate, gexpr,
+												 targetlist, sortClause,
+												 exprKind, useSQL99, true);
 
-		/*
-		 * If no match in ORDER BY, just add it to the result using default
-		 * sort/group semantics.
-		 */
-		if (!found)
-			result = addTargetToGroupList(pstate, tle,
-										  result, *targetlist,
-										  exprLocation(gexpr),
-										  true);
+			if (ref > 0)
+			{
+				seen_local = bms_add_member(seen_local, ref);
+				if (hasGroupingSets)
+					gsets = lappend(gsets,
+									makeGroupingSet(GROUPING_SET_SIMPLE,
+													list_make1_int(ref),
+													exprLocation(gexpr)));
+			}
+		}
 	}
 
+	/* parser should prevent this */
+	Assert(gsets == NIL || groupingSets != NULL);
+
+	if (groupingSets)
+		*groupingSets = gsets;
+
 	return result;
 }
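The "one row per grouping set even if the input is empty" point from the header comment can be made concrete with a toy row-count model (illustrative only, ignoring NULL-key collisions and actual aggregation):

```python
def count_output_rows(input_rows, grouping_sets):
    """Rows produced: one per distinct key combination for each
    non-empty grouping set, plus exactly one row per empty set."""
    n = 0
    for gset in grouping_sets:
        if not gset:
            n += 1    # GROUP BY (): aggregate over everything, one row
        else:
            n += len({tuple(r[c] for c in gset) for r in input_rows})
    return n
```

So GROUP BY () over an empty table still yields one row, just like an aggregate query with no GROUP BY at all.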
 
@@ -1842,6 +2258,7 @@ transformWindowDefinitions(ParseState *pstate,
 										  true /* force SQL99 rules */ );
 		partitionClause = transformGroupClause(pstate,
 											   windef->partitionClause,
+											   NULL,
 											   targetlist,
 											   orderClause,
 											   EXPR_KIND_WINDOW_PARTITION,
diff --git a/src/backend/parser/parse_expr.c b/src/backend/parser/parse_expr.c
index 7829bcb..70665c2 100644
--- a/src/backend/parser/parse_expr.c
+++ b/src/backend/parser/parse_expr.c
@@ -32,6 +32,7 @@
 #include "parser/parse_relation.h"
 #include "parser/parse_target.h"
 #include "parser/parse_type.h"
+#include "parser/parse_agg.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
 #include "utils/xml.h"
@@ -214,6 +215,10 @@ transformExprRecurse(ParseState *pstate, Node *expr)
 			result = transformMultiAssignRef(pstate, (MultiAssignRef *) expr);
 			break;
 
+		case T_GroupingFunc:
+			result = transformGroupingFunc(pstate, (GroupingFunc *) expr);
+			break;
+
 		case T_NamedArgExpr:
 			{
 				NamedArgExpr *na = (NamedArgExpr *) expr;
diff --git a/src/backend/parser/parse_target.c b/src/backend/parser/parse_target.c
index 3724330..7125b76 100644
--- a/src/backend/parser/parse_target.c
+++ b/src/backend/parser/parse_target.c
@@ -1675,6 +1675,10 @@ FigureColnameInternal(Node *node, char **name)
 			break;
 		case T_CollateClause:
 			return FigureColnameInternal(((CollateClause *) node)->arg, name);
+		case T_GroupingFunc:
+			/* make GROUPING() act like a regular function */
+			*name = "grouping";
+			return 2;
 		case T_SubLink:
 			switch (((SubLink *) node)->subLinkType)
 			{
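Since GROUPING() now acts like a regular function for column-naming purposes, it's worth recalling its spec semantics: it returns an integer with one bit per argument, most significant bit first, where a bit is 1 when that expression is not part of the current grouping set. A hypothetical helper mirroring that definition:

```python
def grouping_value(args, current_set):
    """Spec-style GROUPING(): one bit per argument, leftmost argument
    in the most significant position; 1 = aggregated over, 0 = grouped."""
    val = 0
    for a in args:
        val = (val << 1) | (0 if a in current_set else 1)
    return val
```

For GROUPING(a, b): 0 in the (a, b) set, 1 in the (a) set, 3 in the grand-total set.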
diff --git a/src/backend/rewrite/rewriteHandler.c b/src/backend/rewrite/rewriteHandler.c
index 9d2c280..0474c9c 100644
--- a/src/backend/rewrite/rewriteHandler.c
+++ b/src/backend/rewrite/rewriteHandler.c
@@ -2109,7 +2109,7 @@ view_query_is_auto_updatable(Query *viewquery, bool check_cols)
 	if (viewquery->distinctClause != NIL)
 		return gettext_noop("Views containing DISTINCT are not automatically updatable.");
 
-	if (viewquery->groupClause != NIL)
+	if (viewquery->groupClause != NIL || viewquery->groupingSets)
 		return gettext_noop("Views containing GROUP BY are not automatically updatable.");
 
 	if (viewquery->havingQual != NULL)
diff --git a/src/backend/rewrite/rewriteManip.c b/src/backend/rewrite/rewriteManip.c
index df45708..8309010 100644
--- a/src/backend/rewrite/rewriteManip.c
+++ b/src/backend/rewrite/rewriteManip.c
@@ -92,6 +92,12 @@ contain_aggs_of_level_walker(Node *node,
 			return true;		/* abort the tree traversal and return true */
 		/* else fall through to examine argument */
 	}
+	if (IsA(node, GroupingFunc))
+	{
+		if (((GroupingFunc *) node)->agglevelsup == context->sublevels_up)
+			return true;
+		/* else fall through to examine argument */
+	}
 	if (IsA(node, Query))
 	{
 		/* Recurse into subselects */
@@ -157,6 +163,15 @@ locate_agg_of_level_walker(Node *node,
 		}
 		/* else fall through to examine argument */
 	}
+	if (IsA(node, GroupingFunc))
+	{
+		if (((GroupingFunc *) node)->agglevelsup == context->sublevels_up &&
+			((GroupingFunc *) node)->location >= 0)
+		{
+			context->agg_location = ((GroupingFunc *) node)->location;
+			return true;		/* abort the tree traversal and return true */
+		}
+	}
 	if (IsA(node, Query))
 	{
 		/* Recurse into subselects */
@@ -703,6 +718,14 @@ IncrementVarSublevelsUp_walker(Node *node,
 			agg->agglevelsup += context->delta_sublevels_up;
 		/* fall through to recurse into argument */
 	}
+	if (IsA(node, GroupingFunc))
+	{
+		GroupingFunc   *grp = (GroupingFunc *) node;
+
+		if (grp->agglevelsup >= context->min_sublevels_up)
+			grp->agglevelsup += context->delta_sublevels_up;
+		/* fall through to recurse into argument */
+	}
 	if (IsA(node, PlaceHolderVar))
 	{
 		PlaceHolderVar *phv = (PlaceHolderVar *) node;
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 2fa30be..e03b7c6 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -42,6 +42,7 @@
 #include "nodes/nodeFuncs.h"
 #include "optimizer/tlist.h"
 #include "parser/keywords.h"
+#include "parser/parse_node.h"
 #include "parser/parse_agg.h"
 #include "parser/parse_func.h"
 #include "parser/parse_oper.h"
@@ -103,6 +104,8 @@ typedef struct
 	int			wrapColumn;		/* max line length, or -1 for no limit */
 	int			indentLevel;	/* current indent level for prettyprint */
 	bool		varprefix;		/* TRUE to print prefixes on Vars */
+	ParseExprKind special_exprkind;	/* set only for exprkinds needing */
+									/* special handling */
 } deparse_context;
 
 /*
@@ -361,9 +364,11 @@ static void get_target_list(List *targetList, deparse_context *context,
 static void get_setop_query(Node *setOp, Query *query,
 				deparse_context *context,
 				TupleDesc resultDesc);
-static Node *get_rule_sortgroupclause(SortGroupClause *srt, List *tlist,
+static Node *get_rule_sortgroupclause(Index ref, List *tlist,
 						 bool force_colno,
 						 deparse_context *context);
+static void get_rule_groupingset(GroupingSet *gset, List *targetlist,
+								 bool omit_parens, deparse_context *context);
 static void get_rule_orderby(List *orderList, List *targetList,
 				 bool force_colno, deparse_context *context);
 static void get_rule_windowclause(Query *query, deparse_context *context);
@@ -411,8 +416,9 @@ static void printSubscripts(ArrayRef *aref, deparse_context *context);
 static char *get_relation_name(Oid relid);
 static char *generate_relation_name(Oid relid, List *namespaces);
 static char *generate_function_name(Oid funcid, int nargs,
-					   List *argnames, Oid *argtypes,
-					   bool has_variadic, bool *use_variadic_p);
+							List *argnames, Oid *argtypes,
+							bool has_variadic, bool *use_variadic_p,
+							ParseExprKind special_exprkind);
 static char *generate_operator_name(Oid operid, Oid arg1, Oid arg2);
 static text *string_to_text(char *str);
 static char *flatten_reloptions(Oid relid);
@@ -870,6 +876,7 @@ pg_get_triggerdef_worker(Oid trigid, bool pretty)
 		context.prettyFlags = pretty ? PRETTYFLAG_PAREN | PRETTYFLAG_INDENT : PRETTYFLAG_INDENT;
 		context.wrapColumn = WRAP_COLUMN_DEFAULT;
 		context.indentLevel = PRETTYINDENT_STD;
+		context.special_exprkind = EXPR_KIND_NONE;
 
 		get_rule_expr(qual, &context, false);
 
@@ -879,7 +886,7 @@ pg_get_triggerdef_worker(Oid trigid, bool pretty)
 	appendStringInfo(&buf, "EXECUTE PROCEDURE %s(",
 					 generate_function_name(trigrec->tgfoid, 0,
 											NIL, NULL,
-											false, NULL));
+											false, NULL, EXPR_KIND_NONE));
 
 	if (trigrec->tgnargs > 0)
 	{
@@ -2476,6 +2483,7 @@ deparse_expression_pretty(Node *expr, List *dpcontext,
 	context.prettyFlags = prettyFlags;
 	context.wrapColumn = WRAP_COLUMN_DEFAULT;
 	context.indentLevel = startIndent;
+	context.special_exprkind = EXPR_KIND_NONE;
 
 	get_rule_expr(expr, &context, showimplicit);
 
@@ -4073,6 +4081,7 @@ make_ruledef(StringInfo buf, HeapTuple ruletup, TupleDesc rulettc,
 		context.prettyFlags = prettyFlags;
 		context.wrapColumn = WRAP_COLUMN_DEFAULT;
 		context.indentLevel = PRETTYINDENT_STD;
+		context.special_exprkind = EXPR_KIND_NONE;
 
 		set_deparse_for_query(&dpns, query, NIL);
 
@@ -4224,6 +4233,7 @@ get_query_def(Query *query, StringInfo buf, List *parentnamespace,
 	context.prettyFlags = prettyFlags;
 	context.wrapColumn = wrapColumn;
 	context.indentLevel = startIndent;
+	context.special_exprkind = EXPR_KIND_NONE;
 
 	set_deparse_for_query(&dpns, query, parentnamespace);
 
@@ -4589,7 +4599,7 @@ get_basic_select_query(Query *query, deparse_context *context,
 				SortGroupClause *srt = (SortGroupClause *) lfirst(l);
 
 				appendStringInfoString(buf, sep);
-				get_rule_sortgroupclause(srt, query->targetList,
+				get_rule_sortgroupclause(srt->tleSortGroupRef, query->targetList,
 										 false, context);
 				sep = ", ";
 			}
@@ -4614,20 +4624,43 @@ get_basic_select_query(Query *query, deparse_context *context,
 	}
 
 	/* Add the GROUP BY clause if given */
-	if (query->groupClause != NULL)
+	if (query->groupClause != NULL || query->groupingSets != NULL)
 	{
+		ParseExprKind	save_exprkind;
+
 		appendContextKeyword(context, " GROUP BY ",
 							 -PRETTYINDENT_STD, PRETTYINDENT_STD, 1);
-		sep = "";
-		foreach(l, query->groupClause)
+
+		save_exprkind = context->special_exprkind;
+		context->special_exprkind = EXPR_KIND_GROUP_BY;
+
+		if (query->groupingSets == NIL)
+		{
+			sep = "";
+			foreach(l, query->groupClause)
+			{
+				SortGroupClause *grp = (SortGroupClause *) lfirst(l);
+
+				appendStringInfoString(buf, sep);
+				get_rule_sortgroupclause(grp->tleSortGroupRef, query->targetList,
+										 false, context);
+				sep = ", ";
+			}
+		}
+		else
 		{
-			SortGroupClause *grp = (SortGroupClause *) lfirst(l);
+			sep = "";
+			foreach(l, query->groupingSets)
+			{
+				GroupingSet *grp = lfirst(l);
 
-			appendStringInfoString(buf, sep);
-			get_rule_sortgroupclause(grp, query->targetList,
-									 false, context);
-			sep = ", ";
+				appendStringInfoString(buf, sep);
+				get_rule_groupingset(grp, query->targetList, true, context);
+				sep = ", ";
+			}
 		}
+
+		context->special_exprkind = save_exprkind;
 	}
 
 	/* Add the HAVING clause if given */
@@ -4694,7 +4727,7 @@ get_target_list(List *targetList, deparse_context *context,
 		 * different from a whole-row Var).  We need to call get_variable
 		 * directly so that we can tell it to do the right thing.
 		 */
-		if (tle->expr && IsA(tle->expr, Var))
+		if (tle->expr && (IsA(tle->expr, Var) || IsA(tle->expr, GroupedVar)))
 		{
 			attname = get_variable((Var *) tle->expr, 0, true, context);
 		}
@@ -4913,23 +4946,24 @@ get_setop_query(Node *setOp, Query *query, deparse_context *context,
  * Also returns the expression tree, so caller need not find it again.
  */
 static Node *
-get_rule_sortgroupclause(SortGroupClause *srt, List *tlist, bool force_colno,
+get_rule_sortgroupclause(Index ref, List *tlist, bool force_colno,
 						 deparse_context *context)
 {
 	StringInfo	buf = context->buf;
 	TargetEntry *tle;
 	Node	   *expr;
 
-	tle = get_sortgroupclause_tle(srt, tlist);
+	tle = get_sortgroupref_tle(ref, tlist);
 	expr = (Node *) tle->expr;
 
 	/*
-	 * Use column-number form if requested by caller.  Otherwise, if
-	 * expression is a constant, force it to be dumped with an explicit cast
-	 * as decoration --- this is because a simple integer constant is
-	 * ambiguous (and will be misinterpreted by findTargetlistEntry()) if we
-	 * dump it without any decoration.  Otherwise, just dump the expression
-	 * normally.
+	 * Use column-number form if requested by caller.  Otherwise, if expression
+	 * is a constant, force it to be dumped with an explicit cast as decoration
+	 * --- this is because a simple integer constant is ambiguous (and will be
+	 * misinterpreted by findTargetlistEntry()) if we dump it without any
+	 * decoration.  If it's anything more complex than a simple Var, then force
+	 * extra parens around it, to ensure it can't be misinterpreted as a cube()
+	 * or rollup() construct.
 	 */
 	if (force_colno)
 	{
@@ -4938,13 +4972,92 @@ get_rule_sortgroupclause(SortGroupClause *srt, List *tlist, bool force_colno,
 	}
 	else if (expr && IsA(expr, Const))
 		get_const_expr((Const *) expr, context, 1);
+	else if (!expr || IsA(expr, Var))
+		get_rule_expr(expr, context, true);
 	else
+	{
+		/*
+		 * We must force parens for function-like expressions even if
+		 * PRETTY_PAREN is off, since those are the ones in danger of
+		 * misparsing. For other expressions we need to force them
+		 * only if PRETTY_PAREN is on, since otherwise the expression
+		 * will output them itself. (We can't skip the parens.)
+		 */
+		bool	need_paren = (PRETTY_PAREN(context)
+							  || IsA(expr, FuncExpr)
+							  || IsA(expr, Aggref)
+							  || IsA(expr, WindowFunc));
+		if (need_paren)
+			appendStringInfoString(context->buf, "(");
 		get_rule_expr(expr, context, true);
+		if (need_paren)
+			appendStringInfoString(context->buf, ")");
+	}
 
 	return expr;
 }
 
 /*
+ * Display a GroupingSet
+ */
+static void
+get_rule_groupingset(GroupingSet *gset, List *targetlist,
+					 bool omit_parens, deparse_context *context)
+{
+	ListCell   *l;
+	StringInfo	buf = context->buf;
+	bool		omit_child_parens = true;
+	char	   *sep = "";
+
+	switch (gset->kind)
+	{
+		case GROUPING_SET_EMPTY:
+			appendStringInfoString(buf, "()");
+			return;
+
+		case GROUPING_SET_SIMPLE:
+			{
+				if (!omit_parens || list_length(gset->content) != 1)
+					appendStringInfoString(buf, "(");
+
+				foreach(l, gset->content)
+				{
+					Index ref = lfirst_int(l);
+
+					appendStringInfoString(buf, sep);
+					get_rule_sortgroupclause(ref, targetlist,
+											 false, context);
+					sep = ", ";
+				}
+
+				if (!omit_parens || list_length(gset->content) != 1)
+					appendStringInfoString(buf, ")");
+			}
+			return;
+
+		case GROUPING_SET_ROLLUP:
+			appendStringInfoString(buf, "ROLLUP(");
+			break;
+		case GROUPING_SET_CUBE:
+			appendStringInfoString(buf, "CUBE(");
+			break;
+		case GROUPING_SET_SETS:
+			appendStringInfoString(buf, "GROUPING SETS (");
+			omit_child_parens = false;
+			break;
+	}
+
+	foreach(l, gset->content)
+	{
+		appendStringInfoString(buf, sep);
+		get_rule_groupingset(lfirst(l), targetlist, omit_child_parens, context);
+		sep = ", ";
+	}
+
+	appendStringInfoString(buf, ")");
+}
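The parenthesization rules in get_rule_groupingset can be summarized with a small sketch (tuple representation is illustrative, not the C nodes): EMPTY prints as "()", a SIMPLE set drops its parens only when it is a single column at a site that permits it, and ROLLUP/CUBE allow that for their children while GROUPING SETS does not:

```python
def deparse(gset, omit_parens=False):
    """Sketch of the display rules for a (kind, content) grouping set."""
    kind, content = gset
    if kind == "EMPTY":
        return "()"
    if kind == "SIMPLE":
        body = ", ".join(content)
        if omit_parens and len(content) == 1:
            return body                     # lone column: no parens
        return "(" + body + ")"
    kids = ", ".join(
        deparse(c, omit_parens=(kind != "SETS")) for c in content)
    label = {"ROLLUP": "ROLLUP(", "CUBE": "CUBE(",
             "SETS": "GROUPING SETS ("}[kind]
    return label + kids + ")"
```

So ROLLUP over sets (a) and (b, c) prints as ROLLUP(a, (b, c)), while GROUPING SETS always parenthesizes each child.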
+
+/*
  * Display an ORDER BY list.
  */
 static void
@@ -4964,7 +5077,7 @@ get_rule_orderby(List *orderList, List *targetList,
 		TypeCacheEntry *typentry;
 
 		appendStringInfoString(buf, sep);
-		sortexpr = get_rule_sortgroupclause(srt, targetList,
+		sortexpr = get_rule_sortgroupclause(srt->tleSortGroupRef, targetList,
 											force_colno, context);
 		sortcoltype = exprType(sortexpr);
 		/* See whether operator is default < or > for datatype */
@@ -5064,7 +5177,7 @@ get_rule_windowspec(WindowClause *wc, List *targetList,
 			SortGroupClause *grp = (SortGroupClause *) lfirst(l);
 
 			appendStringInfoString(buf, sep);
-			get_rule_sortgroupclause(grp, targetList,
+			get_rule_sortgroupclause(grp->tleSortGroupRef, targetList,
 									 false, context);
 			sep = ", ";
 		}
@@ -5613,10 +5726,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		pop_child_plan(dpns, &save_dpns);
@@ -5638,10 +5751,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		pop_child_plan(dpns, &save_dpns);
@@ -5661,10 +5774,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		return NULL;
@@ -5704,10 +5817,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		pop_child_plan(dpns, &save_dpns);
@@ -6738,6 +6851,10 @@ get_rule_expr(Node *node, deparse_context *context,
 			(void) get_variable((Var *) node, 0, false, context);
 			break;
 
+		case T_GroupedVar:
+			(void) get_variable((Var *) node, 0, false, context);
+			break;
+
 		case T_Const:
 			get_const_expr((Const *) node, context, 0);
 			break;
@@ -6750,6 +6867,16 @@ get_rule_expr(Node *node, deparse_context *context,
 			get_agg_expr((Aggref *) node, context);
 			break;
 
+		case T_GroupingFunc:
+			{
+				GroupingFunc *gexpr = (GroupingFunc *) node;
+
+				appendStringInfoString(buf, "GROUPING(");
+				get_rule_expr((Node *) gexpr->args, context, true);
+				appendStringInfoChar(buf, ')');
+			}
+			break;
+
 		case T_WindowFunc:
 			get_windowfunc_expr((WindowFunc *) node, context);
 			break;
@@ -7788,7 +7915,8 @@ get_func_expr(FuncExpr *expr, deparse_context *context,
 					 generate_function_name(funcoid, nargs,
 											argnames, argtypes,
 											expr->funcvariadic,
-											&use_variadic));
+											&use_variadic,
+											context->special_exprkind));
 	nargs = 0;
 	foreach(l, expr->args)
 	{
@@ -7820,7 +7948,8 @@ get_agg_expr(Aggref *aggref, deparse_context *context)
 					 generate_function_name(aggref->aggfnoid, nargs,
 											NIL, argtypes,
 											aggref->aggvariadic,
-											&use_variadic),
+											&use_variadic,
+											context->special_exprkind),
 					 (aggref->aggdistinct != NIL) ? "DISTINCT " : "");
 
 	if (AGGKIND_IS_ORDERED_SET(aggref->aggkind))
@@ -7910,7 +8039,8 @@ get_windowfunc_expr(WindowFunc *wfunc, deparse_context *context)
 	appendStringInfo(buf, "%s(",
 					 generate_function_name(wfunc->winfnoid, nargs,
 											argnames, argtypes,
-											false, NULL));
+											false, NULL,
+											context->special_exprkind));
 	/* winstar can be set only in zero-argument aggregates */
 	if (wfunc->winstar)
 		appendStringInfoChar(buf, '*');
@@ -9147,7 +9277,8 @@ generate_relation_name(Oid relid, List *namespaces)
  */
 static char *
 generate_function_name(Oid funcid, int nargs, List *argnames, Oid *argtypes,
-					   bool has_variadic, bool *use_variadic_p)
+					   bool has_variadic, bool *use_variadic_p,
+					   ParseExprKind special_exprkind)
 {
 	char	   *result;
 	HeapTuple	proctup;
@@ -9162,6 +9293,7 @@ generate_function_name(Oid funcid, int nargs, List *argnames, Oid *argtypes,
 	int			p_nvargs;
 	Oid			p_vatype;
 	Oid		   *p_true_typeids;
+	bool		force_qualify = false;
 
 	proctup = SearchSysCache1(PROCOID, ObjectIdGetDatum(funcid));
 	if (!HeapTupleIsValid(proctup))
@@ -9170,6 +9302,17 @@ generate_function_name(Oid funcid, int nargs, List *argnames, Oid *argtypes,
 	proname = NameStr(procform->proname);
 
 	/*
+	 * Thanks to parser hacks to avoid needing to reserve CUBE, we
+	 * need to force qualification in some special cases.
+	 */
+
+	if (special_exprkind == EXPR_KIND_GROUP_BY)
+	{
+		if (strcmp(proname, "cube") == 0 || strcmp(proname, "rollup") == 0)
+			force_qualify = true;
+	}
+
+	/*
 	 * Determine whether VARIADIC should be printed.  We must do this first
 	 * since it affects the lookup rules in func_get_detail().
 	 *
@@ -9200,14 +9343,23 @@ generate_function_name(Oid funcid, int nargs, List *argnames, Oid *argtypes,
 	/*
 	 * The idea here is to schema-qualify only if the parser would fail to
 	 * resolve the correct function given the unqualified func name with the
-	 * specified argtypes and VARIADIC flag.
+	 * specified argtypes and VARIADIC flag.  But if we already decided to
+	 * force qualification, then we can skip the lookup and pretend we didn't
+	 * find it.
 	 */
-	p_result = func_get_detail(list_make1(makeString(proname)),
-							   NIL, argnames, nargs, argtypes,
-							   !use_variadic, true,
-							   &p_funcid, &p_rettype,
-							   &p_retset, &p_nvargs, &p_vatype,
-							   &p_true_typeids, NULL);
+	if (!force_qualify)
+		p_result = func_get_detail(list_make1(makeString(proname)),
+								   NIL, argnames, nargs, argtypes,
+								   !use_variadic, true,
+								   &p_funcid, &p_rettype,
+								   &p_retset, &p_nvargs, &p_vatype,
+								   &p_true_typeids, NULL);
+	else
+	{
+		p_result = FUNCDETAIL_NOTFOUND;
+		p_funcid = InvalidOid;
+	}
+
 	if ((p_result == FUNCDETAIL_NORMAL ||
 		 p_result == FUNCDETAIL_AGGREGATE ||
 		 p_result == FUNCDETAIL_WINDOWFUNC) &&
diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index 4dd3f9f..83030cb 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -3158,6 +3158,8 @@ add_unique_group_var(PlannerInfo *root, List *varinfos,
  *	groupExprs - list of expressions being grouped by
  *	input_rows - number of rows estimated to arrive at the group/unique
  *		filter step
+ *	pgset - NULL, or a List** pointing to a grouping set to filter the
+ *		groupExprs against
  *
  * Given the lack of any cross-correlation statistics in the system, it's
  * impossible to do anything really trustworthy with GROUP BY conditions
@@ -3205,11 +3207,13 @@ add_unique_group_var(PlannerInfo *root, List *varinfos,
  * but we don't have the info to do better).
  */
 double
-estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
+estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows,
+					List **pgset)
 {
 	List	   *varinfos = NIL;
 	double		numdistinct;
 	ListCell   *l;
+	int			i;
 
 	/*
 	 * We don't ever want to return an estimate of zero groups, as that tends
@@ -3224,7 +3228,7 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 	 * for normal cases with GROUP BY or DISTINCT, but it is possible for
 	 * corner cases with set operations.)
 	 */
-	if (groupExprs == NIL)
+	if (groupExprs == NIL || (pgset && list_length(*pgset) < 1))
 		return 1.0;
 
 	/*
@@ -3236,6 +3240,7 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 	 */
 	numdistinct = 1.0;
 
+	i = 0;
 	foreach(l, groupExprs)
 	{
 		Node	   *groupexpr = (Node *) lfirst(l);
@@ -3243,6 +3248,10 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 		List	   *varshere;
 		ListCell   *l2;
 
+		/* is expression in this grouping set? */
+		if (pgset && !list_member_int(*pgset, i++))
+			continue;
+
 		/* Short-circuit for expressions returning boolean */
 		if (exprType(groupexpr) == BOOLOID)
 		{
diff --git a/src/include/commands/explain.h b/src/include/commands/explain.h
index c9f7223..4df44d0 100644
--- a/src/include/commands/explain.h
+++ b/src/include/commands/explain.h
@@ -83,6 +83,8 @@ extern void ExplainSeparatePlans(ExplainState *es);
 
 extern void ExplainPropertyList(const char *qlabel, List *data,
 					ExplainState *es);
+extern void ExplainPropertyListNested(const char *qlabel, List *data,
+					ExplainState *es);
 extern void ExplainPropertyText(const char *qlabel, const char *value,
 					ExplainState *es);
 extern void ExplainPropertyInteger(const char *qlabel, int value,
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index 59b17f3..4dff20b 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -130,6 +130,8 @@ typedef struct ExprContext
 	Datum	   *ecxt_aggvalues; /* precomputed values for aggs/windowfuncs */
 	bool	   *ecxt_aggnulls;	/* null flags for aggs/windowfuncs */
 
+	Bitmapset  *grouped_cols;   /* which columns exist in current grouping set */
+
 	/* Value to substitute for CaseTestExpr nodes in expression */
 	Datum		caseValue_datum;
 	bool		caseValue_isNull;
@@ -407,6 +409,11 @@ typedef struct EState
 	HeapTuple  *es_epqTuple;	/* array of EPQ substitute tuples */
 	bool	   *es_epqTupleSet; /* true if EPQ tuple is provided */
 	bool	   *es_epqScanDone; /* true if EPQ tuple has been fetched */
+
+	/*
+	 * Link to the head node of a chain of chained aggregate nodes, if any.
+	 */
+	struct AggState	   *agg_chain_head;
 } EState;
 
 
@@ -595,6 +602,21 @@ typedef struct AggrefExprState
 } AggrefExprState;
 
 /* ----------------
+ *		GroupingFuncExprState node
+ *
+ * The list of column numbers refers to the input tuples of the Agg node to
+ * which the GroupingFunc belongs, and may contain 0 for references to columns
+ * that are only present in grouping sets processed by different Agg nodes (and
+ * which are therefore always considered "grouping" here).
+ * ----------------
+ */
+typedef struct GroupingFuncExprState
+{
+	ExprState	xprstate;
+	List	   *clauses;		/* integer list of column numbers */
+} GroupingFuncExprState;
+
+/* ----------------
  *		WindowFuncExprState node
  * ----------------
  */
@@ -1743,19 +1765,27 @@ typedef struct GroupState
 /* these structs are private in nodeAgg.c: */
 typedef struct AggStatePerAggData *AggStatePerAgg;
 typedef struct AggStatePerGroupData *AggStatePerGroup;
+typedef struct AggStatePerGroupingSetData *AggStatePerGroupingSet;
 
 typedef struct AggState
 {
 	ScanState	ss;				/* its first field is NodeTag */
 	List	   *aggs;			/* all Aggref nodes in targetlist & quals */
 	int			numaggs;		/* length of list (could be zero!) */
+	int			numsets;		/* number of grouping sets (or 0) */
 	FmgrInfo   *eqfunctions;	/* per-grouping-field equality fns */
 	FmgrInfo   *hashfunctions;	/* per-grouping-field hash fns */
 	AggStatePerAgg peragg;		/* per-Aggref information */
-	MemoryContext aggcontext;	/* memory context for long-lived data */
+	ExprContext **aggcontexts;	/* econtexts for long-lived data (per GS) */
 	ExprContext *tmpcontext;	/* econtext for input expressions */
 	AggStatePerAgg curperagg;	/* identifies currently active aggregate */
+	bool        input_done;     /* indicates end of input */
 	bool		agg_done;		/* indicates completion of Agg scan */
+	bool		chain_done;		/* indicates completion of chained fetch */
+	int			projected_set;	/* the last projected grouping set */
+	int			current_set;	/* the current grouping set being evaluated */
+	Bitmapset **grouped_cols;   /* column groupings for rollup */
+	int        *gset_lengths;	/* lengths of grouping sets */
 	/* these fields are used in AGG_PLAIN and AGG_SORTED modes: */
 	AggStatePerGroup pergroup;	/* per-Aggref-per-group working state */
 	HeapTuple	grp_firstTuple; /* copy of first tuple of current group */
@@ -1765,6 +1795,12 @@ typedef struct AggState
 	List	   *hash_needed;	/* list of columns needed in hash table */
 	bool		table_filled;	/* hash table filled yet? */
 	TupleHashIterator hashiter; /* for iterating through hash table */
+	int			chain_depth;	/* number of chained child nodes */
+	int			chain_rescan;	/* rescan indicator */
+	int			chain_eflags;	/* saved eflags for rewind optimization */
+	bool		chain_top;		/* true for the "top" node in a chain */
+	struct AggState	*chain_head;
+	Tuplestorestate *chain_tuplestore;
 } AggState;
 
 /* ----------------
diff --git a/src/include/nodes/makefuncs.h b/src/include/nodes/makefuncs.h
index 4dff6a0..01d9fed 100644
--- a/src/include/nodes/makefuncs.h
+++ b/src/include/nodes/makefuncs.h
@@ -81,4 +81,6 @@ extern DefElem *makeDefElem(char *name, Node *arg);
 extern DefElem *makeDefElemExtended(char *nameSpace, char *name, Node *arg,
 					DefElemAction defaction);
 
+extern GroupingSet *makeGroupingSet(GroupingSetKind kind, List *content, int location);
+
 #endif   /* MAKEFUNC_H */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 97ef0fc..4d56f50 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -131,9 +131,11 @@ typedef enum NodeTag
 	T_RangeVar,
 	T_Expr,
 	T_Var,
+	T_GroupedVar,
 	T_Const,
 	T_Param,
 	T_Aggref,
+	T_GroupingFunc,
 	T_WindowFunc,
 	T_ArrayRef,
 	T_FuncExpr,
@@ -184,6 +186,7 @@ typedef enum NodeTag
 	T_GenericExprState,
 	T_WholeRowVarExprState,
 	T_AggrefExprState,
+	T_GroupingFuncExprState,
 	T_WindowFuncExprState,
 	T_ArrayRefExprState,
 	T_FuncExprState,
@@ -401,6 +404,7 @@ typedef enum NodeTag
 	T_RangeTblFunction,
 	T_WithCheckOption,
 	T_SortGroupClause,
+	T_GroupingSet,
 	T_WindowClause,
 	T_PrivGrantee,
 	T_FuncWithArgs,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index ac13302..016d436 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -136,6 +136,8 @@ typedef struct Query
 
 	List	   *groupClause;	/* a list of SortGroupClause's */
 
+	List	   *groupingSets;	/* a list of GroupingSet's if present */
+
 	Node	   *havingQual;		/* qualifications applied to groups */
 
 	List	   *windowClause;	/* a list of WindowClause's */
@@ -940,6 +942,73 @@ typedef struct SortGroupClause
 } SortGroupClause;
 
 /*
+ * GroupingSet -
+ *		representation of CUBE, ROLLUP and GROUPING SETS clauses
+ *
+ * In a Query with grouping sets, the groupClause contains a flat list of
+ * SortGroupClause nodes for each distinct expression used.  The actual
+ * structure of the GROUP BY clause is given by the groupingSets tree.
+ *
+ * In the raw parser output, GroupingSet nodes (of all types except SIMPLE
+ * which is not used) are potentially mixed in with the expressions in the
+ * groupClause of the SelectStmt.  (An expression can't contain a GroupingSet,
+ * but a list may mix GroupingSet and expression nodes.)  At this stage, the
+ * content of each node is a list of expressions, some of which may be RowExprs
+ * which represent sublists rather than actual row constructors, and nested
+ * GroupingSet nodes where legal in the grammar.  The structure directly
+ * reflects the query syntax.
+ *
+ * In parse analysis, the transformed expressions are used to build the tlist
+ * and groupClause list (of SortGroupClause nodes), and the groupingSets tree
+ * is eventually reduced to a fixed format:
+ *
+ * EMPTY nodes represent (), and obviously have no content
+ *
+ * SIMPLE nodes represent a list of one or more expressions to be treated as an
+ * atom by the enclosing structure; the content is an integer list of
+ * ressortgroupref values (see SortGroupClause)
+ *
+ * CUBE and ROLLUP nodes contain a list of one or more SIMPLE nodes.
+ *
+ * SETS nodes contain a list of EMPTY, SIMPLE, CUBE or ROLLUP nodes, but after
+ * parse analysis they cannot contain more SETS nodes; enough of the syntactic
+ * transforms of the spec have been applied that we no longer have arbitrarily
+ * deep nesting (though we still preserve the use of cube/rollup).
+ *
+ * Note that if the groupingSets tree contains no SIMPLE nodes (only EMPTY
+ * nodes at the leaves), then the groupClause will be empty, but this is still
+ * an aggregation query (similar to using aggs or HAVING without GROUP BY).
+ *
+ * As an example, the following clause:
+ *
+ * GROUP BY GROUPING SETS ((a,b), CUBE(c,(d,e)))
+ *
+ * looks like this after raw parsing:
+ *
+ * SETS( RowExpr(a,b) , CUBE( c, RowExpr(d,e) ) )
+ *
+ * and parse analysis converts it to:
+ *
+ * SETS( SIMPLE(1,2), CUBE( SIMPLE(3), SIMPLE(4,5) ) )
+ */
+typedef enum
+{
+	GROUPING_SET_EMPTY,
+	GROUPING_SET_SIMPLE,
+	GROUPING_SET_ROLLUP,
+	GROUPING_SET_CUBE,
+	GROUPING_SET_SETS
+} GroupingSetKind;
+
+typedef struct GroupingSet
+{
+	NodeTag		type;
+	GroupingSetKind kind;
+	List	   *content;
+	int			location;
+} GroupingSet;
+
+/*
  * WindowClause -
  *		transformed representation of WINDOW and OVER clauses
  *
diff --git a/src/include/nodes/pg_list.h b/src/include/nodes/pg_list.h
index a175000..729456d 100644
--- a/src/include/nodes/pg_list.h
+++ b/src/include/nodes/pg_list.h
@@ -229,8 +229,9 @@ extern List *list_union_int(const List *list1, const List *list2);
 extern List *list_union_oid(const List *list1, const List *list2);
 
 extern List *list_intersection(const List *list1, const List *list2);
+extern List *list_intersection_int(const List *list1, const List *list2);
 
-/* currently, there's no need for list_intersection_int etc */
+/* currently, there's no need for list_intersection_ptr etc */
 
 extern List *list_difference(const List *list1, const List *list2);
 extern List *list_difference_ptr(const List *list1, const List *list2);
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index f6683f0..a61b11f 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -656,6 +656,7 @@ typedef enum AggStrategy
 {
 	AGG_PLAIN,					/* simple agg across all input rows */
 	AGG_SORTED,					/* grouped agg, input must be sorted */
+	AGG_CHAINED,				/* chained agg, input must be sorted */
 	AGG_HASHED					/* grouped agg, use internal hashtable */
 } AggStrategy;
 
@@ -663,10 +664,12 @@ typedef struct Agg
 {
 	Plan		plan;
 	AggStrategy aggstrategy;
+	int			chain_depth;	/* number of associated ChainAggs in tree */
 	int			numCols;		/* number of grouping columns */
 	AttrNumber *grpColIdx;		/* their indexes in the target list */
 	Oid		   *grpOperators;	/* equality operators to compare with */
 	long		numGroups;		/* estimated number of groups in input */
+	List	   *groupingSets;	/* grouping sets to use */
 } Agg;
 
 /* ----------------
diff --git a/src/include/nodes/primnodes.h b/src/include/nodes/primnodes.h
index 4f1d234..41fe778 100644
--- a/src/include/nodes/primnodes.h
+++ b/src/include/nodes/primnodes.h
@@ -160,6 +160,22 @@ typedef struct Var
 } Var;
 
 /*
+ * GroupedVar - expression node representing a variable that might be
+ * involved in a grouping set.
+ *
+ * This is identical to Var node except in execution; when evaluated it
+ * is conditionally NULL depending on the active grouping set.  Vars are
+ * converted to GroupedVars (if needed) only late in planning.
+ *
+ * (Because they appear only late in planning, most code that handles Vars
+ * doesn't need to know about these, either because they don't exist yet or
+ * because optimizations specific to Vars are intentionally not applied to
+ * GroupedVars.)
+ */
+
+typedef Var GroupedVar;
+
+/*
  * Const
  */
 typedef struct Const
@@ -268,6 +284,41 @@ typedef struct Aggref
 } Aggref;
 
 /*
+ * GroupingFunc
+ *
+ * A GroupingFunc is a GROUPING(...) expression, which behaves in many ways
+ * like an aggregate function (e.g. it "belongs" to a specific query level,
+ * which might not be the one immediately containing it), but also differs in
+ * an important respect: it never evaluates its arguments, they merely
+ * designate expressions from the GROUP BY clause of the query level to which
+ * it belongs.
+ *
+ * The spec defines the evaluation of GROUPING() purely by syntactic
+ * replacement, but we make it a real expression for optimization purposes so
+ * that one Agg node can handle multiple grouping sets at once.  Evaluating the
+ * result only needs the column positions to check against the grouping set
+ * being projected.  However, for EXPLAIN to produce meaningful output, we have
+ * to keep the original expressions around, since expression deparse does not
+ * give us any feasible way to get at the GROUP BY clause.
+ *
+ * Also, we treat two GroupingFunc nodes as equal if they have equal argument
+ * lists and agglevelsup, without comparing the refs and cols annotations.
+ *
+ * In raw parse output we have only the args list; parse analysis fills in the
+ * refs list, and the planner fills in the cols list.
+ */
+typedef struct GroupingFunc
+{
+	Expr		xpr;
+	List	   *args;			/* arguments, not evaluated but kept for
+								 * benefit of EXPLAIN etc. */
+	List	   *refs;			/* ressortgrouprefs of arguments */
+	List	   *cols;			/* actual column positions set by planner */
+	Index		agglevelsup;	/* same as Aggref.agglevelsup */
+	int			location;		/* token location */
+} GroupingFunc;
+
+/*
  * WindowFunc
  */
 typedef struct WindowFunc
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index 6845a40..ccfe66d 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -260,6 +260,11 @@ typedef struct PlannerInfo
 
 	/* optional private data for join_search_hook, e.g., GEQO */
 	void	   *join_search_private;
+
+	/* for GroupedVar fixup in setrefs */
+	AttrNumber *groupColIdx;
+	/* for GroupingFunc fixup in setrefs */
+	AttrNumber *grouping_map;
 } PlannerInfo;
 
 
diff --git a/src/include/optimizer/planmain.h b/src/include/optimizer/planmain.h
index fa72918..47cef55 100644
--- a/src/include/optimizer/planmain.h
+++ b/src/include/optimizer/planmain.h
@@ -58,6 +58,8 @@ extern Sort *make_sort_from_groupcols(PlannerInfo *root, List *groupcls,
 extern Agg *make_agg(PlannerInfo *root, List *tlist, List *qual,
 		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
 		 int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators,
+		 List *groupingSets,
+		 int *chain_depth_p,
 		 long numGroups,
 		 Plan *lefttree);
 extern WindowAgg *make_windowagg(PlannerInfo *root, List *tlist,
diff --git a/src/include/optimizer/tlist.h b/src/include/optimizer/tlist.h
index 3dc8bab..b0f0f19 100644
--- a/src/include/optimizer/tlist.h
+++ b/src/include/optimizer/tlist.h
@@ -43,6 +43,9 @@ extern Node *get_sortgroupclause_expr(SortGroupClause *sgClause,
 extern List *get_sortgrouplist_exprs(List *sgClauses,
 						List *targetList);
 
+extern SortGroupClause *get_sortgroupref_clause(Index sortref,
+					 List *clauses);
+
 extern Oid *extract_grouping_ops(List *groupClause);
 extern AttrNumber *extract_grouping_cols(List *groupClause, List *tlist);
 extern bool grouping_is_sortable(List *groupClause);
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index 7c243ec..0e4b719 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -98,6 +98,7 @@ PG_KEYWORD("cost", COST, UNRESERVED_KEYWORD)
 PG_KEYWORD("create", CREATE, RESERVED_KEYWORD)
 PG_KEYWORD("cross", CROSS, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("csv", CSV, UNRESERVED_KEYWORD)
+PG_KEYWORD("cube", CUBE, UNRESERVED_KEYWORD)
 PG_KEYWORD("current", CURRENT_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("current_catalog", CURRENT_CATALOG, RESERVED_KEYWORD)
 PG_KEYWORD("current_date", CURRENT_DATE, RESERVED_KEYWORD)
@@ -173,6 +174,7 @@ PG_KEYWORD("grant", GRANT, RESERVED_KEYWORD)
 PG_KEYWORD("granted", GRANTED, UNRESERVED_KEYWORD)
 PG_KEYWORD("greatest", GREATEST, COL_NAME_KEYWORD)
 PG_KEYWORD("group", GROUP_P, RESERVED_KEYWORD)
+PG_KEYWORD("grouping", GROUPING, COL_NAME_KEYWORD)
 PG_KEYWORD("handler", HANDLER, UNRESERVED_KEYWORD)
 PG_KEYWORD("having", HAVING, RESERVED_KEYWORD)
 PG_KEYWORD("header", HEADER_P, UNRESERVED_KEYWORD)
@@ -324,6 +326,7 @@ PG_KEYWORD("revoke", REVOKE, UNRESERVED_KEYWORD)
 PG_KEYWORD("right", RIGHT, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("role", ROLE, UNRESERVED_KEYWORD)
 PG_KEYWORD("rollback", ROLLBACK, UNRESERVED_KEYWORD)
+PG_KEYWORD("rollup", ROLLUP, UNRESERVED_KEYWORD)
 PG_KEYWORD("row", ROW, COL_NAME_KEYWORD)
 PG_KEYWORD("rows", ROWS, UNRESERVED_KEYWORD)
 PG_KEYWORD("rule", RULE, UNRESERVED_KEYWORD)
@@ -342,6 +345,7 @@ PG_KEYWORD("session", SESSION, UNRESERVED_KEYWORD)
 PG_KEYWORD("session_user", SESSION_USER, RESERVED_KEYWORD)
 PG_KEYWORD("set", SET, UNRESERVED_KEYWORD)
 PG_KEYWORD("setof", SETOF, COL_NAME_KEYWORD)
+PG_KEYWORD("sets", SETS, UNRESERVED_KEYWORD)
 PG_KEYWORD("share", SHARE, UNRESERVED_KEYWORD)
 PG_KEYWORD("show", SHOW, UNRESERVED_KEYWORD)
 PG_KEYWORD("similar", SIMILAR, TYPE_FUNC_NAME_KEYWORD)
diff --git a/src/include/parser/parse_agg.h b/src/include/parser/parse_agg.h
index 91a0706..6a5f9bb 100644
--- a/src/include/parser/parse_agg.h
+++ b/src/include/parser/parse_agg.h
@@ -18,11 +18,16 @@
 extern void transformAggregateCall(ParseState *pstate, Aggref *agg,
 					   List *args, List *aggorder,
 					   bool agg_distinct);
+
+extern Node *transformGroupingFunc(ParseState *pstate, GroupingFunc *g);
+
 extern void transformWindowFuncCall(ParseState *pstate, WindowFunc *wfunc,
 						WindowDef *windef);
 
 extern void parseCheckAggregates(ParseState *pstate, Query *qry);
 
+extern List *expand_grouping_sets(List *groupingSets, int limit);
+
 extern int	get_aggregate_argtypes(Aggref *aggref, Oid *inputTypes);
 
 extern Oid resolve_aggregate_transtype(Oid aggfuncid,
diff --git a/src/include/parser/parse_clause.h b/src/include/parser/parse_clause.h
index 6a4438f..fdf6732 100644
--- a/src/include/parser/parse_clause.h
+++ b/src/include/parser/parse_clause.h
@@ -27,6 +27,7 @@ extern Node *transformWhereClause(ParseState *pstate, Node *clause,
 extern Node *transformLimitClause(ParseState *pstate, Node *clause,
 					 ParseExprKind exprKind, const char *constructName);
 extern List *transformGroupClause(ParseState *pstate, List *grouplist,
+								  List **groupingSets,
 					 List **targetlist, List *sortClause,
 					 ParseExprKind exprKind, bool useSQL99);
 extern List *transformSortClause(ParseState *pstate, List *orderlist,
diff --git a/src/include/utils/selfuncs.h b/src/include/utils/selfuncs.h
index bf69f2a..fdca713 100644
--- a/src/include/utils/selfuncs.h
+++ b/src/include/utils/selfuncs.h
@@ -185,7 +185,7 @@ extern void mergejoinscansel(PlannerInfo *root, Node *clause,
 				 Selectivity *rightstart, Selectivity *rightend);
 
 extern double estimate_num_groups(PlannerInfo *root, List *groupExprs,
-					double input_rows);
+								  double input_rows, List **pgset);
 
 extern Selectivity estimate_hash_bucketsize(PlannerInfo *root, Node *hashkey,
 						 double nbuckets);
diff --git a/src/test/regress/expected/groupingsets.out b/src/test/regress/expected/groupingsets.out
new file mode 100644
index 0000000..fbfb424
--- /dev/null
+++ b/src/test/regress/expected/groupingsets.out
@@ -0,0 +1,575 @@
+--
+-- grouping sets
+--
+-- test data sources
+create temp view gstest1(a,b,v)
+  as values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),
+            (2,3,15),
+            (3,3,16),(3,4,17),
+            (4,1,18),(4,1,19);
+create temp table gstest2 (a integer, b integer, c integer, d integer,
+                           e integer, f integer, g integer, h integer);
+copy gstest2 from stdin;
+create temp table gstest_empty (a integer, b integer, v integer);
+create function gstest_data(v integer, out a integer, out b integer)
+  returns setof record
+  as $f$
+    begin
+      return query select v, i from generate_series(1,3) i;
+    end;
+  $f$ language plpgsql;
+-- basic functionality
+-- simple rollup with multiple plain aggregates, with and without ordering
+-- (and with ordering differing from grouping)
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b);
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 | 1 |        0 |  21 |     2 |  11
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 1 |   |        1 |  60 |     5 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 2 |   |        1 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+ 3 |   |        1 |  33 |     2 |  17
+ 4 | 1 |        0 |  37 |     2 |  19
+ 4 |   |        1 |  37 |     2 |  19
+   |   |        3 | 145 |    10 |  19
+(12 rows)
+
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by a,b;
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 | 1 |        0 |  21 |     2 |  11
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 1 |   |        1 |  60 |     5 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 2 |   |        1 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+ 3 |   |        1 |  33 |     2 |  17
+ 4 | 1 |        0 |  37 |     2 |  19
+ 4 |   |        1 |  37 |     2 |  19
+   |   |        3 | 145 |    10 |  19
+(12 rows)
+
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by b desc, a;
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 |   |        1 |  60 |     5 |  14
+ 2 |   |        1 |  15 |     1 |  15
+ 3 |   |        1 |  33 |     2 |  17
+ 4 |   |        1 |  37 |     2 |  19
+   |   |        3 | 145 |    10 |  19
+ 3 | 4 |        0 |  17 |     1 |  17
+ 1 | 3 |        0 |  14 |     1 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 1 |        0 |  21 |     2 |  11
+ 4 | 1 |        0 |  37 |     2 |  19
+(12 rows)
+
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by coalesce(a,0)+coalesce(b,0);
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+   |   |        3 | 145 |    10 |  19
+ 1 |   |        1 |  60 |     5 |  14
+ 1 | 1 |        0 |  21 |     2 |  11
+ 2 |   |        1 |  15 |     1 |  15
+ 3 |   |        1 |  33 |     2 |  17
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 4 |   |        1 |  37 |     2 |  19
+ 4 | 1 |        0 |  37 |     2 |  19
+ 2 | 3 |        0 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+(12 rows)
+
+-- various types of ordered aggs
+select a, b, grouping(a,b),
+       array_agg(v order by v),
+       string_agg(v::text, ':' order by v desc),
+       percentile_disc(0.5) within group (order by v),
+       rank(1,2,12) within group (order by a,b,v)
+  from gstest1 group by rollup (a,b) order by a,b;
+ a | b | grouping |            array_agg            |          string_agg           | percentile_disc | rank 
+---+---+----------+---------------------------------+-------------------------------+-----------------+------
+ 1 | 1 |        0 | {10,11}                         | 11:10                         |              10 |    3
+ 1 | 2 |        0 | {12,13}                         | 13:12                         |              12 |    1
+ 1 | 3 |        0 | {14}                            | 14                            |              14 |    1
+ 1 |   |        1 | {10,11,12,13,14}                | 14:13:12:11:10                |              12 |    3
+ 2 | 3 |        0 | {15}                            | 15                            |              15 |    1
+ 2 |   |        1 | {15}                            | 15                            |              15 |    1
+ 3 | 3 |        0 | {16}                            | 16                            |              16 |    1
+ 3 | 4 |        0 | {17}                            | 17                            |              17 |    1
+ 3 |   |        1 | {16,17}                         | 17:16                         |              16 |    1
+ 4 | 1 |        0 | {18,19}                         | 19:18                         |              18 |    1
+ 4 |   |        1 | {18,19}                         | 19:18                         |              18 |    1
+   |   |        3 | {10,11,12,13,14,15,16,17,18,19} | 19:18:17:16:15:14:13:12:11:10 |              14 |    3
+(12 rows)
+
+-- test usage of grouped columns in direct args of aggs
+select grouping(a), a, array_agg(b),
+       rank(a) within group (order by b nulls first),
+       rank(a) within group (order by b nulls last)
+  from (values (1,1),(1,4),(1,5),(3,1),(3,2)) v(a,b)
+ group by rollup (a) order by a;
+ grouping | a |  array_agg  | rank | rank 
+----------+---+-------------+------+------
+        0 | 1 | {1,4,5}     |    1 |    1
+        0 | 3 | {1,2}       |    3 |    3
+        1 |   | {1,4,5,1,2} |    1 |    6
+(3 rows)
+
+-- nesting with window functions
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by rollup (a,b) order by rsum, a, b;
+ a | b | sum | rsum 
+---+---+-----+------
+ 1 | 1 |   8 |    8
+ 1 | 2 |   2 |   10
+ 1 |   |  10 |   20
+ 2 | 2 |   2 |   22
+ 2 |   |   2 |   24
+   |   |  12 |   36
+(6 rows)
+
+-- empty input: first is 0 rows, second 1, third 3 etc.
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),a);
+ a | b | sum | count 
+---+---+-----+-------
+(0 rows)
+
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),());
+ a | b | sum | count 
+---+---+-----+-------
+   |   |     |     0
+(1 row)
+
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),(),(),());
+ a | b | sum | count 
+---+---+-----+-------
+   |   |     |     0
+   |   |     |     0
+   |   |     |     0
+(3 rows)
+
+select sum(v), count(*) from gstest_empty group by grouping sets ((),(),());
+ sum | count 
+-----+-------
+     |     0
+     |     0
+     |     0
+(3 rows)
+
+-- empty input with joins tests some important code paths
+select t1.a, t2.b, sum(t1.v), count(*) from gstest_empty t1, gstest_empty t2
+ group by grouping sets ((t1.a,t2.b),());
+ a | b | sum | count 
+---+---+-----+-------
+   |   |     |     0
+(1 row)
+
+-- simple joins, var resolution, GROUPING on join vars
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1, gstest2 t2
+ group by grouping sets ((t1.a, t2.b), ());
+ a | b | grouping | sum  | max 
+---+---+----------+------+-----
+ 1 | 1 |        0 |  420 |   1
+ 1 | 2 |        0 |  120 |   2
+ 2 | 1 |        0 |  105 |   1
+ 2 | 2 |        0 |   30 |   2
+ 3 | 1 |        0 |  231 |   1
+ 3 | 2 |        0 |   66 |   2
+ 4 | 1 |        0 |  259 |   1
+ 4 | 2 |        0 |   74 |   2
+   |   |        3 | 1305 |   2
+(9 rows)
+
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1 join gstest2 t2 on (t1.a=t2.a)
+ group by grouping sets ((t1.a, t2.b), ());
+ a | b | grouping | sum | max 
+---+---+----------+-----+-----
+ 1 | 1 |        0 | 420 |   1
+ 1 | 2 |        0 |  60 |   1
+ 2 | 2 |        0 |  15 |   2
+   |   |        3 | 495 |   2
+(4 rows)
+
+select a, b, grouping(a, b), sum(t1.v), max(t2.c)
+  from gstest1 t1 join gstest2 t2 using (a,b)
+ group by grouping sets ((a, b), ());
+ a | b | grouping | sum | max 
+---+---+----------+-----+-----
+ 1 | 1 |        0 | 147 |   2
+ 1 | 2 |        0 |  25 |   2
+   |   |        3 | 172 |   2
+(3 rows)
+
+-- simple rescan tests
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by rollup (a,b);
+ a | b | sum 
+---+---+-----
+ 1 | 1 |   1
+ 1 | 2 |   1
+ 1 | 3 |   1
+ 1 |   |   3
+ 2 | 1 |   2
+ 2 | 2 |   2
+ 2 | 3 |   2
+ 2 |   |   6
+   |   |   9
+(9 rows)
+
+select *
+  from (values (1),(2)) v(x),
+       lateral (select a, b, sum(v.x) from gstest_data(v.x) group by rollup (a,b)) s;
+ERROR:  aggregate functions are not allowed in FROM clause of their own query level
+LINE 3:        lateral (select a, b, sum(v.x) from gstest_data(v.x) ...
+                                     ^
+-- min max optimisation should still work with GROUP BY ()
+explain (costs off)
+  select min(unique1) from tenk1 GROUP BY ();
+                         QUERY PLAN                         
+------------------------------------------------------------
+ Result
+   InitPlan 1 (returns $0)
+     ->  Limit
+           ->  Index Only Scan using tenk1_unique1 on tenk1
+                 Index Cond: (unique1 IS NOT NULL)
+(5 rows)
+
+-- Views with GROUPING SET queries
+CREATE VIEW gstest_view AS select a, b, grouping(a,b), sum(c), count(*), max(c)
+  from gstest2 group by rollup ((a,b,c),(c,d));
+NOTICE:  view "gstest_view" will be a temporary view
+select pg_get_viewdef('gstest_view'::regclass, true);
+                                pg_get_viewdef                                 
+-------------------------------------------------------------------------------
+  SELECT gstest2.a,                                                           +
+     gstest2.b,                                                               +
+     GROUPING(gstest2.a, gstest2.b) AS "grouping",                            +
+     sum(gstest2.c) AS sum,                                                   +
+     count(*) AS count,                                                       +
+     max(gstest2.c) AS max                                                    +
+    FROM gstest2                                                              +
+   GROUP BY ROLLUP((gstest2.a, gstest2.b, gstest2.c), (gstest2.c, gstest2.d));
+(1 row)
+
+-- Nested queries with 3 or more levels of nesting
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+ grouping 
+----------
+        0
+        0
+        0
+(3 rows)
+
+select(select (select grouping(e,f) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+ grouping 
+----------
+        0
+        1
+        3
+(3 rows)
+
+select(select (select grouping(c) from (values (1)) v2(c) GROUP BY c) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+ grouping 
+----------
+        0
+        0
+        0
+(3 rows)
+
+-- Combinations of operations
+select a, b, c, d from gstest2 group by rollup(a,b),grouping sets(c,d);
+ a | b | c | d 
+---+---+---+---
+ 1 | 1 | 1 |  
+ 1 |   | 1 |  
+   |   | 1 |  
+ 1 | 1 | 2 |  
+ 1 | 2 | 2 |  
+ 1 |   | 2 |  
+ 2 | 2 | 2 |  
+ 2 |   | 2 |  
+   |   | 2 |  
+ 1 | 1 |   | 1
+ 1 |   |   | 1
+   |   |   | 1
+ 1 | 1 |   | 2
+ 1 | 2 |   | 2
+ 1 |   |   | 2
+ 2 | 2 |   | 2
+ 2 |   |   | 2
+   |   |   | 2
+(18 rows)
+
+select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
+ a | b 
+---+---
+ 1 | 2
+ 2 | 3
+(2 rows)
+
+-- Tests for chained aggregates
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by grouping sets ((a,b),(a+1,b+1),(a+2,b+2));
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 | 1 |        0 |  21 |     2 |  11
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+ 4 | 1 |        0 |  37 |     2 |  19
+   |   |        3 |  21 |     2 |  11
+   |   |        3 |  25 |     2 |  13
+   |   |        3 |  14 |     1 |  14
+   |   |        3 |  15 |     1 |  15
+   |   |        3 |  16 |     1 |  16
+   |   |        3 |  17 |     1 |  17
+   |   |        3 |  37 |     2 |  19
+   |   |        3 |  21 |     2 |  11
+   |   |        3 |  25 |     2 |  13
+   |   |        3 |  14 |     1 |  14
+   |   |        3 |  15 |     1 |  15
+   |   |        3 |  16 |     1 |  16
+   |   |        3 |  17 |     1 |  17
+   |   |        3 |  37 |     2 |  19
+(21 rows)
+
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP((e+1),(f+1));
+ grouping 
+----------
+        0
+        0
+        0
+(3 rows)
+
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY CUBE((e+1),(f+1)) ORDER BY (e+1),(f+1);
+ grouping 
+----------
+        0
+        0
+        0
+        0
+(4 rows)
+
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by cube (a,b) order by rsum, a, b;
+ a | b | sum | rsum 
+---+---+-----+------
+ 1 | 1 |   8 |    8
+ 1 | 2 |   2 |   10
+ 1 |   |  10 |   20
+ 2 | 2 |   2 |   22
+ 2 |   |   2 |   24
+   | 1 |   8 |   32
+   | 2 |   4 |   36
+   |   |  12 |   48
+(8 rows)
+
+select a, b, sum(c) from (values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),(2,3,15),(3,3,16),(3,4,17),(4,1,18),(4,1,19)) v(a,b,c) group by rollup (a,b);
+ a | b | sum 
+---+---+-----
+ 1 | 1 |  21
+ 1 | 2 |  25
+ 1 | 3 |  14
+ 1 |   |  60
+ 2 | 3 |  15
+ 2 |   |  15
+ 3 | 3 |  16
+ 3 | 4 |  17
+ 3 |   |  33
+ 4 | 1 |  37
+ 4 |   |  37
+   |   | 145
+(12 rows)
+
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by cube (a,b) order by a,b;
+ a | b | sum 
+---+---+-----
+ 1 | 1 |   1
+ 1 | 2 |   1
+ 1 | 3 |   1
+ 1 |   |   3
+ 2 | 1 |   2
+ 2 | 2 |   2
+ 2 | 3 |   2
+ 2 |   |   6
+   | 1 |   3
+   | 2 |   3
+   | 3 |   3
+   |   |   9
+(12 rows)
+
+-- Agg level check. This query should error out.
+select (select grouping(a,b) from gstest2) from gstest2 group by a,b;
+ERROR:  Arguments to GROUPING must be grouping expressions of the associated query level
+LINE 1: select (select grouping(a,b) from gstest2) from gstest2 grou...
+                                ^
+--Nested queries
+select a, b, sum(c), count(*) from gstest2 group by grouping sets (rollup(a,b),a);
+ a | b | sum | count 
+---+---+-----+-------
+ 1 | 1 |   8 |     7
+ 1 | 2 |   2 |     1
+ 1 |   |  10 |     8
+ 1 |   |  10 |     8
+ 2 | 2 |   2 |     1
+ 2 |   |   2 |     1
+ 2 |   |   2 |     1
+   |   |  12 |     9
+(8 rows)
+
+-- HAVING queries
+select ten, sum(distinct four) from onek a
+group by grouping sets((ten,four),(ten))
+having exists (select 1 from onek b where sum(distinct a.four) = b.four);
+ ten | sum 
+-----+-----
+   0 |   0
+   0 |   2
+   0 |   2
+   1 |   1
+   1 |   3
+   2 |   0
+   2 |   2
+   2 |   2
+   3 |   1
+   3 |   3
+   4 |   0
+   4 |   2
+   4 |   2
+   5 |   1
+   5 |   3
+   6 |   0
+   6 |   2
+   6 |   2
+   7 |   1
+   7 |   3
+   8 |   0
+   8 |   2
+   8 |   2
+   9 |   1
+   9 |   3
+(25 rows)
+
+-- FILTER queries
+select ten, sum(distinct four) filter (where four::text ~ '123') from onek a
+group by rollup(ten);
+ ten | sum 
+-----+-----
+   0 |    
+   1 |    
+   2 |    
+   3 |    
+   4 |    
+   5 |    
+   6 |    
+   7 |    
+   8 |    
+   9 |    
+     |    
+(11 rows)
+
+-- More rescan tests
+select * from (values (1),(2)) v(a) left join lateral (select v.a, four, ten, count(*) from onek group by cube(four,ten)) s on true order by v.a,four,ten;
+ a | a | four | ten | count 
+---+---+------+-----+-------
+ 1 | 1 |    0 |   0 |    50
+ 1 | 1 |    0 |   2 |    50
+ 1 | 1 |    0 |   4 |    50
+ 1 | 1 |    0 |   6 |    50
+ 1 | 1 |    0 |   8 |    50
+ 1 | 1 |    0 |     |   250
+ 1 | 1 |    1 |   1 |    50
+ 1 | 1 |    1 |   3 |    50
+ 1 | 1 |    1 |   5 |    50
+ 1 | 1 |    1 |   7 |    50
+ 1 | 1 |    1 |   9 |    50
+ 1 | 1 |    1 |     |   250
+ 1 | 1 |    2 |   0 |    50
+ 1 | 1 |    2 |   2 |    50
+ 1 | 1 |    2 |   4 |    50
+ 1 | 1 |    2 |   6 |    50
+ 1 | 1 |    2 |   8 |    50
+ 1 | 1 |    2 |     |   250
+ 1 | 1 |    3 |   1 |    50
+ 1 | 1 |    3 |   3 |    50
+ 1 | 1 |    3 |   5 |    50
+ 1 | 1 |    3 |   7 |    50
+ 1 | 1 |    3 |   9 |    50
+ 1 | 1 |    3 |     |   250
+ 1 | 1 |      |   0 |   100
+ 1 | 1 |      |   1 |   100
+ 1 | 1 |      |   2 |   100
+ 1 | 1 |      |   3 |   100
+ 1 | 1 |      |   4 |   100
+ 1 | 1 |      |   5 |   100
+ 1 | 1 |      |   6 |   100
+ 1 | 1 |      |   7 |   100
+ 1 | 1 |      |   8 |   100
+ 1 | 1 |      |   9 |   100
+ 1 | 1 |      |     |  1000
+ 2 | 2 |    0 |   0 |    50
+ 2 | 2 |    0 |   2 |    50
+ 2 | 2 |    0 |   4 |    50
+ 2 | 2 |    0 |   6 |    50
+ 2 | 2 |    0 |   8 |    50
+ 2 | 2 |    0 |     |   250
+ 2 | 2 |    1 |   1 |    50
+ 2 | 2 |    1 |   3 |    50
+ 2 | 2 |    1 |   5 |    50
+ 2 | 2 |    1 |   7 |    50
+ 2 | 2 |    1 |   9 |    50
+ 2 | 2 |    1 |     |   250
+ 2 | 2 |    2 |   0 |    50
+ 2 | 2 |    2 |   2 |    50
+ 2 | 2 |    2 |   4 |    50
+ 2 | 2 |    2 |   6 |    50
+ 2 | 2 |    2 |   8 |    50
+ 2 | 2 |    2 |     |   250
+ 2 | 2 |    3 |   1 |    50
+ 2 | 2 |    3 |   3 |    50
+ 2 | 2 |    3 |   5 |    50
+ 2 | 2 |    3 |   7 |    50
+ 2 | 2 |    3 |   9 |    50
+ 2 | 2 |    3 |     |   250
+ 2 | 2 |      |   0 |   100
+ 2 | 2 |      |   1 |   100
+ 2 | 2 |      |   2 |   100
+ 2 | 2 |      |   3 |   100
+ 2 | 2 |      |   4 |   100
+ 2 | 2 |      |   5 |   100
+ 2 | 2 |      |   6 |   100
+ 2 | 2 |      |   7 |   100
+ 2 | 2 |      |   8 |   100
+ 2 | 2 |      |   9 |   100
+ 2 | 2 |      |     |  1000
+(70 rows)
+
+select array(select row(v.a,s1.*) from (select two,four, count(*) from onek group by cube(two,four) order by two,four) s1) from (values (1),(2)) v(a);
+                                                                        array                                                                         
+------------------------------------------------------------------------------------------------------------------------------------------------------
+ {"(1,0,0,250)","(1,0,2,250)","(1,0,,500)","(1,1,1,250)","(1,1,3,250)","(1,1,,500)","(1,,0,250)","(1,,1,250)","(1,,2,250)","(1,,3,250)","(1,,,1000)"}
+ {"(2,0,0,250)","(2,0,2,250)","(2,0,,500)","(2,1,1,250)","(2,1,3,250)","(2,1,,500)","(2,,0,250)","(2,,1,250)","(2,,2,250)","(2,,3,250)","(2,,,1000)"}
+(2 rows)
+
+-- end
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index e0ae2f2..ef4e16b 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -83,7 +83,7 @@ test: select_into select_distinct select_distinct_on select_implicit select_havi
 # ----------
 # Another group of parallel tests
 # ----------
-test: brin gin gist spgist privileges security_label collate matview lock replica_identity rowsecurity object_address
+test: brin gin gist spgist privileges security_label collate matview lock replica_identity rowsecurity object_address groupingsets
 
 # ----------
 # Another group of parallel tests
diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule
index 7f762bd..3eb633f 100644
--- a/src/test/regress/serial_schedule
+++ b/src/test/regress/serial_schedule
@@ -84,6 +84,7 @@ test: union
 test: case
 test: join
 test: aggregates
+test: groupingsets
 test: transactions
 ignore: random
 test: random
diff --git a/src/test/regress/sql/groupingsets.sql b/src/test/regress/sql/groupingsets.sql
new file mode 100644
index 0000000..aebcbbb
--- /dev/null
+++ b/src/test/regress/sql/groupingsets.sql
@@ -0,0 +1,153 @@
+--
+-- grouping sets
+--
+
+-- test data sources
+
+create temp view gstest1(a,b,v)
+  as values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),
+            (2,3,15),
+            (3,3,16),(3,4,17),
+            (4,1,18),(4,1,19);
+
+create temp table gstest2 (a integer, b integer, c integer, d integer,
+                           e integer, f integer, g integer, h integer);
+copy gstest2 from stdin;
+1	1	1	1	1	1	1	1
+1	1	1	1	1	1	1	2
+1	1	1	1	1	1	2	2
+1	1	1	1	1	2	2	2
+1	1	1	1	2	2	2	2
+1	1	1	2	2	2	2	2
+1	1	2	2	2	2	2	2
+1	2	2	2	2	2	2	2
+2	2	2	2	2	2	2	2
+\.
+
+create temp table gstest_empty (a integer, b integer, v integer);
+
+create function gstest_data(v integer, out a integer, out b integer)
+  returns setof record
+  as $f$
+    begin
+      return query select v, i from generate_series(1,3) i;
+    end;
+  $f$ language plpgsql;
+
+-- basic functionality
+
+-- simple rollup with multiple plain aggregates, with and without ordering
+-- (and with ordering differing from grouping)
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b);
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by a,b;
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by b desc, a;
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by coalesce(a,0)+coalesce(b,0);
+
+-- various types of ordered aggs
+select a, b, grouping(a,b),
+       array_agg(v order by v),
+       string_agg(v::text, ':' order by v desc),
+       percentile_disc(0.5) within group (order by v),
+       rank(1,2,12) within group (order by a,b,v)
+  from gstest1 group by rollup (a,b) order by a,b;
+
+-- test usage of grouped columns in direct args of aggs
+select grouping(a), a, array_agg(b),
+       rank(a) within group (order by b nulls first),
+       rank(a) within group (order by b nulls last)
+  from (values (1,1),(1,4),(1,5),(3,1),(3,2)) v(a,b)
+ group by rollup (a) order by a;
+
+-- nesting with window functions
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by rollup (a,b) order by rsum, a, b;
+
+-- empty input: first is 0 rows, second 1, third 3 etc.
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),a);
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),());
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),(),(),());
+select sum(v), count(*) from gstest_empty group by grouping sets ((),(),());
+
+-- empty input with joins tests some important code paths
+select t1.a, t2.b, sum(t1.v), count(*) from gstest_empty t1, gstest_empty t2
+ group by grouping sets ((t1.a,t2.b),());
+
+-- simple joins, var resolution, GROUPING on join vars
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1, gstest2 t2
+ group by grouping sets ((t1.a, t2.b), ());
+
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1 join gstest2 t2 on (t1.a=t2.a)
+ group by grouping sets ((t1.a, t2.b), ());
+
+select a, b, grouping(a, b), sum(t1.v), max(t2.c)
+  from gstest1 t1 join gstest2 t2 using (a,b)
+ group by grouping sets ((a, b), ());
+
+-- simple rescan tests
+
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by rollup (a,b);
+
+select *
+  from (values (1),(2)) v(x),
+       lateral (select a, b, sum(v.x) from gstest_data(v.x) group by rollup (a,b)) s;
+
+-- min max optimisation should still work with GROUP BY ()
+explain (costs off)
+  select min(unique1) from tenk1 GROUP BY ();
+
+-- Views with GROUPING SET queries
+CREATE VIEW gstest_view AS select a, b, grouping(a,b), sum(c), count(*), max(c)
+  from gstest2 group by rollup ((a,b,c),(c,d));
+
+select pg_get_viewdef('gstest_view'::regclass, true);
+
+-- Nested queries with 3 or more levels of nesting
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+select(select (select grouping(e,f) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+select(select (select grouping(c) from (values (1)) v2(c) GROUP BY c) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+
+-- Combinations of operations
+select a, b, c, d from gstest2 group by rollup(a,b),grouping sets(c,d);
+select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
+
+-- Tests for chained aggregates
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by grouping sets ((a,b),(a+1,b+1),(a+2,b+2));
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP((e+1),(f+1));
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY CUBE((e+1),(f+1)) ORDER BY (e+1),(f+1);
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by cube (a,b) order by rsum, a, b;
+select a, b, sum(c) from (values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),(2,3,15),(3,3,16),(3,4,17),(4,1,18),(4,1,19)) v(a,b,c) group by rollup (a,b);
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by cube (a,b) order by a,b;
+
+
+-- Agg level check. This query should error out.
+select (select grouping(a,b) from gstest2) from gstest2 group by a,b;
+
+--Nested queries
+select a, b, sum(c), count(*) from gstest2 group by grouping sets (rollup(a,b),a);
+
+-- HAVING queries
+select ten, sum(distinct four) from onek a
+group by grouping sets((ten,four),(ten))
+having exists (select 1 from onek b where sum(distinct a.four) = b.four);
+
+-- FILTER queries
+select ten, sum(distinct four) filter (where four::text ~ '123') from onek a
+group by rollup(ten);
+
+-- More rescan tests
+select * from (values (1),(2)) v(a) left join lateral (select v.a, four, ten, count(*) from onek group by cube(four,ten)) s on true order by v.a,four,ten;
+select array(select row(v.a,s1.*) from (select two,four, count(*) from onek group by cube(two,four) order by two,four) s1) from (values (1),(2)) v(a);
+
+-- end
#110Svenne Krap
svenne@krap.dk
In reply to: Andrew Gierth (#109)
Re: WIP Patch for GROUPING SETS phase 1

Patch from message (87d24iukc5.fsf@news-spur.riddles.org.uk) fails to apply (tried on top of ebc0f5e01d2f), as commit b55722692 has reflowed the following line (in src/backend/optimizer/util/pathnode.c):

pathnode->path.rows = estimate_num_groups(root, uniq_exprs, rel->rows);

After patching the added parameter (NULL) in by hand, the build still fails: the call at src/backend/optimizer/path/indxpath.c:1953 is missing the new argument as well - that change is not in the patch.
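For context, the by-hand fix amounts to something like this (a sketch only, inferred from the line quoted above and the "(NULL)" remark; the equivalent change is still needed at the indxpath.c call site):

```diff
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
-pathnode->path.rows = estimate_num_groups(root, uniq_exprs, rel->rows);
+pathnode->path.rows = estimate_num_groups(root, uniq_exprs, rel->rows, NULL);
```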

/Svenne

The new status of this patch is: Waiting on Author

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#111Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Andrew Gierth (#109)
1 attachment(s)
Re: Final Patch for GROUPING SETS

Updated patch:

- updated to latest head, fixing conflicts with b5572269

--
Andrew (irc:RhodiumToad)

Attachments:

gsp-all-latest.patchtext/x-patchDownload
diff --git a/contrib/pg_stat_statements/pg_stat_statements.c b/contrib/pg_stat_statements/pg_stat_statements.c
index 95616b3..f86164d 100644
--- a/contrib/pg_stat_statements/pg_stat_statements.c
+++ b/contrib/pg_stat_statements/pg_stat_statements.c
@@ -2200,6 +2200,7 @@ JumbleQuery(pgssJumbleState *jstate, Query *query)
 	JumbleExpr(jstate, (Node *) query->targetList);
 	JumbleExpr(jstate, (Node *) query->returningList);
 	JumbleExpr(jstate, (Node *) query->groupClause);
+	JumbleExpr(jstate, (Node *) query->groupingSets);
 	JumbleExpr(jstate, query->havingQual);
 	JumbleExpr(jstate, (Node *) query->windowClause);
 	JumbleExpr(jstate, (Node *) query->distinctClause);
@@ -2330,6 +2331,13 @@ JumbleExpr(pgssJumbleState *jstate, Node *node)
 				JumbleExpr(jstate, (Node *) expr->aggfilter);
 			}
 			break;
+		case T_GroupingFunc:
+			{
+				GroupingFunc *grpnode = (GroupingFunc *) node;
+
+				JumbleExpr(jstate, (Node *) grpnode->refs);
+			}
+			break;
 		case T_WindowFunc:
 			{
 				WindowFunc *expr = (WindowFunc *) node;
@@ -2607,6 +2615,12 @@ JumbleExpr(pgssJumbleState *jstate, Node *node)
 				JumbleExpr(jstate, (Node *) lfirst(temp));
 			}
 			break;
+		case T_IntList:
+			foreach(temp, (List *) node)
+			{
+				APP_JUMB(lfirst_int(temp));
+			}
+			break;
 		case T_SortGroupClause:
 			{
 				SortGroupClause *sgc = (SortGroupClause *) node;
@@ -2617,6 +2631,13 @@ JumbleExpr(pgssJumbleState *jstate, Node *node)
 				APP_JUMB(sgc->nulls_first);
 			}
 			break;
+		case T_GroupingSet:
+			{
+				GroupingSet *gsnode = (GroupingSet *) node;
+
+				JumbleExpr(jstate, (Node *) gsnode->content);
+			}
+			break;
 		case T_WindowClause:
 			{
 				WindowClause *wc = (WindowClause *) node;
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index c198bea..17562be 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -12069,7 +12069,9 @@ NULL baz</literallayout>(3 rows)</entry>
    <xref linkend="functions-aggregate-statistics-table">.
    The built-in ordered-set aggregate functions
    are listed in <xref linkend="functions-orderedset-table"> and
-   <xref linkend="functions-hypothetical-table">.
+   <xref linkend="functions-hypothetical-table">.  Grouping operations,
+   which are closely related to aggregate functions, are listed in
+   <xref linkend="functions-grouping-table">.
    The special syntax considerations for aggregate
    functions are explained in <xref linkend="syntax-aggregates">.
    Consult <xref linkend="tutorial-agg"> for additional introductory
@@ -13167,6 +13169,72 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab;
    to the rule specified in the <literal>ORDER BY</> clause.
   </para>
 
+  <table id="functions-grouping-table">
+   <title>Grouping Operations</title>
+
+   <tgroup cols="3">
+    <thead>
+     <row>
+      <entry>Function</entry>
+      <entry>Return Type</entry>
+      <entry>Description</entry>
+     </row>
+    </thead>
+
+    <tbody>
+
+     <row>
+      <entry>
+       <indexterm>
+        <primary>GROUPING</primary>
+       </indexterm>
+       <function>GROUPING(<replaceable class="parameter">args...</replaceable>)</function>
+      </entry>
+      <entry>
+       <type>integer</type>
+      </entry>
+      <entry>
+       Integer bitmask indicating which arguments are not being included in the current
+       grouping set
+      </entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+
+   <para>
+    Grouping operations are used in conjunction with grouping sets (see
+    <xref linkend="queries-grouping-sets">) to distinguish result rows.  The
+    arguments to the <literal>GROUPING</> operation are not actually evaluated,
+    but they must exactly match expressions given in the <literal>GROUP BY</>
+    clause of the current query level.  Bits are assigned with the rightmost
+    argument being the least-significant bit; each bit is 0 if the corresponding
+    expression is included in the grouping criteria of the grouping set generating
+    the result row, and 1 if it is not.  For example:
+<screen>
+<prompt>=&gt;</> <userinput>SELECT * FROM items_sold;</>
+ make  | model | sales
+-------+-------+-------
+ Foo   | GT    |  10
+ Foo   | Tour  |  20
+ Bar   | City  |  15
+ Bar   | Sport |  5
+(4 rows)
+
+<prompt>=&gt;</> <userinput>SELECT make, model, GROUPING(make,model), sum(sales) FROM items_sold GROUP BY ROLLUP(make,model);</>
+ make  | model | grouping | sum
+-------+-------+----------+-----
+ Foo   | GT    |        0 | 10
+ Foo   | Tour  |        0 | 20
+ Bar   | City  |        0 | 15
+ Bar   | Sport |        0 | 5
+ Foo   |       |        1 | 30
+ Bar   |       |        1 | 20
+       |       |        3 | 50
+(7 rows)
+</screen>
+   </para>
+
  </sect1>
 
  <sect1 id="functions-window">
diff --git a/doc/src/sgml/queries.sgml b/doc/src/sgml/queries.sgml
index 7dbad46..56419c7 100644
--- a/doc/src/sgml/queries.sgml
+++ b/doc/src/sgml/queries.sgml
@@ -1183,6 +1183,184 @@ SELECT product_id, p.name, (sum(s.units) * (p.price - p.cost)) AS profit
    </para>
   </sect2>
 
+  <sect2 id="queries-grouping-sets">
+   <title><literal>GROUPING SETS</>, <literal>CUBE</>, and <literal>ROLLUP</></title>
+
+   <indexterm zone="queries-grouping-sets">
+    <primary>GROUPING SETS</primary>
+   </indexterm>
+   <indexterm zone="queries-grouping-sets">
+    <primary>CUBE</primary>
+   </indexterm>
+   <indexterm zone="queries-grouping-sets">
+    <primary>ROLLUP</primary>
+   </indexterm>
+   <indexterm zone="queries-grouping-sets">
+    <primary>grouping sets</primary>
+   </indexterm>
+
+   <para>
+    More complex grouping operations than those described above are possible
+    using the concept of <firstterm>grouping sets</>.  The data selected by
+    the <literal>FROM</> and <literal>WHERE</> clauses is grouped separately
+    by each specified grouping set, aggregates computed for each group just as
+    for simple <literal>GROUP BY</> clauses, and then the results returned.
+    For example:
+<screen>
+<prompt>=&gt;</> <userinput>SELECT * FROM items_sold;</>
+ brand | size | sales
+-------+------+-------
+ Foo   | L    |  10
+ Foo   | M    |  20
+ Bar   | M    |  15
+ Bar   | L    |  5
+(4 rows)
+
+<prompt>=&gt;</> <userinput>SELECT brand, size, sum(sales) FROM items_sold GROUP BY GROUPING SETS ((brand), (size), ());</>
+ brand | size | sum
+-------+------+-----
+ Foo   |      |  30
+ Bar   |      |  20
+       | L    |  15
+       | M    |  35
+       |      |  50
+(5 rows)
+</screen>
+   </para>
+
+   <para>
+    Each sublist of <literal>GROUPING SETS</> may specify zero or more columns
+    or expressions and is interpreted the same way as though it were directly
+    in the <literal>GROUP BY</> clause.  An empty grouping set means that all
+    rows are aggregated down to a single group (which is output even if no
+    input rows were present), as described above for the case of aggregate
+    functions with no <literal>GROUP BY</> clause.
+   </para>
+
+   <para>
+    References to the grouping columns or expressions are replaced
+    by <literal>NULL</> values in result rows for grouping sets in which those
+    columns do not appear.  To distinguish which grouping a particular output
+    row resulted from, see <xref linkend="functions-grouping-table">.
+   </para>
+
+   <para>
+    A shorthand notation is provided for specifying two common types of grouping set.
+    A clause of the form
+<programlisting>
+ROLLUP ( <replaceable>e1</>, <replaceable>e2</>, <replaceable>e3</>, ... )
+</programlisting>
+    represents the given list of expressions and all prefixes of the list including
+    the empty list; thus it is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( <replaceable>e1</>, <replaceable>e2</>, <replaceable>e3</>, ... ),
+    ...
+    ( <replaceable>e1</>, <replaceable>e2</> ),
+    ( <replaceable>e1</> ),
+    ( )
+)
+</programlisting>
+    This is commonly used for analysis over hierarchical data; e.g. total
+    salary by department, division, and company-wide total.
+   </para>
+
+   <para>
+    A clause of the form
+<programlisting>
+CUBE ( <replaceable>e1</>, <replaceable>e2</>, ... )
+</programlisting>
+    represents the given list and all of its possible subsets (i.e. the power
+    set).  Thus
+<programlisting>
+CUBE ( a, b, c )
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( a, b, c ),
+    ( a, b    ),
+    ( a,    c ),
+    ( a       ),
+    (    b, c ),
+    (    b    ),
+    (       c ),
+    (         )
+)
+</programlisting>
+   </para>
+
+   <para>
+    The individual elements of a <literal>CUBE</> or <literal>ROLLUP</>
+    clause may be either individual expressions, or sub-lists of elements in
+    parentheses.  In the latter case, the sub-lists are treated as single
+    units for the purposes of generating the individual grouping sets.
+    For example:
+<programlisting>
+CUBE ( (a,b), (c,d) )
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( a, b, c, d ),
+    ( a, b       ),
+    (       c, d ),
+    (            )
+)
+</programlisting>
+    and
+<programlisting>
+ROLLUP ( a, (b,c), d )
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( a, b, c, d )
+    ( a, b, c    )
+    ( a          )
+    (            )
+)
+</programlisting>
+   </para>
+
+   <para>
+    The <literal>CUBE</> and <literal>ROLLUP</> constructs can be used either
+    directly in the <literal>GROUP BY</> clause, or nested inside a
+    <literal>GROUPING SETS</> clause.  If one <literal>GROUPING SETS</> clause
+    is nested inside another, the effect is the same as if all the elements of
+    the inner clause had been written directly in the outer clause.
+   </para>
+
+   <para>
+    If multiple grouping items are specified in a single <literal>GROUP BY</>
+    clause, then the final list of grouping sets is the cross product of the
+    individual items.  For example:
+<programlisting>
+GROUP BY a, CUBE(b,c), GROUPING SETS ((d), (e))
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUP BY GROUPING SETS (
+  (a,b,c,d), (a,b,c,e),
+  (a,b,d),   (a,b,e),
+  (a,c,d),   (a,c,e),
+  (a,d),     (a,e)
+)
+</programlisting>
+   </para>
+
+  <note>
+   <para>
+    The construct <literal>(a,b)</> is normally recognized in expressions as
+    a <link linkend="sql-syntax-row-constructors">row constructor</link>.
+    Within the <literal>GROUP BY</> clause, this does not apply at the top
+    levels of expressions, and <literal>(a,b)</> is parsed as a list of
+    expressions as described above.  If for some reason you <emphasis>need</>
+    a row constructor in a grouping expression, use <literal>ROW(a,b)</>.
+   </para>
+  </note>
+  </sect2>
+
   <sect2 id="queries-window">
    <title>Window Function Processing</title>
 
diff --git a/doc/src/sgml/ref/select.sgml b/doc/src/sgml/ref/select.sgml
index 01d24a5..d2df959 100644
--- a/doc/src/sgml/ref/select.sgml
+++ b/doc/src/sgml/ref/select.sgml
@@ -37,7 +37,7 @@ SELECT [ ALL | DISTINCT [ ON ( <replaceable class="parameter">expression</replac
     [ * | <replaceable class="parameter">expression</replaceable> [ [ AS ] <replaceable class="parameter">output_name</replaceable> ] [, ...] ]
     [ FROM <replaceable class="parameter">from_item</replaceable> [, ...] ]
     [ WHERE <replaceable class="parameter">condition</replaceable> ]
-    [ GROUP BY <replaceable class="parameter">expression</replaceable> [, ...] ]
+    [ GROUP BY <replaceable class="parameter">grouping_element</replaceable> [, ...] ]
     [ HAVING <replaceable class="parameter">condition</replaceable> [, ...] ]
     [ WINDOW <replaceable class="parameter">window_name</replaceable> AS ( <replaceable class="parameter">window_definition</replaceable> ) [, ...] ]
     [ { UNION | INTERSECT | EXCEPT } [ ALL | DISTINCT ] <replaceable class="parameter">select</replaceable> ]
@@ -60,6 +60,15 @@ SELECT [ ALL | DISTINCT [ ON ( <replaceable class="parameter">expression</replac
                 [ WITH ORDINALITY ] [ [ AS ] <replaceable class="parameter">alias</replaceable> [ ( <replaceable class="parameter">column_alias</replaceable> [, ...] ) ] ]
     <replaceable class="parameter">from_item</replaceable> [ NATURAL ] <replaceable class="parameter">join_type</replaceable> <replaceable class="parameter">from_item</replaceable> [ ON <replaceable class="parameter">join_condition</replaceable> | USING ( <replaceable class="parameter">join_column</replaceable> [, ...] ) ]
 
+<phrase>and <replaceable class="parameter">grouping_element</replaceable> can be one of:</phrase>
+
+    ( )
+    <replaceable class="parameter">expression</replaceable>
+    ( <replaceable class="parameter">expression</replaceable> [, ...] )
+    ROLLUP ( { <replaceable class="parameter">expression</replaceable> | ( <replaceable class="parameter">expression</replaceable> [, ...] ) } [, ...] )
+    CUBE ( { <replaceable class="parameter">expression</replaceable> | ( <replaceable class="parameter">expression</replaceable> [, ...] ) } [, ...] )
+    GROUPING SETS ( <replaceable class="parameter">grouping_element</replaceable> [, ...] )
+
 <phrase>and <replaceable class="parameter">with_query</replaceable> is:</phrase>
 
     <replaceable class="parameter">with_query_name</replaceable> [ ( <replaceable class="parameter">column_name</replaceable> [, ...] ) ] AS ( <replaceable class="parameter">select</replaceable> | <replaceable class="parameter">values</replaceable> | <replaceable class="parameter">insert</replaceable> | <replaceable class="parameter">update</replaceable> | <replaceable class="parameter">delete</replaceable> )
@@ -621,23 +630,35 @@ WHERE <replaceable class="parameter">condition</replaceable>
    <para>
     The optional <literal>GROUP BY</literal> clause has the general form
 <synopsis>
-GROUP BY <replaceable class="parameter">expression</replaceable> [, ...]
+GROUP BY <replaceable class="parameter">grouping_element</replaceable> [, ...]
 </synopsis>
    </para>
 
    <para>
     <literal>GROUP BY</literal> will condense into a single row all
     selected rows that share the same values for the grouped
-    expressions.  <replaceable
-    class="parameter">expression</replaceable> can be an input column
-    name, or the name or ordinal number of an output column
-    (<command>SELECT</command> list item), or an arbitrary
+    expressions.  An <replaceable
+    class="parameter">expression</replaceable> used inside a
+    <replaceable class="parameter">grouping_element</replaceable>
+    can be an input column name, or the name or ordinal number of an
+    output column (<command>SELECT</command> list item), or an arbitrary
     expression formed from input-column values.  In case of ambiguity,
     a <literal>GROUP BY</literal> name will be interpreted as an
     input-column name rather than an output column name.
    </para>
 
    <para>
+    If any of <literal>GROUPING SETS</>, <literal>ROLLUP</> or
+    <literal>CUBE</> is present as a grouping element, then the
+    <literal>GROUP BY</> clause as a whole defines some number of
+    independent <replaceable>grouping sets</>.  The effect of this is
+    equivalent to constructing a <literal>UNION ALL</> between
+    subqueries with the individual grouping sets as their
+    <literal>GROUP BY</> clauses.  For further details on the handling
+    of grouping sets, see <xref linkend="queries-grouping-sets">.
+   </para>
+
+   <para>
     Aggregate functions, if any are used, are computed across all rows
     making up each group, producing a separate value for each group.
     (If there are aggregate functions but no <literal>GROUP BY</literal>
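[Not part of the patch: the spec's ROLLUP/CUBE expansions that the new doc text describes can be sketched as bitmask enumerations. `rollup_sets`/`cube_sets` are illustrative names, not patch functions; bit j stands for the (j+1)-th grouping element, so ROLLUP (e1, ..., en) yields the n+1 prefixes and CUBE (e1, ..., en) yields all 2^n subsets.]

```c
#include <assert.h>

/*
 * Sketch of the grouping-set expansion rules from the SQL spec.
 * Each grouping set is a bitmask over n grouping elements, where
 * bit j represents element e(j+1).
 */
static int
rollup_sets(int n, unsigned masks[])
{
	int			i;

	/* prefixes of length n, n-1, ..., 1, 0 */
	for (i = 0; i <= n; i++)
		masks[i] = (1u << (n - i)) - 1;
	return n + 1;
}

static int
cube_sets(int n, unsigned masks[])
{
	int			i;

	/* all 2^n subsets of the n elements */
	for (i = 0; i < (1 << n); i++)
		masks[i] = (unsigned) i;
	return 1 << n;
}
```

So ROLLUP (a,b,c) produces 4 grouping sets ((a,b,c), (a,b), (a), ()), and CUBE (a,b) produces all 4 subsets, matching the UNION ALL equivalence stated above.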
diff --git a/src/backend/catalog/sql_features.txt b/src/backend/catalog/sql_features.txt
index 3329264..db6a385 100644
--- a/src/backend/catalog/sql_features.txt
+++ b/src/backend/catalog/sql_features.txt
@@ -467,9 +467,9 @@ T331	Basic roles			YES
 T332	Extended roles			NO	mostly supported
 T341	Overloading of SQL-invoked functions and procedures			YES	
 T351	Bracketed SQL comments (/*...*/ comments)			YES	
-T431	Extended grouping capabilities			NO	
-T432	Nested and concatenated GROUPING SETS			NO	
-T433	Multiargument GROUPING function			NO	
+T431	Extended grouping capabilities			YES	
+T432	Nested and concatenated GROUPING SETS			YES	
+T433	Multiargument GROUPING function			YES	
 T434	GROUP BY DISTINCT			NO	
 T441	ABS and MOD functions			YES	
 T461	Symmetric BETWEEN predicate			YES	
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index a951c55..2ac3c61 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -82,6 +82,9 @@ static void show_merge_append_keys(MergeAppendState *mstate, List *ancestors,
 					   ExplainState *es);
 static void show_agg_keys(AggState *astate, List *ancestors,
 			  ExplainState *es);
+static void show_grouping_set_keys(PlanState *planstate, const char *qlabel,
+				int nkeys, AttrNumber *keycols, List *gsets,
+				List *ancestors, ExplainState *es);
 static void show_group_keys(GroupState *gstate, List *ancestors,
 				ExplainState *es);
 static void show_sort_group_keys(PlanState *planstate, const char *qlabel,
@@ -978,6 +981,10 @@ ExplainNode(PlanState *planstate, List *ancestors,
 					pname = "GroupAggregate";
 					strategy = "Sorted";
 					break;
+				case AGG_CHAINED:
+					pname = "ChainAggregate";
+					strategy = "Chained";
+					break;
 				case AGG_HASHED:
 					pname = "HashAggregate";
 					strategy = "Hashed";
@@ -1816,18 +1823,78 @@ show_agg_keys(AggState *astate, List *ancestors,
 {
 	Agg		   *plan = (Agg *) astate->ss.ps.plan;
 
-	if (plan->numCols > 0)
+	if (plan->numCols > 0 || plan->groupingSets)
 	{
 		/* The key columns refer to the tlist of the child plan */
 		ancestors = lcons(astate, ancestors);
-		show_sort_group_keys(outerPlanState(astate), "Group Key",
-							 plan->numCols, plan->grpColIdx,
-							 NULL, NULL, NULL,
-							 ancestors, es);
+
+		if (plan->groupingSets)
+			show_grouping_set_keys(outerPlanState(astate), "Grouping Sets",
+								   plan->numCols, plan->grpColIdx,
+								   plan->groupingSets,
+								   ancestors, es);
+		else
+			show_sort_group_keys(outerPlanState(astate), "Group Key",
+								 plan->numCols, plan->grpColIdx,
+								 NULL, NULL, NULL,
+								 ancestors, es);
+
 		ancestors = list_delete_first(ancestors);
 	}
 }
 
+static void
+show_grouping_set_keys(PlanState *planstate, const char *qlabel,
+					   int nkeys, AttrNumber *keycols, List *gsets,
+					   List *ancestors, ExplainState *es)
+{
+	Plan	   *plan = planstate->plan;
+	List	   *context;
+	bool		useprefix;
+	char	   *exprstr;
+	ListCell   *lc;
+
+	if (gsets == NIL)
+		return;
+
+	/* Set up deparsing context */
+	context = set_deparse_context_planstate(es->deparse_cxt,
+											(Node *) planstate,
+											ancestors);
+	useprefix = (list_length(es->rtable) > 1 || es->verbose);
+
+	ExplainOpenGroup("Grouping Sets", "Grouping Sets", false, es);
+
+	foreach(lc, gsets)
+	{
+		List	   *result = NIL;
+		ListCell   *lc2;
+
+		foreach(lc2, (List *) lfirst(lc))
+		{
+			Index		i = lfirst_int(lc2);
+			AttrNumber	keyresno = keycols[i];
+			TargetEntry *target = get_tle_by_resno(plan->targetlist,
+												   keyresno);
+
+			if (!target)
+				elog(ERROR, "no tlist entry for key %d", keyresno);
+			/* Deparse the expression, showing any top-level cast */
+			exprstr = deparse_expression((Node *) target->expr, context,
+										 useprefix, true);
+
+			result = lappend(result, exprstr);
+		}
+
+		if (result == NIL && es->format == EXPLAIN_FORMAT_TEXT)
+			ExplainPropertyText("Group Key", "()", es);
+		else
+			ExplainPropertyListNested("Group Key", result, es);
+	}
+
+	ExplainCloseGroup("Grouping Sets", "Grouping Sets", false, es);
+}
+
 /*
  * Show the grouping keys for a Group node.
  */
@@ -2444,6 +2511,52 @@ ExplainPropertyList(const char *qlabel, List *data, ExplainState *es)
 }
 
 /*
+ * Explain a property that takes the form of a list of unlabeled items within
+ * another list.  "data" is a list of C strings.
+ */
+void
+ExplainPropertyListNested(const char *qlabel, List *data, ExplainState *es)
+{
+	ListCell   *lc;
+	bool		first = true;
+
+	switch (es->format)
+	{
+		case EXPLAIN_FORMAT_TEXT:
+		case EXPLAIN_FORMAT_XML:
+			ExplainPropertyList(qlabel, data, es);
+			return;
+
+		case EXPLAIN_FORMAT_JSON:
+			ExplainJSONLineEnding(es);
+			appendStringInfoSpaces(es->str, es->indent * 2);
+			appendStringInfoChar(es->str, '[');
+			foreach(lc, data)
+			{
+				if (!first)
+					appendStringInfoString(es->str, ", ");
+				escape_json(es->str, (const char *) lfirst(lc));
+				first = false;
+			}
+			appendStringInfoChar(es->str, ']');
+			break;
+
+		case EXPLAIN_FORMAT_YAML:
+			ExplainYAMLLineStarting(es);
+			appendStringInfoString(es->str, "- [");
+			foreach(lc, data)
+			{
+				if (!first)
+					appendStringInfoString(es->str, ", ");
+				escape_yaml(es->str, (const char *) lfirst(lc));
+				first = false;
+			}
+			appendStringInfoChar(es->str, ']');
+			break;
+	}
+}
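[Aside, not patch code: the output shape ExplainPropertyListNested produces for the structured formats — an unlabeled bracketed list, `["..."]` in JSON and `- [...]` in YAML — can be modeled as below. `nested_list` is an illustrative name, and the escape_json/escape_yaml handling is elided, so items are assumed to need no escaping.]

```c
#include <assert.h>
#include <string.h>

/*
 * Build the bracketed-list representation of a set of items, in either
 * JSON style (quoted items) or YAML sequence style ("- [...]").
 */
static void
nested_list(char *buf, size_t bufsize, const char **items, int nitems, int yaml)
{
	int			i;

	buf[0] = '\0';
	strncat(buf, yaml ? "- [" : "[", bufsize - 1);
	for (i = 0; i < nitems; i++)
	{
		if (i > 0)
			strncat(buf, ", ", bufsize - strlen(buf) - 1);
		if (!yaml)
			strncat(buf, "\"", bufsize - strlen(buf) - 1);
		strncat(buf, items[i], bufsize - strlen(buf) - 1);
		if (!yaml)
			strncat(buf, "\"", bufsize - strlen(buf) - 1);
	}
	strncat(buf, "]", bufsize - strlen(buf) - 1);
}
```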
+
+/*
  * Explain a simple property.
  *
  * If "numeric" is true, the value is a number (or other value that
diff --git a/src/backend/executor/execQual.c b/src/backend/executor/execQual.c
index d94fe58..97bfbbc 100644
--- a/src/backend/executor/execQual.c
+++ b/src/backend/executor/execQual.c
@@ -74,6 +74,8 @@ static Datum ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
 				  bool *isNull, ExprDoneCond *isDone);
 static Datum ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
 					  bool *isNull, ExprDoneCond *isDone);
+static Datum ExecEvalScalarGroupedVarFast(ExprState *exprstate, ExprContext *econtext,
+					  bool *isNull, ExprDoneCond *isDone);
 static Datum ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate,
 					ExprContext *econtext,
 					bool *isNull, ExprDoneCond *isDone);
@@ -181,6 +183,9 @@ static Datum ExecEvalArrayCoerceExpr(ArrayCoerceExprState *astate,
 						bool *isNull, ExprDoneCond *isDone);
 static Datum ExecEvalCurrentOfExpr(ExprState *exprstate, ExprContext *econtext,
 					  bool *isNull, ExprDoneCond *isDone);
+static Datum ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
+						ExprContext *econtext,
+						bool *isNull, ExprDoneCond *isDone);
 
 
 /* ----------------------------------------------------------------
@@ -558,6 +563,8 @@ ExecEvalWindowFunc(WindowFuncExprState *wfunc, ExprContext *econtext,
  * Note: ExecEvalScalarVar is executed only the first time through in a given
  * plan; it changes the ExprState's function pointer to pass control directly
  * to ExecEvalScalarVarFast after making one-time checks.
+ *
+ * We share this code with GroupedVar for simplicity.
  * ----------------------------------------------------------------
  */
 static Datum
@@ -635,8 +642,24 @@ ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
 		}
 	}
 
-	/* Skip the checking on future executions of node */
-	exprstate->evalfunc = ExecEvalScalarVarFast;
+	if (IsA(variable, GroupedVar))
+	{
+		Assert(variable->varno == OUTER_VAR);
+
+		/* Skip the checking on future executions of node */
+		exprstate->evalfunc = ExecEvalScalarGroupedVarFast;
+
+		if (!bms_is_member(attnum, econtext->grouped_cols))
+		{
+			*isNull = true;
+			return (Datum) 0;
+		}
+	}
+	else
+	{
+		/* Skip the checking on future executions of node */
+		exprstate->evalfunc = ExecEvalScalarVarFast;
+	}
 
 	/* Fetch the value from the slot */
 	return slot_getattr(slot, attnum, isNull);
@@ -684,6 +707,31 @@ ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
 	return slot_getattr(slot, attnum, isNull);
 }
 
+static Datum
+ExecEvalScalarGroupedVarFast(ExprState *exprstate, ExprContext *econtext,
+							 bool *isNull, ExprDoneCond *isDone)
+{
+	GroupedVar *variable = (GroupedVar *) exprstate->expr;
+	TupleTableSlot *slot;
+	AttrNumber	attnum;
+
+	if (isDone)
+		*isDone = ExprSingleResult;
+
+	slot = econtext->ecxt_outertuple;
+
+	attnum = variable->varattno;
+
+	if (!bms_is_member(attnum, econtext->grouped_cols))
+	{
+		*isNull = true;
+		return (Datum) 0;
+	}
+
+	/* Fetch the value from the slot */
+	return slot_getattr(slot, attnum, isNull);
+}
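[For review purposes, the GroupedVar evaluation rule above — a grouping column that is not a member of the current grouping set reads as NULL regardless of the value in the slot — can be modeled like this. The Bitmapset of grouped columns is replaced by a plain bitmask and `grouped_var_value` is an illustrative name, not patch code.]

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Model of GroupedVar evaluation: return the slot value only if the
 * column's bit is set in grouped_cols; otherwise signal NULL.
 */
static int
grouped_var_value(int value, int attnum, unsigned grouped_cols, bool *isnull)
{
	if (!(grouped_cols & (1u << attnum)))
	{
		*isnull = true;
		return 0;
	}
	*isnull = false;
	return value;
}
```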
+
 /* ----------------------------------------------------------------
  *		ExecEvalWholeRowVar
  *
@@ -3016,6 +3064,44 @@ ExecEvalCaseTestExpr(ExprState *exprstate,
 	return econtext->caseValue_datum;
 }
 
+/*
+ * ExecEvalGroupingFuncExpr
+ *
+ * Return a bitmask with a bit for each (unevaluated) argument expression
+ * (rightmost arg is least significant bit).
+ *
+ * A bit is set if the corresponding expression is NOT part of the set of
+ * grouping expressions in the current grouping set.
+ */
+
+static Datum
+ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
+						 ExprContext *econtext,
+						 bool *isNull,
+						 ExprDoneCond *isDone)
+{
+	int result = 0;
+	int attnum = 0;
+	ListCell *lc;
+
+	if (isDone)
+		*isDone = ExprSingleResult;
+
+	*isNull = false;
+
+	foreach(lc, gstate->clauses)
+	{
+		attnum = lfirst_int(lc);
+
+		result = result << 1;
+
+		if (!bms_is_member(attnum, econtext->grouped_cols))
+			result = result | 1;
+	}
+
+	return Int32GetDatum(result);
+}
+
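[The GROUPING() bitmask computation above can be checked against a standalone model: walk the argument columns left to right, shifting the accumulated result and setting the low bit whenever the column is absent from the current grouping set, so the rightmost argument lands in the least significant bit. As before, the Bitmapset is modeled as a plain bitmask and `grouping_value` is an illustrative name.]

```c
#include <assert.h>

/*
 * Model of ExecEvalGroupingFuncExpr: one result bit per argument column,
 * set when that column is NOT part of the current grouping set.
 */
static int
grouping_value(const int *arg_cols, int nargs, unsigned grouped_cols)
{
	int			result = 0;
	int			i;

	for (i = 0; i < nargs; i++)
	{
		result <<= 1;
		if (!(grouped_cols & (1u << arg_cols[i])))
			result |= 1;
	}
	return result;
}
```

For GROUPING(a, b) with only a grouped, the result is binary 01 = 1; with both grouped it is 0; with neither, 3.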
 /* ----------------------------------------------------------------
  *		ExecEvalArray - ARRAY[] expressions
  * ----------------------------------------------------------------
@@ -4418,6 +4504,11 @@ ExecInitExpr(Expr *node, PlanState *parent)
 				state->evalfunc = ExecEvalScalarVar;
 			}
 			break;
+		case T_GroupedVar:
+			Assert(((Var *) node)->varattno != InvalidAttrNumber);
+			state = (ExprState *) makeNode(ExprState);
+			state->evalfunc = ExecEvalScalarVar;
+			break;
 		case T_Const:
 			state = (ExprState *) makeNode(ExprState);
 			state->evalfunc = ExecEvalConst;
@@ -4486,6 +4577,27 @@ ExecInitExpr(Expr *node, PlanState *parent)
 				state = (ExprState *) astate;
 			}
 			break;
+		case T_GroupingFunc:
+			{
+				GroupingFunc *grp_node = (GroupingFunc *) node;
+				GroupingFuncExprState *grp_state = makeNode(GroupingFuncExprState);
+				Agg		   *agg;
+
+				if (!parent || !IsA(parent->plan, Agg))
+					elog(ERROR, "parent of GROUPING is not an Agg node");
+
+				agg = (Agg *) parent->plan;
+
+				if (agg->groupingSets)
+					grp_state->clauses = grp_node->cols;
+				else
+					grp_state->clauses = NIL;
+
+				state = (ExprState *) grp_state;
+				state->evalfunc = (ExprStateEvalFunc) ExecEvalGroupingFuncExpr;
+			}
+			break;
 		case T_WindowFunc:
 			{
 				WindowFunc *wfunc = (WindowFunc *) node;
diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c
index 022041b..8709b68 100644
--- a/src/backend/executor/execUtils.c
+++ b/src/backend/executor/execUtils.c
@@ -151,6 +151,7 @@ CreateExecutorState(void)
 	estate->es_epqTupleSet = NULL;
 	estate->es_epqScanDone = NULL;
 
+	estate->agg_chain_head = NULL;
+
 	/*
 	 * Return the executor state structure
 	 */
@@ -651,9 +652,10 @@ get_last_attnums(Node *node, ProjectionInfo *projInfo)
 	/*
 	 * Don't examine the arguments or filters of Aggrefs or WindowFuncs,
 	 * because those do not represent expressions to be evaluated within the
-	 * overall targetlist's econtext.
+	 * overall targetlist's econtext.  GroupingFunc arguments are never
+	 * evaluated at all.
 	 */
-	if (IsA(node, Aggref))
+	if (IsA(node, Aggref) || IsA(node, GroupingFunc))
 		return false;
 	if (IsA(node, WindowFunc))
 		return false;
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index 9ff0eff..213c15c 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -45,15 +45,19 @@
  *	  needed to allow resolution of a polymorphic aggregate's result type.
  *
  *	  We compute aggregate input expressions and run the transition functions
- *	  in a temporary econtext (aggstate->tmpcontext).  This is reset at
- *	  least once per input tuple, so when the transvalue datatype is
+ *	  in a temporary econtext (aggstate->tmpcontext).  This is reset at least
+ *	  once per input tuple, so when the transvalue datatype is
  *	  pass-by-reference, we have to be careful to copy it into a longer-lived
- *	  memory context, and free the prior value to avoid memory leakage.
- *	  We store transvalues in the memory context aggstate->aggcontext,
- *	  which is also used for the hashtable structures in AGG_HASHED mode.
- *	  The node's regular econtext (aggstate->ss.ps.ps_ExprContext)
- *	  is used to run finalize functions and compute the output tuple;
- *	  this context can be reset once per output tuple.
+ *	  memory context, and free the prior value to avoid memory leakage.  We
+ *	  store transvalues in another set of econtexts, aggstate->aggcontexts (one
+ *	  per grouping set, see below), which are also used for the hashtable
+ *	  structures in AGG_HASHED mode.  These econtexts are rescanned, not just
+ *	  reset, at group boundaries so that aggregate transition functions can
+ *	  register shutdown callbacks via AggRegisterCallback.
+ *
+ *	  The node's regular econtext (aggstate->ss.ps.ps_ExprContext) is used to
+ *	  run finalize functions and compute the output tuple; this context can be
+ *	  reset once per output tuple.
  *
  *	  The executor's AggState node is passed as the fmgr "context" value in
  *	  all transfunc and finalfunc calls.  It is not recommended that the
@@ -84,6 +88,48 @@
  *	  need some fallback logic to use this, since there's no Aggref node
  *	  for a window function.)
  *
+ *	  Grouping sets:
+ *
+ *	  A list of grouping sets that is structurally equivalent to a ROLLUP
+ *	  clause (e.g. (a,b,c), (a,b), (a)) can be processed in a single pass over
+ *	  ordered data.  We do this by keeping a separate set of transition values
+ *	  for each grouping set being concurrently processed; for each input tuple
+ *	  we update them all, and on group boundaries we reset some initial subset
+ *	  of the states (the list of grouping sets is ordered from most specific to
+ *	  least specific).  One AGG_SORTED node thus handles any number of grouping
+ *	  sets as long as they share a sort order.
+ *
+ *	  To handle multiple grouping sets that _don't_ share a sort order, we use
+ *	  a different strategy.  An AGG_CHAINED node receives rows in sorted order
+ *	  and returns them unchanged, but computes transition values for its own
+ *	  list of grouping sets.  At group boundaries, rather than returning the
+ *	  aggregated row (which is incompatible with the input rows), it writes it
+ *	  to a side-channel in the form of a tuplestore.  Thus, a number of
+ *	  AGG_CHAINED nodes are associated with a single AGG_SORTED node (the
+ *	  "chain head"), which creates the side channel and, when it has returned
+ *	  all of its own data, returns the tuples from the tuplestore to its own
+ *	  caller.
+ *
+ *	  (Because the AGG_CHAINED node does not project aggregate values into the
+ *	  main executor path, its targetlist and qual are dummy, and it gets the
+ *	  real aggregate targetlist and qual from the chain head node.)
+ *
+ *	  In order to avoid excess memory consumption from a chain of alternating
+ *	  Sort and AGG_CHAINED nodes, we reset each child Sort node preemptively,
+ *	  allowing us to cap the memory usage for all the sorts in the chain at
+ *	  twice the usage for a single node.
+ *
+ *	  From the perspective of aggregate transition and final functions, the
+ *	  only issue regarding grouping sets is this: a single call site (flinfo)
+ *	  of an aggregate function may be used for updating several different
+ *	  transition values in turn. So the function must not cache in the flinfo
+ *	  anything which logically belongs as part of the transition value (most
+ *	  importantly, the memory context in which the transition value exists).
+ *	  The support API functions (AggCheckCallContext, AggRegisterCallback) are
+ *	  sensitive to the grouping set for which the aggregate function is
+ *	  currently being called.
+ *
+ *	  TODO: AGG_HASHED doesn't support multiple grouping sets yet.
  *
  * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
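[A minimal model of the AGG_SORTED strategy described in the comment above — one running transition value per grouping set, all advanced per input row, with a prefix of the (most-specific-first) sets emitted and reset at each group boundary. This is an illustrative sketch, not the executor code: `Row`, `rollup_sums`, a sum() stand-in for the transition function, and the hard-coded three sets (a,b), (a), () are all assumptions.]

```c
#include <assert.h>

/* Input rows are (a, b, v) triples, sorted by (a, b). */
typedef struct { int a, b, v; } Row;

/*
 * One pass over sorted data computing sums for the rollup sets
 * (a,b), (a), ().  out[] collects emitted group sums in the order
 * they are finalized (most specific set first at each boundary).
 */
static int
rollup_sums(const Row *rows, int nrows, int *out)
{
	long		sum[3] = {0, 0, 0};
	int			nout = 0;
	int			i, s;

	for (i = 0; i < nrows; i++)
	{
		if (i > 0)
		{
			int			nreset = 0;	/* how many sets see a boundary */

			if (rows[i].a != rows[i - 1].a)
				nreset = 2;		/* sets (a,b) and (a) */
			else if (rows[i].b != rows[i - 1].b)
				nreset = 1;		/* set (a,b) only */
			for (s = 0; s < nreset; s++)
			{
				out[nout++] = (int) sum[s];
				sum[s] = 0;
			}
		}
		for (s = 0; s < 3; s++)
			sum[s] += rows[i].v;
	}
	/* End of input: emit every set, most specific first. */
	for (s = 0; s < 3; s++)
		out[nout++] = (int) sum[s];
	return nout;
}
```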
@@ -241,9 +287,11 @@ typedef struct AggStatePerAggData
 	 * then at completion of the input tuple group, we scan the sorted values,
 	 * eliminate duplicates if needed, and run the transition function on the
 	 * rest.
+	 *
+	 * We need a separate tuplesort for each grouping set.
 	 */
 
-	Tuplesortstate *sortstate;	/* sort object, if DISTINCT or ORDER BY */
+	Tuplesortstate **sortstates;	/* sort objects, if DISTINCT or ORDER BY */
 
 	/*
 	 * This field is a pre-initialized FunctionCallInfo struct used for
@@ -304,7 +352,8 @@ typedef struct AggHashEntryData
 
 static void initialize_aggregates(AggState *aggstate,
 					  AggStatePerAgg peragg,
-					  AggStatePerGroup pergroup);
+					  AggStatePerGroup pergroup,
+					  int numReset);
 static void advance_transition_function(AggState *aggstate,
 							AggStatePerAgg peraggstate,
 							AggStatePerGroup pergroupstate);
@@ -325,6 +374,7 @@ static void build_hash_table(AggState *aggstate);
 static AggHashEntry lookup_hash_entry(AggState *aggstate,
 				  TupleTableSlot *inputslot);
 static TupleTableSlot *agg_retrieve_direct(AggState *aggstate);
+static TupleTableSlot *agg_retrieve_chained(AggState *aggstate);
 static void agg_fill_hash_table(AggState *aggstate);
 static TupleTableSlot *agg_retrieve_hash_table(AggState *aggstate);
 static Datum GetAggInitVal(Datum textInitVal, Oid transtype);
@@ -333,90 +383,109 @@ static Datum GetAggInitVal(Datum textInitVal, Oid transtype);
 /*
  * Initialize all aggregates for a new group of input values.
  *
+ * If there are multiple grouping sets, we initialize only the first numReset
+ * of them (the grouping sets are ordered so that the most specific one, which
+ * is reset most often, is first). As a convenience, if numReset is < 1, we
+ * reinitialize all sets.
+ *
  * When called, CurrentMemoryContext should be the per-query context.
  */
 static void
 initialize_aggregates(AggState *aggstate,
 					  AggStatePerAgg peragg,
-					  AggStatePerGroup pergroup)
+					  AggStatePerGroup pergroup,
+					  int numReset)
 {
 	int			aggno;
+	int         numGroupingSets = Max(aggstate->numsets, 1);
+	int         setno = 0;
+
+	if (numReset < 1)
+		numReset = numGroupingSets;
 
 	for (aggno = 0; aggno < aggstate->numaggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &peragg[aggno];
-		AggStatePerGroup pergroupstate = &pergroup[aggno];
 
 		/*
 		 * Start a fresh sort operation for each DISTINCT/ORDER BY aggregate.
 		 */
 		if (peraggstate->numSortCols > 0)
 		{
-			/*
-			 * In case of rescan, maybe there could be an uncompleted sort
-			 * operation?  Clean it up if so.
-			 */
-			if (peraggstate->sortstate)
-				tuplesort_end(peraggstate->sortstate);
+			for (setno = 0; setno < numReset; setno++)
+			{
+				/*
+				 * In case of rescan, maybe there could be an uncompleted sort
+				 * operation?  Clean it up if so.
+				 */
+				if (peraggstate->sortstates[setno])
+					tuplesort_end(peraggstate->sortstates[setno]);
 
-			/*
-			 * We use a plain Datum sorter when there's a single input column;
-			 * otherwise sort the full tuple.  (See comments for
-			 * process_ordered_aggregate_single.)
-			 *
-			 * In the future, we should consider forcing the
-			 * tuplesort_begin_heap() case when the abbreviated key
-			 * optimization can thereby be used, even when numInputs is 1.
-			 */
-			peraggstate->sortstate =
-				(peraggstate->numInputs == 1) ?
-				tuplesort_begin_datum(peraggstate->evaldesc->attrs[0]->atttypid,
-									  peraggstate->sortOperators[0],
-									  peraggstate->sortCollations[0],
-									  peraggstate->sortNullsFirst[0],
-									  work_mem, false) :
-				tuplesort_begin_heap(peraggstate->evaldesc,
-									 peraggstate->numSortCols,
-									 peraggstate->sortColIdx,
-									 peraggstate->sortOperators,
-									 peraggstate->sortCollations,
-									 peraggstate->sortNullsFirst,
-									 work_mem, false);
+				/*
+				 * We use a plain Datum sorter when there's a single input column;
+				 * otherwise sort the full tuple.  (See comments for
+				 * process_ordered_aggregate_single.)
+				 *
+				 * In the future, we should consider forcing the
+				 * tuplesort_begin_heap() case when the abbreviated key
+				 * optimization can thereby be used, even when numInputs is 1.
+				 */
+				peraggstate->sortstates[setno] =
+					(peraggstate->numInputs == 1) ?
+					tuplesort_begin_datum(peraggstate->evaldesc->attrs[0]->atttypid,
+										  peraggstate->sortOperators[0],
+										  peraggstate->sortCollations[0],
+										  peraggstate->sortNullsFirst[0],
+										  work_mem, false) :
+					tuplesort_begin_heap(peraggstate->evaldesc,
+										 peraggstate->numSortCols,
+										 peraggstate->sortColIdx,
+										 peraggstate->sortOperators,
+										 peraggstate->sortCollations,
+										 peraggstate->sortNullsFirst,
+										 work_mem, false);
+			}
 		}
 
-		/*
-		 * (Re)set transValue to the initial value.
-		 *
-		 * Note that when the initial value is pass-by-ref, we must copy it
-		 * (into the aggcontext) since we will pfree the transValue later.
-		 */
-		if (peraggstate->initValueIsNull)
-			pergroupstate->transValue = peraggstate->initValue;
-		else
+		for (setno = 0; setno < numReset; setno++)
 		{
-			MemoryContext oldContext;
+			AggStatePerGroup pergroupstate = &pergroup[aggno + (setno * (aggstate->numaggs))];
 
-			oldContext = MemoryContextSwitchTo(aggstate->aggcontext);
-			pergroupstate->transValue = datumCopy(peraggstate->initValue,
-												  peraggstate->transtypeByVal,
-												  peraggstate->transtypeLen);
-			MemoryContextSwitchTo(oldContext);
+			/*
+			 * (Re)set transValue to the initial value.
+			 *
+			 * Note that when the initial value is pass-by-ref, we must copy it
+			 * (into the aggcontext) since we will pfree the transValue later.
+			 */
+			if (peraggstate->initValueIsNull)
+				pergroupstate->transValue = peraggstate->initValue;
+			else
+			{
+				MemoryContext oldContext;
+
+				oldContext = MemoryContextSwitchTo(aggstate->aggcontexts[setno]->ecxt_per_tuple_memory);
+				pergroupstate->transValue = datumCopy(peraggstate->initValue,
+													  peraggstate->transtypeByVal,
+													  peraggstate->transtypeLen);
+				MemoryContextSwitchTo(oldContext);
+			}
+			pergroupstate->transValueIsNull = peraggstate->initValueIsNull;
+
+			/*
+			 * If the initial value for the transition state doesn't exist in the
+			 * pg_aggregate table then we will let the first non-NULL value
+			 * returned from the outer procNode become the initial value. (This is
+			 * useful for aggregates like max() and min().) The noTransValue flag
+			 * signals that we still need to do this.
+			 */
+			pergroupstate->noTransValue = peraggstate->initValueIsNull;
 		}
-		pergroupstate->transValueIsNull = peraggstate->initValueIsNull;
-
-		/*
-		 * If the initial value for the transition state doesn't exist in the
-		 * pg_aggregate table then we will let the first non-NULL value
-		 * returned from the outer procNode become the initial value. (This is
-		 * useful for aggregates like max() and min().) The noTransValue flag
-		 * signals that we still need to do this.
-		 */
-		pergroupstate->noTransValue = peraggstate->initValueIsNull;
 	}
 }
 
 /*
- * Given new input value(s), advance the transition function of an aggregate.
+ * Given new input value(s), advance the transition function of one aggregate
+ * within one grouping set only (already set in aggstate->current_set).
  *
  * The new values (and null flags) have been preloaded into argument positions
  * 1 and up in peraggstate->transfn_fcinfo, so that we needn't copy them again
@@ -459,7 +528,7 @@ advance_transition_function(AggState *aggstate,
 			 * We must copy the datum into aggcontext if it is pass-by-ref. We
 			 * do not need to pfree the old transValue, since it's NULL.
 			 */
-			oldContext = MemoryContextSwitchTo(aggstate->aggcontext);
+			oldContext = MemoryContextSwitchTo(aggstate->aggcontexts[aggstate->current_set]->ecxt_per_tuple_memory);
 			pergroupstate->transValue = datumCopy(fcinfo->arg[1],
 												  peraggstate->transtypeByVal,
 												  peraggstate->transtypeLen);
@@ -507,7 +576,7 @@ advance_transition_function(AggState *aggstate,
 	{
 		if (!fcinfo->isnull)
 		{
-			MemoryContextSwitchTo(aggstate->aggcontext);
+			MemoryContextSwitchTo(aggstate->aggcontexts[aggstate->current_set]->ecxt_per_tuple_memory);
 			newVal = datumCopy(newVal,
 							   peraggstate->transtypeByVal,
 							   peraggstate->transtypeLen);
@@ -534,11 +603,13 @@ static void
 advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 {
 	int			aggno;
+	int         setno = 0;
+	int         numGroupingSets = Max(aggstate->numsets, 1);
+	int         numAggs = aggstate->numaggs;
 
-	for (aggno = 0; aggno < aggstate->numaggs; aggno++)
+	for (aggno = 0; aggno < numAggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &aggstate->peragg[aggno];
-		AggStatePerGroup pergroupstate = &pergroup[aggno];
 		ExprState  *filter = peraggstate->aggrefstate->aggfilter;
 		int			numTransInputs = peraggstate->numTransInputs;
 		int			i;
@@ -582,13 +653,16 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 					continue;
 			}
 
-			/* OK, put the tuple into the tuplesort object */
-			if (peraggstate->numInputs == 1)
-				tuplesort_putdatum(peraggstate->sortstate,
-								   slot->tts_values[0],
-								   slot->tts_isnull[0]);
-			else
-				tuplesort_puttupleslot(peraggstate->sortstate, slot);
+			for (setno = 0; setno < numGroupingSets; setno++)
+			{
+				/* OK, put the tuple into the tuplesort object */
+				if (peraggstate->numInputs == 1)
+					tuplesort_putdatum(peraggstate->sortstates[setno],
+									   slot->tts_values[0],
+									   slot->tts_isnull[0]);
+				else
+					tuplesort_puttupleslot(peraggstate->sortstates[setno], slot);
+			}
 		}
 		else
 		{
@@ -604,7 +678,14 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 				fcinfo->argnull[i + 1] = slot->tts_isnull[i];
 			}
 
-			advance_transition_function(aggstate, peraggstate, pergroupstate);
+			for (setno = 0; setno < numGroupingSets; setno++)
+			{
+				AggStatePerGroup pergroupstate = &pergroup[aggno + (setno * numAggs)];
+
+				aggstate->current_set = setno;
+
+				advance_transition_function(aggstate, peraggstate, pergroupstate);
+			}
 		}
 	}
 }
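[The pergroup addressing used in advance_aggregates above keeps a single flat array of numGroupingSets blocks of numAggs transition states, indexed as `pergroup[aggno + setno * numAggs]`. A small model of that layout (with the transition function replaced by a plain sum for the sketch):]

```c
#include <assert.h>

/* Row-major index into the flat per-group state array. */
static int
pergroup_index(int aggno, int setno, int numAggs)
{
	return aggno + setno * numAggs;
}

/*
 * Advance every (aggregate, grouping set) state for one input value,
 * as advance_aggregates does per tuple; "+=" stands in for transfn.
 */
static void
advance_all(int *pergroup, int numAggs, int numSets, int input)
{
	int			aggno, setno;

	for (aggno = 0; aggno < numAggs; aggno++)
		for (setno = 0; setno < numSets; setno++)
			pergroup[pergroup_index(aggno, setno, numAggs)] += input;
}
```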
@@ -627,6 +708,9 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
  * is around 300% faster.  (The speedup for by-reference types is less
  * but still noticeable.)
  *
+ * This function handles only one grouping set (already set in
+ * aggstate->current_set).
+ *
  * When called, CurrentMemoryContext should be the per-query context.
  */
 static void
@@ -646,7 +730,7 @@ process_ordered_aggregate_single(AggState *aggstate,
 
 	Assert(peraggstate->numDistinctCols < 2);
 
-	tuplesort_performsort(peraggstate->sortstate);
+	tuplesort_performsort(peraggstate->sortstates[aggstate->current_set]);
 
 	/* Load the column into argument 1 (arg 0 will be transition value) */
 	newVal = fcinfo->arg + 1;
@@ -658,7 +742,7 @@ process_ordered_aggregate_single(AggState *aggstate,
 	 * pfree them when they are no longer needed.
 	 */
 
-	while (tuplesort_getdatum(peraggstate->sortstate, true,
+	while (tuplesort_getdatum(peraggstate->sortstates[aggstate->current_set], true,
 							  newVal, isNull))
 	{
 		/*
@@ -702,8 +786,8 @@ process_ordered_aggregate_single(AggState *aggstate,
 	if (!oldIsNull && !peraggstate->inputtypeByVal)
 		pfree(DatumGetPointer(oldVal));
 
-	tuplesort_end(peraggstate->sortstate);
-	peraggstate->sortstate = NULL;
+	tuplesort_end(peraggstate->sortstates[aggstate->current_set]);
+	peraggstate->sortstates[aggstate->current_set] = NULL;
 }
 
 /*
@@ -713,6 +797,9 @@ process_ordered_aggregate_single(AggState *aggstate,
  * sort, read out the values in sorted order, and run the transition
  * function on each value (applying DISTINCT if appropriate).
  *
+ * This function handles only one grouping set (already set in
+ * aggstate->current_set).
+ *
  * When called, CurrentMemoryContext should be the per-query context.
  */
 static void
@@ -729,13 +816,13 @@ process_ordered_aggregate_multi(AggState *aggstate,
 	bool		haveOldValue = false;
 	int			i;
 
-	tuplesort_performsort(peraggstate->sortstate);
+	tuplesort_performsort(peraggstate->sortstates[aggstate->current_set]);
 
 	ExecClearTuple(slot1);
 	if (slot2)
 		ExecClearTuple(slot2);
 
-	while (tuplesort_gettupleslot(peraggstate->sortstate, true, slot1))
+	while (tuplesort_gettupleslot(peraggstate->sortstates[aggstate->current_set], true, slot1))
 	{
 		/*
 		 * Extract the first numTransInputs columns as datums to pass to the
@@ -783,13 +870,16 @@ process_ordered_aggregate_multi(AggState *aggstate,
 	if (slot2)
 		ExecClearTuple(slot2);
 
-	tuplesort_end(peraggstate->sortstate);
-	peraggstate->sortstate = NULL;
+	tuplesort_end(peraggstate->sortstates[aggstate->current_set]);
+	peraggstate->sortstates[aggstate->current_set] = NULL;
 }
 
 /*
  * Compute the final value of one aggregate.
  *
+ * This function handles only one grouping set (already set in
+ * aggstate->current_set).
+ *
  * The finalfunction will be run, and the result delivered, in the
  * output-tuple context; caller's CurrentMemoryContext does not matter.
  */
@@ -836,7 +926,7 @@ finalize_aggregate(AggState *aggstate,
 		/* set up aggstate->curperagg for AggGetAggref() */
 		aggstate->curperagg = peraggstate;
 
-		InitFunctionCallInfoData(fcinfo, &(peraggstate->finalfn),
+		InitFunctionCallInfoData(fcinfo, &peraggstate->finalfn,
 								 numFinalArgs,
 								 peraggstate->aggCollation,
 								 (void *) aggstate, NULL);
@@ -920,7 +1010,8 @@ find_unaggregated_cols_walker(Node *node, Bitmapset **colnos)
 		*colnos = bms_add_member(*colnos, var->varattno);
 		return false;
 	}
-	if (IsA(node, Aggref))		/* do not descend into aggregate exprs */
+	if (IsA(node, Aggref) || IsA(node, GroupingFunc))
+		/* do not descend into aggregate exprs */
 		return false;
 	return expression_tree_walker(node, find_unaggregated_cols_walker,
 								  (void *) colnos);
@@ -950,7 +1041,7 @@ build_hash_table(AggState *aggstate)
 											  aggstate->hashfunctions,
 											  node->numGroups,
 											  entrysize,
-											  aggstate->aggcontext,
+											  aggstate->aggcontexts[0]->ecxt_per_tuple_memory,
 											  tmpmem);
 }
 
@@ -1061,7 +1152,7 @@ lookup_hash_entry(AggState *aggstate, TupleTableSlot *inputslot)
 	if (isnew)
 	{
 		/* initialize aggregates for new tuple group */
-		initialize_aggregates(aggstate, aggstate->peragg, entry->pergroup);
+		initialize_aggregates(aggstate, aggstate->peragg, entry->pergroup, 0);
 	}
 
 	return entry;
@@ -1083,6 +1174,8 @@ lookup_hash_entry(AggState *aggstate, TupleTableSlot *inputslot)
 TupleTableSlot *
 ExecAgg(AggState *node)
 {
+	TupleTableSlot *result;
+
 	/*
 	 * Check to see if we're still projecting out tuples from a previous agg
 	 * tuple (because there is a function-returning-set in the projection
@@ -1090,7 +1183,6 @@ ExecAgg(AggState *node)
 	 */
 	if (node->ss.ps.ps_TupFromTlist)
 	{
-		TupleTableSlot *result;
 		ExprDoneCond isDone;
 
 		result = ExecProject(node->ss.ps.ps_ProjInfo, &isDone);
@@ -1101,22 +1193,48 @@ ExecAgg(AggState *node)
 	}
 
 	/*
-	 * Exit if nothing left to do.  (We must do the ps_TupFromTlist check
-	 * first, because in some cases agg_done gets set before we emit the final
-	 * aggregate tuple, and we have to finish running SRFs for it.)
+	 * We must do the ps_TupFromTlist check first, because in some cases
+	 * agg_done gets set before we emit the final aggregate tuple, and we
+	 * have to finish running SRFs for it.
 	 */
-	if (node->agg_done)
-		return NULL;
+	if (!node->agg_done)
+	{
+		/* Dispatch based on strategy */
+		switch (((Agg *) node->ss.ps.plan)->aggstrategy)
+		{
+			case AGG_HASHED:
+				if (!node->table_filled)
+					agg_fill_hash_table(node);
+				result = agg_retrieve_hash_table(node);
+				break;
+			case AGG_CHAINED:
+				result = agg_retrieve_chained(node);
+				break;
+			default:
+				result = agg_retrieve_direct(node);
+				break;
+		}
+
+		if (!TupIsNull(result))
+			return result;
+	}
 
-	/* Dispatch based on strategy */
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+	/*
+	 * We've completed all locally computed projections, now we drain the side
+	 * channel of projections from chained nodes if any.
+	 */
+	if (!node->chain_done)
 	{
-		if (!node->table_filled)
-			agg_fill_hash_table(node);
-		return agg_retrieve_hash_table(node);
+		Assert(node->chain_tuplestore);
+		result = node->ss.ps.ps_ResultTupleSlot;
+		ExecClearTuple(result);
+		if (tuplestore_gettupleslot(node->chain_tuplestore,
+									true, false, result))
+			return result;
+		node->chain_done = true;
 	}
-	else
-		return agg_retrieve_direct(node);
+
+	return NULL;
 }
 
 /*
@@ -1136,6 +1254,12 @@ agg_retrieve_direct(AggState *aggstate)
 	TupleTableSlot *outerslot;
 	TupleTableSlot *firstSlot;
 	int			aggno;
+	bool		hasGroupingSets = aggstate->numsets > 0;
+	int			numGroupingSets = Max(aggstate->numsets, 1);
+	int			currentSet = 0;
+	int			nextSetSize = 0;
+	int			numReset = 1;
+	int			i;
 
 	/*
 	 * get state info from node
@@ -1154,39 +1278,20 @@ agg_retrieve_direct(AggState *aggstate)
 	/*
 	 * We loop retrieving groups until we find one matching
 	 * aggstate->ss.ps.qual
+	 *
+	 * For grouping sets, we have the invariant that aggstate->projected_set is
+	 * either -1 (initial call) or the index (starting from 0) in gset_lengths
+	 * for the group we just completed (either by projecting a row or by
+	 * discarding it in the qual).
 	 */
 	while (!aggstate->agg_done)
 	{
 		/*
-		 * If we don't already have the first tuple of the new group, fetch it
-		 * from the outer plan.
-		 */
-		if (aggstate->grp_firstTuple == NULL)
-		{
-			outerslot = ExecProcNode(outerPlan);
-			if (!TupIsNull(outerslot))
-			{
-				/*
-				 * Make a copy of the first input tuple; we will use this for
-				 * comparisons (in group mode) and for projection.
-				 */
-				aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
-			}
-			else
-			{
-				/* outer plan produced no tuples at all */
-				aggstate->agg_done = true;
-				/* If we are grouping, we should produce no tuples too */
-				if (node->aggstrategy != AGG_PLAIN)
-					return NULL;
-			}
-		}
-
-		/*
 		 * Clear the per-output-tuple context for each group, as well as
 		 * aggcontext (which contains any pass-by-ref transvalues of the old
-		 * group).  We also clear any child contexts of the aggcontext; some
-		 * aggregate functions store working state in such contexts.
+		 * group).  Some aggregate functions store working state in child
+		 * contexts; those now get reset automatically without us needing to
+		 * do anything special.
 		 *
 		 * We use ReScanExprContext not just ResetExprContext because we want
 		 * any registered shutdown callbacks to be called.  That allows
@@ -1195,90 +1300,222 @@ agg_retrieve_direct(AggState *aggstate)
 		 */
 		ReScanExprContext(econtext);
 
-		MemoryContextResetAndDeleteChildren(aggstate->aggcontext);
+		/*
+		 * Determine how many grouping sets need to be reset at this boundary.
+		 */
+		if (aggstate->projected_set >= 0 && aggstate->projected_set < numGroupingSets)
+			numReset = aggstate->projected_set + 1;
+		else
+			numReset = numGroupingSets;
+
+		for (i = 0; i < numReset; i++)
+		{
+			ReScanExprContext(aggstate->aggcontexts[i]);
+		}
+
+		/* Check if input is complete and there are no more groups to project. */
+		if (aggstate->input_done == true
+			&& aggstate->projected_set >= (numGroupingSets - 1))
+		{
+			aggstate->agg_done = true;
+			break;
+		}
 
 		/*
-		 * Initialize working state for a new input tuple group
+		 * Get the number of columns in the next grouping set after the last
+	 * projected one (if any); that is the number of grouping columns we must
+	 * compare to see whether we crossed that set's boundary as well.
 		 */
-		initialize_aggregates(aggstate, peragg, pergroup);
+		if (aggstate->projected_set >= 0 && aggstate->projected_set < (numGroupingSets - 1))
+			nextSetSize = aggstate->gset_lengths[aggstate->projected_set + 1];
+		else
+			nextSetSize = 0;
 
-		if (aggstate->grp_firstTuple != NULL)
+		/*-
+		 * If a subgroup for the current grouping set is present, project it.
+		 *
+		 * We have a new group if:
+		 *  - we're out of input but haven't projected all grouping sets
+		 *    (checked above)
+		 * OR
+		 *    - we already projected a row that wasn't from the last grouping
+		 *      set
+		 *    AND
+		 *    - the next grouping set has at least one grouping column (since
+		 *      empty grouping sets project only once input is exhausted)
+		 *    AND
+		 *    - the previous and pending rows differ on the grouping columns
+		 *      of the next grouping set
+		 */
+		if (aggstate->input_done
+			|| (node->aggstrategy == AGG_SORTED
+				&& aggstate->projected_set != -1
+				&& aggstate->projected_set < (numGroupingSets - 1)
+				&& nextSetSize > 0
+				&& !execTuplesMatch(econtext->ecxt_outertuple,
+									tmpcontext->ecxt_outertuple,
+									nextSetSize,
+									node->grpColIdx,
+									aggstate->eqfunctions,
+									tmpcontext->ecxt_per_tuple_memory)))
+		{
+			aggstate->projected_set += 1;
+
+			Assert(aggstate->projected_set < numGroupingSets);
+			Assert(nextSetSize > 0 || aggstate->input_done);
+		}
+		else
 		{
 			/*
-			 * Store the copied first input tuple in the tuple table slot
-			 * reserved for it.  The tuple will be deleted when it is cleared
-			 * from the slot.
+			 * We no longer care what group we just projected; the next
+			 * projection will always be the first (or only) grouping set
+			 * (unless the input proves to be empty).
 			 */
-			ExecStoreTuple(aggstate->grp_firstTuple,
-						   firstSlot,
-						   InvalidBuffer,
-						   true);
-			aggstate->grp_firstTuple = NULL;	/* don't keep two pointers */
-
-			/* set up for first advance_aggregates call */
-			tmpcontext->ecxt_outertuple = firstSlot;
+			aggstate->projected_set = 0;
 
 			/*
-			 * Process each outer-plan tuple, and then fetch the next one,
-			 * until we exhaust the outer plan or cross a group boundary.
+			 * If we don't already have the first tuple of the new group, fetch
+			 * it from the outer plan.
 			 */
-			for (;;)
+			if (aggstate->grp_firstTuple == NULL)
 			{
-				advance_aggregates(aggstate, pergroup);
-
-				/* Reset per-input-tuple context after each tuple */
-				ResetExprContext(tmpcontext);
-
 				outerslot = ExecProcNode(outerPlan);
-				if (TupIsNull(outerslot))
+				if (!TupIsNull(outerslot))
 				{
-					/* no more outer-plan tuples available */
-					aggstate->agg_done = true;
-					break;
+					/*
+					 * Make a copy of the first input tuple; we will use this for
+					 * comparisons (in group mode) and for projection.
+					 */
+					aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
 				}
-				/* set up for next advance_aggregates call */
-				tmpcontext->ecxt_outertuple = outerslot;
-
-				/*
-				 * If we are grouping, check whether we've crossed a group
-				 * boundary.
-				 */
-				if (node->aggstrategy == AGG_SORTED)
+				else
 				{
-					if (!execTuplesMatch(firstSlot,
-										 outerslot,
-										 node->numCols, node->grpColIdx,
-										 aggstate->eqfunctions,
-										 tmpcontext->ecxt_per_tuple_memory))
+					/* outer plan produced no tuples at all */
+					if (hasGroupingSets)
 					{
 						/*
-						 * Save the first input tuple of the next group.
+						 * If there was no input at all, we need to project
+						 * rows only if there are grouping sets of size 0.
+						 * Note that this implies that there can't be any
+						 * references to ungrouped Vars, which would otherwise
+						 * cause issues with the empty output slot.
 						 */
-						aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
-						break;
+						aggstate->input_done = true;
+
+						while (aggstate->gset_lengths[aggstate->projected_set] > 0)
+						{
+							aggstate->projected_set += 1;
+							if (aggstate->projected_set >= numGroupingSets)
+							{
+								aggstate->agg_done = true;
+								return NULL;
+							}
+						}
+					}
+					else
+					{
+						aggstate->agg_done = true;
+						/* If we are grouping, we should produce no tuples too */
+						if (node->aggstrategy != AGG_PLAIN)
+							return NULL;
+					}
+				}
+			}
+
+			/*
+			 * Initialize working state for a new input tuple group.
+			 */
+			initialize_aggregates(aggstate, peragg, pergroup, numReset);
+
+			if (aggstate->grp_firstTuple != NULL)
+			{
+				/*
+				 * Store the copied first input tuple in the tuple table slot
+				 * reserved for it.  The tuple will be deleted when it is cleared
+				 * from the slot.
+				 */
+				ExecStoreTuple(aggstate->grp_firstTuple,
+							   firstSlot,
+							   InvalidBuffer,
+							   true);
+				aggstate->grp_firstTuple = NULL;	/* don't keep two pointers */
+
+				/* set up for first advance_aggregates call */
+				tmpcontext->ecxt_outertuple = firstSlot;
+
+				/*
+				 * Process each outer-plan tuple, and then fetch the next one,
+				 * until we exhaust the outer plan or cross a group boundary.
+				 */
+				for (;;)
+				{
+					advance_aggregates(aggstate, pergroup);
+
+					/* Reset per-input-tuple context after each tuple */
+					ResetExprContext(tmpcontext);
+
+					outerslot = ExecProcNode(outerPlan);
+					if (TupIsNull(outerslot))
+					{
+						/* no more outer-plan tuples available */
+						if (hasGroupingSets)
+						{
+							aggstate->input_done = true;
+							break;
+						}
+						else
+						{
+							aggstate->agg_done = true;
+							break;
+						}
+					}
+					/* set up for next advance_aggregates call */
+					tmpcontext->ecxt_outertuple = outerslot;
+
+					/*
+					 * If we are grouping, check whether we've crossed a group
+					 * boundary.
+					 */
+					if (node->aggstrategy == AGG_SORTED)
+					{
+						if (!execTuplesMatch(firstSlot,
+											 outerslot,
+											 node->numCols,
+											 node->grpColIdx,
+											 aggstate->eqfunctions,
+											 tmpcontext->ecxt_per_tuple_memory))
+						{
+							aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
+							break;
+						}
 					}
 				}
 			}
+
+			/*
+			 * Use the representative input tuple for any references to
+			 * non-aggregated input columns in aggregate direct args, the node
+			 * qual, and the tlist.  (If we are not grouping, and there are no
+			 * input rows at all, we will come here with an empty firstSlot ...
+			 * but if not grouping, there can't be any references to
+			 * non-aggregated input columns, so no problem.)
+			 */
+			econtext->ecxt_outertuple = firstSlot;
 		}
 
-		/*
-		 * Use the representative input tuple for any references to
-		 * non-aggregated input columns in aggregate direct args, the node
-		 * qual, and the tlist.  (If we are not grouping, and there are no
-		 * input rows at all, we will come here with an empty firstSlot ...
-		 * but if not grouping, there can't be any references to
-		 * non-aggregated input columns, so no problem.)
-		 */
-		econtext->ecxt_outertuple = firstSlot;
+		Assert(aggstate->projected_set >= 0);
+
+		aggstate->current_set = currentSet = aggstate->projected_set;
+
+		if (hasGroupingSets)
+			econtext->grouped_cols = aggstate->grouped_cols[currentSet];
 
-		/*
-		 * Done scanning input tuple group. Finalize each aggregate
-		 * calculation, and stash results in the per-output-tuple context.
-		 */
 		for (aggno = 0; aggno < aggstate->numaggs; aggno++)
 		{
 			AggStatePerAgg peraggstate = &peragg[aggno];
-			AggStatePerGroup pergroupstate = &pergroup[aggno];
+			AggStatePerGroup pergroupstate;
+
+			pergroupstate = &pergroup[aggno + (currentSet * (aggstate->numaggs))];
 
 			if (peraggstate->numSortCols > 0)
 			{
@@ -1326,6 +1563,174 @@ agg_retrieve_direct(AggState *aggstate)
 	return NULL;
 }
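
A toy model (entirely separate from the patched executor code, with hypothetical names) of the boundary logic agg_retrieve_direct implements above: each grouping set is a prefix of the sort key, so comparing the previous and pending rows on the first gset_lengths[s] columns tells us whether set s's group has closed. The real code does this one set at a time with execTuplesMatch(); here we fold it into one function for illustration:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Given the grouping-column values of the previous and current input rows,
 * return how many leading grouping sets must be projected at this boundary.
 * gset_lengths[] holds the per-set prefix lengths, longest first (e.g.
 * {2, 1, 0} for ROLLUP(a, b)).  A set whose prefix still matches is still
 * accumulating; all sets before it have closed.
 */
static int
sets_to_project(const int *prev, const int *cur, int ncols,
				const int *gset_lengths, int nsets)
{
	int			s;

	for (s = 0; s < nsets; s++)
	{
		int			len = gset_lengths[s];
		bool		match = true;
		int			i;

		for (i = 0; i < len && i < ncols; i++)
			if (prev[i] != cur[i])
				match = false;
		if (match)
			break;				/* this set's group is still open */
	}
	return s;					/* sets 0 .. s-1 must be projected */
}
```

For ROLLUP(a, b), a change in both columns closes the (a,b) and (a) groups; a change only in b closes just (a,b); the empty set closes only when input is exhausted, which the executor handles separately via input_done.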
 
+
+/*
+ * ExecAgg for chained case (pullthrough mode)
+ */
+static TupleTableSlot *
+agg_retrieve_chained(AggState *aggstate)
+{
+	Agg		   *node = (Agg *) aggstate->ss.ps.plan;
+	ExprContext *econtext = aggstate->ss.ps.ps_ExprContext;
+	ExprContext *tmpcontext = aggstate->tmpcontext;
+	Datum	   *aggvalues = econtext->ecxt_aggvalues;
+	bool	   *aggnulls = econtext->ecxt_aggnulls;
+	AggStatePerAgg peragg = aggstate->peragg;
+	AggStatePerGroup pergroup = aggstate->pergroup;
+	TupleTableSlot *outerslot;
+	TupleTableSlot *firstSlot = aggstate->ss.ss_ScanTupleSlot;
+	int			   aggno;
+	int            numGroupingSets = Max(aggstate->numsets, 1);
+	int            currentSet = 0;
+
+	/*
+	 * The invariants here are:
+	 *
+	 *  - when called, we've already projected every result that might have
+	 *    been generated by previous rows, and if this is not the first row,
+	 *    then grp_firstTuple has the representative input row;
+	 *
+	 *  - we must pull the outer plan exactly once and return that tuple. If
+	 *    the outer plan ends, we project whatever needs projecting.
+	 */
+
+	outerslot = ExecProcNode(outerPlanState(aggstate));
+
+	/*
+	 * If this is the first call and the input is empty, there is nothing to do.
+	 */
+
+	if (TupIsNull(firstSlot) && TupIsNull(outerslot))
+	{
+		aggstate->agg_done = true;
+		return outerslot;
+	}
+
+	/*
+	 * See if we need to project anything. (We don't need to worry about
+	 * grouping sets of size 0; the planner doesn't give us those.)
+	 */
+
+	econtext->ecxt_outertuple = firstSlot;
+
+	while (!TupIsNull(firstSlot)
+		   && (TupIsNull(outerslot)
+			   || !execTuplesMatch(firstSlot,
+								   outerslot,
+								   aggstate->gset_lengths[currentSet],
+								   node->grpColIdx,
+								   aggstate->eqfunctions,
+								   tmpcontext->ecxt_per_tuple_memory)))
+	{
+		aggstate->current_set = aggstate->projected_set = currentSet;
+
+		econtext->grouped_cols = aggstate->grouped_cols[currentSet];
+
+		for (aggno = 0; aggno < aggstate->numaggs; aggno++)
+		{
+			AggStatePerAgg peraggstate = &peragg[aggno];
+			AggStatePerGroup pergroupstate;
+
+			pergroupstate = &pergroup[aggno + (currentSet * (aggstate->numaggs))];
+
+			if (peraggstate->numSortCols > 0)
+			{
+				if (peraggstate->numInputs == 1)
+					process_ordered_aggregate_single(aggstate,
+													 peraggstate,
+													 pergroupstate);
+				else
+					process_ordered_aggregate_multi(aggstate,
+													peraggstate,
+													pergroupstate);
+			}
+
+			finalize_aggregate(aggstate, peraggstate, pergroupstate,
+							   &aggvalues[aggno], &aggnulls[aggno]);
+		}
+
+		/*
+		 * Check the qual (HAVING clause); if the group does not match, ignore
+		 * it.
+		 */
+		if (ExecQual(aggstate->ss.ps.qual, econtext, false))
+		{
+			/*
+			 * Form a projection tuple using the aggregate results
+			 * and the representative input tuple.
+			 */
+			TupleTableSlot *result;
+			ExprDoneCond isDone;
+
+			do
+			{
+				result = ExecProject(aggstate->ss.ps.ps_ProjInfo, &isDone);
+
+				if (isDone != ExprEndResult)
+				{
+					tuplestore_puttupleslot(aggstate->chain_tuplestore,
+											result);
+				}
+			}
+			while (isDone == ExprMultipleResult);
+		}
+		else
+			InstrCountFiltered1(aggstate, 1);
+
+		ReScanExprContext(tmpcontext);
+		ReScanExprContext(econtext);
+		ReScanExprContext(aggstate->aggcontexts[currentSet]);
+		if (++currentSet >= numGroupingSets)
+			break;
+	}
+
+	if (TupIsNull(outerslot))
+	{
+		aggstate->agg_done = true;
+
+		/*
+		 * We're out of input, so the calling node has all the data it needs
+		 * and (if it's a Sort) is about to sort it. We preemptively request a
+		 * rescan of our input plan here, so that Sort nodes containing data
+		 * that is no longer needed will free their memory.  The intention here
+		 * is to bound the peak memory requirement for the whole chain to
+		 * 2*work_mem if REWIND was not requested, or 3*work_mem if REWIND was
+		 * requested and we had to supply a Sort node for the original data
+		 * source plan.
+		 */
+
+		ExecReScan(outerPlanState(aggstate));
+
+		return NULL;
+	}
+
+	/*
+	 * If this is the first tuple, store it and initialize everything.
+	 * Otherwise re-init any aggregates we projected above.
+	 */
+
+	if (TupIsNull(firstSlot))
+	{
+		ExecCopySlot(firstSlot, outerslot);
+		initialize_aggregates(aggstate, peragg, pergroup, numGroupingSets);
+	}
+	else if (currentSet > 0)
+	{
+		ExecCopySlot(firstSlot, outerslot);
+		initialize_aggregates(aggstate, peragg, pergroup, currentSet);
+	}
+
+	tmpcontext->ecxt_outertuple = outerslot;
+
+	/* Actually accumulate the current tuple. */
+	advance_aggregates(aggstate, pergroup);
+
+	/* Reset per-input-tuple context after each tuple */
+	ResetExprContext(tmpcontext);
+
+	return outerslot;
+}
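
Both agg_retrieve_direct and agg_retrieve_chained above index transition state as pergroup[aggno + currentSet * numaggs]. The sketch below (simplified types, hypothetical names; the real AggStatePerGroupData carries transValue, transValueIsNull, noTransValue) shows that flat two-dimensional layout: one slot per (aggregate, grouping set) pair, all zeroed up front just as ExecInitAgg's palloc0 does:

```c
#include <assert.h>
#include <stdlib.h>

typedef struct PerGroupData
{
	long		transValue;		/* stand-in for the real transition state */
} PerGroupData;

/* One zeroed entry per (aggregate, grouping set) pair, in one allocation. */
static PerGroupData *
alloc_pergroup(int numaggs, int numsets)
{
	return calloc((size_t) numaggs * numsets, sizeof(PerGroupData));
}

/* Address the state for aggregate aggno within grouping set setno. */
static PerGroupData *
pergroup_for(PerGroupData *pergroup, int aggno, int setno, int numaggs)
{
	return &pergroup[aggno + setno * numaggs];
}
```

Keeping all sets' state in one array lets advance_aggregates update every concurrently open group for one input tuple, while finalization picks out just the slice for current_set.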
+
 /*
  * ExecAgg for hashed case: phase 1, read input and build hash table
  */
@@ -1493,12 +1898,17 @@ AggState *
 ExecInitAgg(Agg *node, EState *estate, int eflags)
 {
 	AggState   *aggstate;
+	AggState   *save_chain_head = NULL;
 	AggStatePerAgg peragg;
 	Plan	   *outerPlan;
 	ExprContext *econtext;
 	int			numaggs,
 				aggno;
 	ListCell   *l;
+	int			numGroupingSets = 1;
+	int			currentsortno = 0;
+	int			i = 0;
+	int			j = 0;
 
 	/* check for unsupported flags */
 	Assert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK)));
@@ -1512,40 +1922,78 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 
 	aggstate->aggs = NIL;
 	aggstate->numaggs = 0;
+	aggstate->numsets = 0;
 	aggstate->eqfunctions = NULL;
 	aggstate->hashfunctions = NULL;
+	aggstate->projected_set = -1;
+	aggstate->current_set = 0;
 	aggstate->peragg = NULL;
 	aggstate->curperagg = NULL;
 	aggstate->agg_done = false;
+	aggstate->input_done = false;
+	aggstate->chain_done = true;
 	aggstate->pergroup = NULL;
 	aggstate->grp_firstTuple = NULL;
 	aggstate->hashtable = NULL;
+	aggstate->chain_depth = 0;
+	aggstate->chain_rescan = 0;
+	aggstate->chain_eflags = eflags & EXEC_FLAG_REWIND;
+	aggstate->chain_top = false;
+	aggstate->chain_head = NULL;
+	aggstate->chain_tuplestore = NULL;
+
+	if (node->groupingSets)
+	{
+		Assert(node->aggstrategy != AGG_HASHED);
+
+		numGroupingSets = list_length(node->groupingSets);
+		aggstate->numsets = numGroupingSets;
+		aggstate->gset_lengths = palloc(numGroupingSets * sizeof(int));
+		aggstate->grouped_cols = palloc(numGroupingSets * sizeof(Bitmapset *));
+
+		i = 0;
+		foreach(l, node->groupingSets)
+		{
+			int current_length = list_length(lfirst(l));
+			Bitmapset *cols = NULL;
+
+			/* planner forces this to be correct */
+			for (j = 0; j < current_length; ++j)
+				cols = bms_add_member(cols, node->grpColIdx[j]);
+
+			aggstate->grouped_cols[i] = cols;
+			aggstate->gset_lengths[i] = current_length;
+			++i;
+		}
+	}
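
The grouped_cols bitmapsets built just above drive the GROUPING() operation: set i grants "grouped" status to the first gset_lengths[i] entries of grpColIdx. A toy model (plain bitmask instead of Bitmapset, hypothetical names, not code from this patch) of both the construction and the spec-defined GROUPING() evaluation, where a result bit is 1 when the corresponding column is NOT grouped in the current set:

```c
#include <assert.h>

/* Columns grouped by a set are a prefix of grpColIdx of length setlen. */
static unsigned
grouped_cols_mask(const int *grpColIdx, int setlen)
{
	unsigned	mask = 0;
	int			j;

	for (j = 0; j < setlen; j++)
		mask |= 1u << grpColIdx[j];
	return mask;
}

/* GROUPING(args...): left-to-right, bit = 1 iff the column is ungrouped. */
static int
grouping_value(unsigned grouped_cols, const int *args, int nargs)
{
	int			result = 0;
	int			i;

	for (i = 0; i < nargs; i++)
	{
		result <<= 1;
		if (!(grouped_cols & (1u << args[i])))
			result |= 1;
	}
	return result;
}
```

With ROLLUP(a, b) and a, b in columns 1 and 2, the length-1 set groups only a, so GROUPING(a, b) yields 0b01 there, 0 in the full set, and 0b11 in the grand-total set.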
+
+	aggstate->aggcontexts = (ExprContext **)
+		palloc0(sizeof(ExprContext *) * numGroupingSets);
 
 	/*
-	 * Create expression contexts.  We need two, one for per-input-tuple
-	 * processing and one for per-output-tuple processing.  We cheat a little
-	 * by using ExecAssignExprContext() to build both.
+	 * Create expression contexts.  We need three or more: one for
+	 * per-input-tuple processing, one for per-output-tuple processing, and one
+	 * for each grouping set.  The per-tuple memory context of the
+	 * per-grouping-set ExprContexts (aggcontexts) replaces the standalone
+	 * memory context formerly used to hold transition values.  We cheat a
+	 * little by using ExecAssignExprContext() to build all of them.
+	 *
+	 * NOTE: the details of what is stored in aggcontexts and what is stored in
+	 * the regular per-query memory context are driven by a simple decision: we
+	 * want to reset the aggcontext at group boundaries (if not hashing) and in
+	 * ExecReScanAgg to recover no-longer-wanted space.
 	 */
 	ExecAssignExprContext(estate, &aggstate->ss.ps);
 	aggstate->tmpcontext = aggstate->ss.ps.ps_ExprContext;
+
+	for (i = 0; i < numGroupingSets; ++i)
+	{
+		ExecAssignExprContext(estate, &aggstate->ss.ps);
+		aggstate->aggcontexts[i] = aggstate->ss.ps.ps_ExprContext;
+	}
+
 	ExecAssignExprContext(estate, &aggstate->ss.ps);
 
 	/*
-	 * We also need a long-lived memory context for holding hashtable data
-	 * structures and transition values.  NOTE: the details of what is stored
-	 * in aggcontext and what is stored in the regular per-query memory
-	 * context are driven by a simple decision: we want to reset the
-	 * aggcontext at group boundaries (if not hashing) and in ExecReScanAgg to
-	 * recover no-longer-wanted space.
-	 */
-	aggstate->aggcontext =
-		AllocSetContextCreate(CurrentMemoryContext,
-							  "AggContext",
-							  ALLOCSET_DEFAULT_MINSIZE,
-							  ALLOCSET_DEFAULT_INITSIZE,
-							  ALLOCSET_DEFAULT_MAXSIZE);
-
-	/*
 	 * tuple table initialization
 	 */
 	ExecInitScanTupleSlot(estate, &aggstate->ss);
@@ -1561,24 +2009,78 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	 * that is true, we don't need to worry about evaluating the aggs in any
 	 * particular order.
 	 */
-	aggstate->ss.ps.targetlist = (List *)
-		ExecInitExpr((Expr *) node->plan.targetlist,
-					 (PlanState *) aggstate);
-	aggstate->ss.ps.qual = (List *)
-		ExecInitExpr((Expr *) node->plan.qual,
-					 (PlanState *) aggstate);
+	if (node->aggstrategy == AGG_CHAINED)
+	{
+		AggState   *chain_head = estate->agg_chain_head;
+		Agg		   *chain_head_plan;
+
+		Assert(chain_head);
+
+		aggstate->chain_head = chain_head;
+		chain_head->chain_depth++;
+
+		chain_head_plan = (Agg *) chain_head->ss.ps.plan;
+
+		/*
+		 * If we reached the originally declared depth, we must be the "top"
+		 * (furthest from plan root) node in the chain.
+		 */
+		if (chain_head_plan->chain_depth == chain_head->chain_depth)
+			aggstate->chain_top = true;
+
+		/*
+		 * Snarf the real targetlist and qual from the chain head node
+		 */
+		aggstate->ss.ps.targetlist = (List *)
+			ExecInitExpr((Expr *) chain_head_plan->plan.targetlist,
+						 (PlanState *) aggstate);
+		aggstate->ss.ps.qual = (List *)
+			ExecInitExpr((Expr *) chain_head_plan->plan.qual,
+						 (PlanState *) aggstate);
+	}
+	else
+	{
+		aggstate->ss.ps.targetlist = (List *)
+			ExecInitExpr((Expr *) node->plan.targetlist,
+						 (PlanState *) aggstate);
+		aggstate->ss.ps.qual = (List *)
+			ExecInitExpr((Expr *) node->plan.qual,
+						 (PlanState *) aggstate);
+	}
+
+	if (node->chain_depth > 0)
+	{
+		save_chain_head = estate->agg_chain_head;
+		estate->agg_chain_head = aggstate;
+		aggstate->chain_tuplestore = tuplestore_begin_heap(false, false, work_mem);
+		aggstate->chain_done = false;
+	}
 
 	/*
-	 * initialize child nodes
+	 * Initialize child nodes.
 	 *
 	 * If we are doing a hashed aggregation then the child plan does not need
 	 * to handle REWIND efficiently; see ExecReScanAgg.
+	 *
+	 * If we have more than one associated ChainAggregate node, then we turn
+	 * off REWIND and restore it in the chain top, so that the intermediate
+	 * Sort nodes will discard their data on rescan.  This lets us put an upper
+	 * bound on the memory usage, even when we have a long chain of sorts (at
+	 * the cost of having to re-sort on rewind, which is why we don't do it
+	 * for only one node where no memory would be saved).
 	 */
-	if (node->aggstrategy == AGG_HASHED)
+	if (aggstate->chain_top)
+		eflags |= aggstate->chain_head->chain_eflags;
+	else if (node->aggstrategy == AGG_HASHED || node->chain_depth > 1)
 		eflags &= ~EXEC_FLAG_REWIND;
 	outerPlan = outerPlan(node);
 	outerPlanState(aggstate) = ExecInitNode(outerPlan, estate, eflags);
 
+	if (node->chain_depth > 0)
+	{
+		estate->agg_chain_head = save_chain_head;
+	}
+
 	/*
 	 * initialize source tuple type.
 	 */
@@ -1587,8 +2089,35 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	/*
 	 * Initialize result tuple type and projection info.
 	 */
-	ExecAssignResultTypeFromTL(&aggstate->ss.ps);
-	ExecAssignProjectionInfo(&aggstate->ss.ps, NULL);
+	if (node->aggstrategy == AGG_CHAINED)
+	{
+		PlanState  *head_ps = &aggstate->chain_head->ss.ps;
+		bool		hasoid;
+
+		/*
+		 * We must calculate this the same way that the chain head does,
+		 * regardless of intermediate nodes, for consistency
+		 */
+		if (!ExecContextForcesOids(head_ps, &hasoid))
+			hasoid = false;
+
+		ExecAssignResultType(&aggstate->ss.ps, ExecGetScanType(&aggstate->ss));
+		ExecSetSlotDescriptor(aggstate->hashslot,
+							  ExecTypeFromTL(head_ps->plan->targetlist, hasoid));
+		aggstate->ss.ps.ps_ProjInfo =
+			ExecBuildProjectionInfo(aggstate->ss.ps.targetlist,
+									aggstate->ss.ps.ps_ExprContext,
+									aggstate->hashslot,
+									NULL);
+
+		aggstate->chain_tuplestore = aggstate->chain_head->chain_tuplestore;
+		Assert(aggstate->chain_tuplestore);
+	}
+	else
+	{
+		ExecAssignResultTypeFromTL(&aggstate->ss.ps);
+		ExecAssignProjectionInfo(&aggstate->ss.ps, NULL);
+	}
 
 	aggstate->ss.ps.ps_TupFromTlist = false;
 
@@ -1649,7 +2178,10 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	{
 		AggStatePerGroup pergroup;
 
-		pergroup = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData) * numaggs);
+		pergroup = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData)
+											  * numaggs
+											  * numGroupingSets);
+
 		aggstate->pergroup = pergroup;
 	}
 
@@ -1712,7 +2244,10 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 		/* Begin filling in the peraggstate data */
 		peraggstate->aggrefstate = aggrefstate;
 		peraggstate->aggref = aggref;
-		peraggstate->sortstate = NULL;
+		peraggstate->sortstates = (Tuplesortstate **)
+			palloc0(sizeof(Tuplesortstate *) * numGroupingSets);
+
+		for (currentsortno = 0; currentsortno < numGroupingSets; currentsortno++)
+			peraggstate->sortstates[currentsortno] = NULL;
 
 		/* Fetch the pg_aggregate row */
 		aggTuple = SearchSysCache1(AGGFNOID,
@@ -2020,31 +2555,38 @@ ExecEndAgg(AggState *node)
 {
 	PlanState  *outerPlan;
 	int			aggno;
+	int			numGroupingSets = Max(node->numsets, 1);
+	int			setno;
 
 	/* Make sure we have closed any open tuplesorts */
 	for (aggno = 0; aggno < node->numaggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &node->peragg[aggno];
 
-		if (peraggstate->sortstate)
-			tuplesort_end(peraggstate->sortstate);
+		for (setno = 0; setno < numGroupingSets; setno++)
+		{
+			if (peraggstate->sortstates[setno])
+				tuplesort_end(peraggstate->sortstates[setno]);
+		}
 	}
 
 	/* And ensure any agg shutdown callbacks have been called */
-	ReScanExprContext(node->ss.ps.ps_ExprContext);
+	for (setno = 0; setno < numGroupingSets; setno++)
+		ReScanExprContext(node->aggcontexts[setno]);
+
+	if (node->chain_tuplestore && node->chain_depth > 0)
+		tuplestore_end(node->chain_tuplestore);
 
 	/*
-	 * Free both the expr contexts.
+	 * We don't actually free any ExprContexts here (see comment in
+	 * ExecFreeExprContext); just unlinking the output one from the plan node
+	 * suffices.
 	 */
 	ExecFreeExprContext(&node->ss.ps);
-	node->ss.ps.ps_ExprContext = node->tmpcontext;
-	ExecFreeExprContext(&node->ss.ps);
 
 	/* clean up tuple table */
 	ExecClearTuple(node->ss.ss_ScanTupleSlot);
 
-	MemoryContextDelete(node->aggcontext);
-
 	outerPlan = outerPlanState(node);
 	ExecEndNode(outerPlan);
 }
@@ -2053,13 +2595,16 @@ void
 ExecReScanAgg(AggState *node)
 {
 	ExprContext *econtext = node->ss.ps.ps_ExprContext;
+	Agg		   *aggnode = (Agg *) node->ss.ps.plan;
 	int			aggno;
+	int         numGroupingSets = Max(node->numsets, 1);
+	int         setno;
 
 	node->agg_done = false;
 
 	node->ss.ps.ps_TupFromTlist = false;
 
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+	if (aggnode->aggstrategy == AGG_HASHED)
 	{
 		/*
 		 * In the hashed case, if we haven't yet built the hash table then we
@@ -2085,14 +2630,34 @@ ExecReScanAgg(AggState *node)
 	/* Make sure we have closed any open tuplesorts */
 	for (aggno = 0; aggno < node->numaggs; aggno++)
 	{
-		AggStatePerAgg peraggstate = &node->peragg[aggno];
+		for (setno = 0; setno < numGroupingSets; setno++)
+		{
+			AggStatePerAgg peraggstate = &node->peragg[aggno];
 
-		if (peraggstate->sortstate)
-			tuplesort_end(peraggstate->sortstate);
-		peraggstate->sortstate = NULL;
+			if (peraggstate->sortstates[setno])
+			{
+				tuplesort_end(peraggstate->sortstates[setno]);
+				peraggstate->sortstates[setno] = NULL;
+			}
+		}
 	}
 
-	/* We don't need to ReScanExprContext here; ExecReScan already did it */
+	/*
+	 * We don't need to ReScanExprContext the output tuple context here;
+	 * ExecReScan already did it. But we do need to reset our per-grouping-set
+	 * contexts, which may have transvalues stored in them. (We use rescan
+	 * rather than just reset because transfns may have registered callbacks
+	 * that need to be run now.)
+	 *
+	 * Note that with AGG_HASHED, the hash table is allocated in a sub-context
+	 * of the aggcontext. This used to be an issue, but now resetting a
+	 * context automatically deletes its sub-contexts too.
+	 */
+
+	for (setno = 0; setno < numGroupingSets; setno++)
+	{
+		ReScanExprContext(node->aggcontexts[setno]);
+	}
 
 	/* Release first tuple of group, if we have made a copy */
 	if (node->grp_firstTuple != NULL)
@@ -2100,21 +2665,13 @@ ExecReScanAgg(AggState *node)
 		heap_freetuple(node->grp_firstTuple);
 		node->grp_firstTuple = NULL;
 	}
+	ExecClearTuple(node->ss.ss_ScanTupleSlot);
 
 	/* Forget current agg values */
 	MemSet(econtext->ecxt_aggvalues, 0, sizeof(Datum) * node->numaggs);
 	MemSet(econtext->ecxt_aggnulls, 0, sizeof(bool) * node->numaggs);
 
-	/*
-	 * Release all temp storage. Note that with AGG_HASHED, the hash table is
-	 * allocated in a sub-context of the aggcontext. We're going to rebuild
-	 * the hash table from scratch, so we need to use
-	 * MemoryContextResetAndDeleteChildren() to avoid leaking the old hash
-	 * table's memory context header.
-	 */
-	MemoryContextResetAndDeleteChildren(node->aggcontext);
-
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+	if (aggnode->aggstrategy == AGG_HASHED)
 	{
 		/* Rebuild an empty hash table */
 		build_hash_table(node);
@@ -2126,15 +2683,54 @@ ExecReScanAgg(AggState *node)
 		 * Reset the per-group state (in particular, mark transvalues null)
 		 */
 		MemSet(node->pergroup, 0,
-			   sizeof(AggStatePerGroupData) * node->numaggs);
+			   sizeof(AggStatePerGroupData) * node->numaggs * numGroupingSets);
+
+		node->input_done = false;
 	}
 
 	/*
-	 * if chgParam of subnode is not null then plan will be re-scanned by
-	 * first ExecProcNode.
+	 * If we're in a chain, let the chain head know whether we
+	 * rescanned. (This signal is meaningless when the rescan is caused by
+	 * chgParam, but the chain head only consults it when rescanning
+	 * explicitly with chgParam empty.)
+	 */
+
+	if (aggnode->aggstrategy == AGG_CHAINED)
+		node->chain_head->chain_rescan++;
+
+	/*
+	 * If we're a chain head, we reset the tuplestore if parameters changed,
+	 * and let subplans repopulate it.
+	 *
+	 * If we're a chain head and the subplan parameters did NOT change, then
+	 * whether we need to reset the tuplestore depends on whether anything
+	 * (specifically the Sort nodes) protects the child ChainAggs from rescan.
+	 * Since this is hard to know in advance, we have the ChainAggs signal us
+	 * as to whether the reset is needed.  Since we're preempting the rescan
+	 * in some cases, we only check whether any ChainAgg node was reached in
+	 * the rescan; the others may have already been reset.
 	 */
-	if (node->ss.ps.lefttree->chgParam == NULL)
+	if (aggnode->chain_depth > 0)
+	{
+		if (node->ss.ps.lefttree->chgParam)
+			tuplestore_clear(node->chain_tuplestore);
+		else
+		{
+			node->chain_rescan = 0;
+
+			ExecReScan(node->ss.ps.lefttree);
+
+			if (node->chain_rescan > 0)
+				tuplestore_clear(node->chain_tuplestore);
+			else
+				tuplestore_rescan(node->chain_tuplestore);
+		}
+		node->chain_done = false;
+	}
+	else if (node->ss.ps.lefttree->chgParam == NULL)
+	{
 		ExecReScan(node->ss.ps.lefttree);
+	}
 }
 
 
@@ -2154,8 +2750,11 @@ ExecReScanAgg(AggState *node)
  * values could conceivably appear in future.)
  *
  * If aggcontext isn't NULL, the function also stores at *aggcontext the
- * identity of the memory context that aggregate transition values are
- * being stored in.
+ * identity of the memory context that aggregate transition values are being
+ * stored in.  Note that the same aggregate call site (flinfo) may be called
+ * interleaved on different transition values in different contexts, so it's
+ * not kosher to cache aggcontext under fn_extra.  It is, however, kosher to
+ * cache it in the transvalue itself (for internal-type transvalues).
  */
 int
 AggCheckCallContext(FunctionCallInfo fcinfo, MemoryContext *aggcontext)
@@ -2163,7 +2762,11 @@ AggCheckCallContext(FunctionCallInfo fcinfo, MemoryContext *aggcontext)
 	if (fcinfo->context && IsA(fcinfo->context, AggState))
 	{
 		if (aggcontext)
-			*aggcontext = ((AggState *) fcinfo->context)->aggcontext;
+		{
+			AggState    *aggstate = ((AggState *) fcinfo->context);
+			ExprContext *cxt  = aggstate->aggcontexts[aggstate->current_set];
+			*aggcontext = cxt->ecxt_per_tuple_memory;
+		}
 		return AGG_CONTEXT_AGGREGATE;
 	}
 	if (fcinfo->context && IsA(fcinfo->context, WindowAggState))
@@ -2247,8 +2850,9 @@ AggRegisterCallback(FunctionCallInfo fcinfo,
 	if (fcinfo->context && IsA(fcinfo->context, AggState))
 	{
 		AggState   *aggstate = (AggState *) fcinfo->context;
+		ExprContext *cxt  = aggstate->aggcontexts[aggstate->current_set];
 
-		RegisterExprContextCallback(aggstate->ss.ps.ps_ExprContext, func, arg);
+		RegisterExprContextCallback(cxt, func, arg);
 
 		return;
 	}
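
The fn_extra caveat documented in the AggCheckCallContext hunk above can be illustrated outside the executor. Here is a minimal Python sketch (all names hypothetical, not the actual backend API) of why a cache attached to the shared call site breaks when one call site is invoked interleaved across per-set contexts, while state carried on the transition value itself stays correct:

```python
# Simulation of the caveat: with grouping sets, one aggregate call site
# (one "flinfo") is invoked interleaved for several grouping sets, each
# with its own memory context.

class CallSite:
    """Stands in for FmgrInfo; fn_extra is per call site, NOT per set."""
    def __init__(self):
        self.fn_extra = None

def transfn_bad(site, state, ctx):
    # WRONG: caches the first context it ever sees under fn_extra.
    if site.fn_extra is None:
        site.fn_extra = ctx
    return (state[0] + 1, site.fn_extra)  # (count, believed context)

def transfn_good(site, state, ctx):
    # OK: the context travels with the transition value itself.
    return (state[0] + 1, ctx)

site = CallSite()
contexts = ["ctx_set0", "ctx_set1"]
bad = [(0, None), (0, None)]
good = [(0, None), (0, None)]
for row in range(3):                 # each input row advances both sets
    for setno, ctx in enumerate(contexts):
        bad[setno] = transfn_bad(site, bad[setno], ctx)
        good[setno] = transfn_good(site, good[setno], ctx)

print(bad[1][1])    # stale context inherited from the other grouping set
print(good[1][1])   # the set's own context
```

The second grouping set ends up believing it lives in the first set's context under the fn_extra scheme, which is exactly why the comment says to cache aggcontext in the transvalue instead.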
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 291e6a7..ff87476 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -804,6 +804,7 @@ _copyAgg(const Agg *from)
 	CopyPlanFields((const Plan *) from, (Plan *) newnode);
 
 	COPY_SCALAR_FIELD(aggstrategy);
+	COPY_SCALAR_FIELD(chain_depth);
 	COPY_SCALAR_FIELD(numCols);
 	if (from->numCols > 0)
 	{
@@ -811,6 +812,7 @@ _copyAgg(const Agg *from)
 		COPY_POINTER_FIELD(grpOperators, from->numCols * sizeof(Oid));
 	}
 	COPY_SCALAR_FIELD(numGroups);
+	COPY_NODE_FIELD(groupingSets);
 
 	return newnode;
 }
@@ -1097,6 +1099,27 @@ _copyVar(const Var *from)
 }
 
 /*
+ * _copyGroupedVar
+ */
+static GroupedVar *
+_copyGroupedVar(const GroupedVar *from)
+{
+	GroupedVar		   *newnode = makeNode(GroupedVar);
+
+	COPY_SCALAR_FIELD(varno);
+	COPY_SCALAR_FIELD(varattno);
+	COPY_SCALAR_FIELD(vartype);
+	COPY_SCALAR_FIELD(vartypmod);
+	COPY_SCALAR_FIELD(varcollid);
+	COPY_SCALAR_FIELD(varlevelsup);
+	COPY_SCALAR_FIELD(varnoold);
+	COPY_SCALAR_FIELD(varoattno);
+	COPY_LOCATION_FIELD(location);
+
+	return newnode;
+}
+
+/*
  * _copyConst
  */
 static Const *
@@ -1179,6 +1202,23 @@ _copyAggref(const Aggref *from)
 }
 
 /*
+ * _copyGroupingFunc
+ */
+static GroupingFunc *
+_copyGroupingFunc(const GroupingFunc *from)
+{
+	GroupingFunc	   *newnode = makeNode(GroupingFunc);
+
+	COPY_NODE_FIELD(args);
+	COPY_NODE_FIELD(refs);
+	COPY_NODE_FIELD(cols);
+	COPY_SCALAR_FIELD(agglevelsup);
+	COPY_LOCATION_FIELD(location);
+
+	return newnode;
+}
+
+/*
  * _copyWindowFunc
  */
 static WindowFunc *
@@ -2083,6 +2123,18 @@ _copySortGroupClause(const SortGroupClause *from)
 	return newnode;
 }
 
+static GroupingSet *
+_copyGroupingSet(const GroupingSet *from)
+{
+	GroupingSet		   *newnode = makeNode(GroupingSet);
+
+	COPY_SCALAR_FIELD(kind);
+	COPY_NODE_FIELD(content);
+	COPY_LOCATION_FIELD(location);
+
+	return newnode;
+}
+
 static WindowClause *
 _copyWindowClause(const WindowClause *from)
 {
@@ -2545,6 +2597,7 @@ _copyQuery(const Query *from)
 	COPY_NODE_FIELD(withCheckOptions);
 	COPY_NODE_FIELD(returningList);
 	COPY_NODE_FIELD(groupClause);
+	COPY_NODE_FIELD(groupingSets);
 	COPY_NODE_FIELD(havingQual);
 	COPY_NODE_FIELD(windowClause);
 	COPY_NODE_FIELD(distinctClause);
@@ -4153,6 +4206,9 @@ copyObject(const void *from)
 		case T_Var:
 			retval = _copyVar(from);
 			break;
+		case T_GroupedVar:
+			retval = _copyGroupedVar(from);
+			break;
 		case T_Const:
 			retval = _copyConst(from);
 			break;
@@ -4162,6 +4218,9 @@ copyObject(const void *from)
 		case T_Aggref:
 			retval = _copyAggref(from);
 			break;
+		case T_GroupingFunc:
+			retval = _copyGroupingFunc(from);
+			break;
 		case T_WindowFunc:
 			retval = _copyWindowFunc(from);
 			break;
@@ -4722,6 +4781,9 @@ copyObject(const void *from)
 		case T_SortGroupClause:
 			retval = _copySortGroupClause(from);
 			break;
+		case T_GroupingSet:
+			retval = _copyGroupingSet(from);
+			break;
 		case T_WindowClause:
 			retval = _copyWindowClause(from);
 			break;
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index fcd58ad..b34bf67 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -153,6 +153,22 @@ _equalVar(const Var *a, const Var *b)
 }
 
 static bool
+_equalGroupedVar(const GroupedVar *a, const GroupedVar *b)
+{
+	COMPARE_SCALAR_FIELD(varno);
+	COMPARE_SCALAR_FIELD(varattno);
+	COMPARE_SCALAR_FIELD(vartype);
+	COMPARE_SCALAR_FIELD(vartypmod);
+	COMPARE_SCALAR_FIELD(varcollid);
+	COMPARE_SCALAR_FIELD(varlevelsup);
+	COMPARE_SCALAR_FIELD(varnoold);
+	COMPARE_SCALAR_FIELD(varoattno);
+	COMPARE_LOCATION_FIELD(location);
+
+	return true;
+}
+
+static bool
 _equalConst(const Const *a, const Const *b)
 {
 	COMPARE_SCALAR_FIELD(consttype);
@@ -208,6 +224,21 @@ _equalAggref(const Aggref *a, const Aggref *b)
 }
 
 static bool
+_equalGroupingFunc(const GroupingFunc *a, const GroupingFunc *b)
+{
+	COMPARE_NODE_FIELD(args);
+
+	/*
+	 * We must not compare the refs or cols fields.
+	 */
+
+	COMPARE_SCALAR_FIELD(agglevelsup);
+	COMPARE_LOCATION_FIELD(location);
+
+	return true;
+}
+
+static bool
 _equalWindowFunc(const WindowFunc *a, const WindowFunc *b)
 {
 	COMPARE_SCALAR_FIELD(winfnoid);
@@ -870,6 +901,7 @@ _equalQuery(const Query *a, const Query *b)
 	COMPARE_NODE_FIELD(withCheckOptions);
 	COMPARE_NODE_FIELD(returningList);
 	COMPARE_NODE_FIELD(groupClause);
+	COMPARE_NODE_FIELD(groupingSets);
 	COMPARE_NODE_FIELD(havingQual);
 	COMPARE_NODE_FIELD(windowClause);
 	COMPARE_NODE_FIELD(distinctClause);
@@ -2387,6 +2419,16 @@ _equalSortGroupClause(const SortGroupClause *a, const SortGroupClause *b)
 }
 
 static bool
+_equalGroupingSet(const GroupingSet *a, const GroupingSet *b)
+{
+	COMPARE_SCALAR_FIELD(kind);
+	COMPARE_NODE_FIELD(content);
+	COMPARE_LOCATION_FIELD(location);
+
+	return true;
+}
+
+static bool
 _equalWindowClause(const WindowClause *a, const WindowClause *b)
 {
 	COMPARE_STRING_FIELD(name);
@@ -2591,6 +2633,9 @@ equal(const void *a, const void *b)
 		case T_Var:
 			retval = _equalVar(a, b);
 			break;
+		case T_GroupedVar:
+			retval = _equalGroupedVar(a, b);
+			break;
 		case T_Const:
 			retval = _equalConst(a, b);
 			break;
@@ -2600,6 +2645,9 @@ equal(const void *a, const void *b)
 		case T_Aggref:
 			retval = _equalAggref(a, b);
 			break;
+		case T_GroupingFunc:
+			retval = _equalGroupingFunc(a, b);
+			break;
 		case T_WindowFunc:
 			retval = _equalWindowFunc(a, b);
 			break;
@@ -3147,6 +3195,9 @@ equal(const void *a, const void *b)
 		case T_SortGroupClause:
 			retval = _equalSortGroupClause(a, b);
 			break;
+		case T_GroupingSet:
+			retval = _equalGroupingSet(a, b);
+			break;
 		case T_WindowClause:
 			retval = _equalWindowClause(a, b);
 			break;
diff --git a/src/backend/nodes/list.c b/src/backend/nodes/list.c
index 94cab47..a6737514 100644
--- a/src/backend/nodes/list.c
+++ b/src/backend/nodes/list.c
@@ -823,6 +823,32 @@ list_intersection(const List *list1, const List *list2)
 }
 
 /*
+ * As list_intersection but operates on lists of integers.
+ */
+List *
+list_intersection_int(const List *list1, const List *list2)
+{
+	List	   *result;
+	const ListCell *cell;
+
+	if (list1 == NIL || list2 == NIL)
+		return NIL;
+
+	Assert(IsIntegerList(list1));
+	Assert(IsIntegerList(list2));
+
+	result = NIL;
+	foreach(cell, list1)
+	{
+		if (list_member_int(list2, lfirst_int(cell)))
+			result = lappend_int(result, lfirst_int(cell));
+	}
+
+	check_list_invariants(result);
+	return result;
+}
+
+/*
  * Return a list that contains all the cells in list1 that are not in
  * list2. The returned list is freshly allocated via palloc(), but the
  * cells themselves point to the same objects as the cells of the
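
The new list_intersection_int helper keeps each cell of list1 (in order, duplicates included) whose value also appears in list2. A quick Python model of that contract, for reference:

```python
def list_intersection_int(list1, list2):
    # Mirrors the C helper added to list.c: NIL in, NIL out; otherwise
    # filter list1 by membership in list2, preserving list1's order
    # and duplicates.
    if not list1 or not list2:
        return []
    members = set(list2)
    return [x for x in list1 if x in members]

print(list_intersection_int([3, 1, 2, 1], [1, 3, 9]))  # [3, 1, 1]
print(list_intersection_int([], [1]))                  # []
```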
diff --git a/src/backend/nodes/makefuncs.c b/src/backend/nodes/makefuncs.c
index 6fdf44d..a9b58eb 100644
--- a/src/backend/nodes/makefuncs.c
+++ b/src/backend/nodes/makefuncs.c
@@ -554,3 +554,18 @@ makeFuncCall(List *name, List *args, int location)
 	n->location = location;
 	return n;
 }
+
+/*
+ * makeGroupingSet
+ *
+ */
+GroupingSet *
+makeGroupingSet(GroupingSetKind kind, List *content, int location)
+{
+	GroupingSet	   *n = makeNode(GroupingSet);
+
+	n->kind = kind;
+	n->content = content;
+	n->location = location;
+	return n;
+}
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index d6f1f5b..4caf559 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -45,6 +45,9 @@ exprType(const Node *expr)
 		case T_Var:
 			type = ((const Var *) expr)->vartype;
 			break;
+		case T_GroupedVar:
+			type = ((const GroupedVar *) expr)->vartype;
+			break;
 		case T_Const:
 			type = ((const Const *) expr)->consttype;
 			break;
@@ -54,6 +57,9 @@ exprType(const Node *expr)
 		case T_Aggref:
 			type = ((const Aggref *) expr)->aggtype;
 			break;
+		case T_GroupingFunc:
+			type = INT4OID;
+			break;
 		case T_WindowFunc:
 			type = ((const WindowFunc *) expr)->wintype;
 			break;
@@ -261,6 +267,8 @@ exprTypmod(const Node *expr)
 	{
 		case T_Var:
 			return ((const Var *) expr)->vartypmod;
+		case T_GroupedVar:
+			return ((const GroupedVar *) expr)->vartypmod;
 		case T_Const:
 			return ((const Const *) expr)->consttypmod;
 		case T_Param:
@@ -734,6 +742,9 @@ exprCollation(const Node *expr)
 		case T_Var:
 			coll = ((const Var *) expr)->varcollid;
 			break;
+		case T_GroupedVar:
+			coll = ((const GroupedVar *) expr)->varcollid;
+			break;
 		case T_Const:
 			coll = ((const Const *) expr)->constcollid;
 			break;
@@ -743,6 +754,9 @@ exprCollation(const Node *expr)
 		case T_Aggref:
 			coll = ((const Aggref *) expr)->aggcollid;
 			break;
+		case T_GroupingFunc:
+			coll = InvalidOid;
+			break;
 		case T_WindowFunc:
 			coll = ((const WindowFunc *) expr)->wincollid;
 			break;
@@ -967,6 +981,9 @@ exprSetCollation(Node *expr, Oid collation)
 		case T_Var:
 			((Var *) expr)->varcollid = collation;
 			break;
+		case T_GroupedVar:
+			((GroupedVar *) expr)->varcollid = collation;
+			break;
 		case T_Const:
 			((Const *) expr)->constcollid = collation;
 			break;
@@ -976,6 +993,9 @@ exprSetCollation(Node *expr, Oid collation)
 		case T_Aggref:
 			((Aggref *) expr)->aggcollid = collation;
 			break;
+		case T_GroupingFunc:
+			Assert(!OidIsValid(collation));
+			break;
 		case T_WindowFunc:
 			((WindowFunc *) expr)->wincollid = collation;
 			break;
@@ -1182,6 +1202,9 @@ exprLocation(const Node *expr)
 		case T_Var:
 			loc = ((const Var *) expr)->location;
 			break;
+		case T_GroupedVar:
+			loc = ((const GroupedVar *) expr)->location;
+			break;
 		case T_Const:
 			loc = ((const Const *) expr)->location;
 			break;
@@ -1192,6 +1215,9 @@ exprLocation(const Node *expr)
 			/* function name should always be the first thing */
 			loc = ((const Aggref *) expr)->location;
 			break;
+		case T_GroupingFunc:
+			loc = ((const GroupingFunc *) expr)->location;
+			break;
 		case T_WindowFunc:
 			/* function name should always be the first thing */
 			loc = ((const WindowFunc *) expr)->location;
@@ -1481,6 +1507,9 @@ exprLocation(const Node *expr)
 			/* XMLSERIALIZE keyword should always be the first thing */
 			loc = ((const XmlSerialize *) expr)->location;
 			break;
+		case T_GroupingSet:
+			loc = ((const GroupingSet *) expr)->location;
+			break;
 		case T_WithClause:
 			loc = ((const WithClause *) expr)->location;
 			break;
@@ -1632,6 +1661,7 @@ expression_tree_walker(Node *node,
 	switch (nodeTag(node))
 	{
 		case T_Var:
+		case T_GroupedVar:
 		case T_Const:
 		case T_Param:
 		case T_CoerceToDomainValue:
@@ -1665,6 +1695,15 @@ expression_tree_walker(Node *node,
 					return true;
 			}
 			break;
+		case T_GroupingFunc:
+			{
+				GroupingFunc *grouping = (GroupingFunc *) node;
+
+				if (expression_tree_walker((Node *) grouping->args,
+										   walker, context))
+					return true;
+			}
+			break;
 		case T_WindowFunc:
 			{
 				WindowFunc *expr = (WindowFunc *) node;
@@ -2154,6 +2193,15 @@ expression_tree_mutator(Node *node,
 				return (Node *) newnode;
 			}
 			break;
+		case T_GroupedVar:
+			{
+				GroupedVar         *groupedvar = (GroupedVar *) node;
+				GroupedVar		   *newnode;
+
+				FLATCOPY(newnode, groupedvar, GroupedVar);
+				return (Node *) newnode;
+			}
+			break;
 		case T_Const:
 			{
 				Const	   *oldnode = (Const *) node;
@@ -2195,6 +2243,29 @@ expression_tree_mutator(Node *node,
 				return (Node *) newnode;
 			}
 			break;
+		case T_GroupingFunc:
+			{
+				GroupingFunc   *grouping = (GroupingFunc *) node;
+				GroupingFunc   *newnode;
+
+				FLATCOPY(newnode, grouping, GroupingFunc);
+				MUTATE(newnode->args, grouping->args, List *);
+
+				/*
+				 * We assume here that mutating the arguments does not change
+				 * the semantics, i.e. that the arguments are not mutated in a
+				 * way that makes them semantically different from their
+				 * previously matching expressions in the GROUP BY clause.
+				 *
+				 * If a mutator somehow wanted to do this, it would have to
+				 * handle the refs and cols lists itself as appropriate.
+				 */
+				newnode->refs = list_copy(grouping->refs);
+				newnode->cols = list_copy(grouping->cols);
+
+				return (Node *) newnode;
+			}
+			break;
 		case T_WindowFunc:
 			{
 				WindowFunc *wfunc = (WindowFunc *) node;
@@ -2880,6 +2951,8 @@ raw_expression_tree_walker(Node *node,
 			break;
 		case T_RangeVar:
 			return walker(((RangeVar *) node)->alias, context);
+		case T_GroupingFunc:
+			return walker(((GroupingFunc *) node)->args, context);
 		case T_SubLink:
 			{
 				SubLink    *sublink = (SubLink *) node;
@@ -3203,6 +3276,8 @@ raw_expression_tree_walker(Node *node,
 				/* for now, constraints are ignored */
 			}
 			break;
+		case T_GroupingSet:
+			return walker(((GroupingSet *) node)->content, context);
 		case T_LockingClause:
 			return walker(((LockingClause *) node)->lockedRels, context);
 		case T_XmlSerialize:
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index fc418fc..a449269 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -648,6 +648,7 @@ _outAgg(StringInfo str, const Agg *node)
 	_outPlanInfo(str, (const Plan *) node);
 
 	WRITE_ENUM_FIELD(aggstrategy, AggStrategy);
+	WRITE_INT_FIELD(chain_depth);
 	WRITE_INT_FIELD(numCols);
 
 	appendStringInfoString(str, " :grpColIdx");
@@ -659,6 +660,8 @@ _outAgg(StringInfo str, const Agg *node)
 		appendStringInfo(str, " %u", node->grpOperators[i]);
 
 	WRITE_LONG_FIELD(numGroups);
+
+	WRITE_NODE_FIELD(groupingSets);
 }
 
 static void
@@ -928,6 +931,22 @@ _outVar(StringInfo str, const Var *node)
 }
 
 static void
+_outGroupedVar(StringInfo str, const GroupedVar *node)
+{
+	WRITE_NODE_TYPE("GROUPEDVAR");
+
+	WRITE_UINT_FIELD(varno);
+	WRITE_INT_FIELD(varattno);
+	WRITE_OID_FIELD(vartype);
+	WRITE_INT_FIELD(vartypmod);
+	WRITE_OID_FIELD(varcollid);
+	WRITE_UINT_FIELD(varlevelsup);
+	WRITE_UINT_FIELD(varnoold);
+	WRITE_INT_FIELD(varoattno);
+	WRITE_LOCATION_FIELD(location);
+}
+
+static void
 _outConst(StringInfo str, const Const *node)
 {
 	WRITE_NODE_TYPE("CONST");
@@ -982,6 +1001,18 @@ _outAggref(StringInfo str, const Aggref *node)
 }
 
 static void
+_outGroupingFunc(StringInfo str, const GroupingFunc *node)
+{
+	WRITE_NODE_TYPE("GROUPINGFUNC");
+
+	WRITE_NODE_FIELD(args);
+	WRITE_NODE_FIELD(refs);
+	WRITE_NODE_FIELD(cols);
+	WRITE_INT_FIELD(agglevelsup);
+	WRITE_LOCATION_FIELD(location);
+}
+
+static void
 _outWindowFunc(StringInfo str, const WindowFunc *node)
 {
 	WRITE_NODE_TYPE("WINDOWFUNC");
@@ -2314,6 +2345,7 @@ _outQuery(StringInfo str, const Query *node)
 	WRITE_NODE_FIELD(withCheckOptions);
 	WRITE_NODE_FIELD(returningList);
 	WRITE_NODE_FIELD(groupClause);
+	WRITE_NODE_FIELD(groupingSets);
 	WRITE_NODE_FIELD(havingQual);
 	WRITE_NODE_FIELD(windowClause);
 	WRITE_NODE_FIELD(distinctClause);
@@ -2348,6 +2380,16 @@ _outSortGroupClause(StringInfo str, const SortGroupClause *node)
 }
 
 static void
+_outGroupingSet(StringInfo str, const GroupingSet *node)
+{
+	WRITE_NODE_TYPE("GROUPINGSET");
+
+	WRITE_ENUM_FIELD(kind, GroupingSetKind);
+	WRITE_NODE_FIELD(content);
+	WRITE_LOCATION_FIELD(location);
+}
+
+static void
 _outWindowClause(StringInfo str, const WindowClause *node)
 {
 	WRITE_NODE_TYPE("WINDOWCLAUSE");
@@ -2992,6 +3034,9 @@ _outNode(StringInfo str, const void *obj)
 			case T_Var:
 				_outVar(str, obj);
 				break;
+			case T_GroupedVar:
+				_outGroupedVar(str, obj);
+				break;
 			case T_Const:
 				_outConst(str, obj);
 				break;
@@ -3001,6 +3046,9 @@ _outNode(StringInfo str, const void *obj)
 			case T_Aggref:
 				_outAggref(str, obj);
 				break;
+			case T_GroupingFunc:
+				_outGroupingFunc(str, obj);
+				break;
 			case T_WindowFunc:
 				_outWindowFunc(str, obj);
 				break;
@@ -3258,6 +3306,9 @@ _outNode(StringInfo str, const void *obj)
 			case T_SortGroupClause:
 				_outSortGroupClause(str, obj);
 				break;
+			case T_GroupingSet:
+				_outGroupingSet(str, obj);
+				break;
 			case T_WindowClause:
 				_outWindowClause(str, obj);
 				break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 563209c..b35a9d3 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -216,6 +216,7 @@ _readQuery(void)
 	READ_NODE_FIELD(withCheckOptions);
 	READ_NODE_FIELD(returningList);
 	READ_NODE_FIELD(groupClause);
+	READ_NODE_FIELD(groupingSets);
 	READ_NODE_FIELD(havingQual);
 	READ_NODE_FIELD(windowClause);
 	READ_NODE_FIELD(distinctClause);
@@ -291,6 +292,21 @@ _readSortGroupClause(void)
 }
 
 /*
+ * _readGroupingSet
+ */
+static GroupingSet *
+_readGroupingSet(void)
+{
+	READ_LOCALS(GroupingSet);
+
+	READ_ENUM_FIELD(kind, GroupingSetKind);
+	READ_NODE_FIELD(content);
+	READ_LOCATION_FIELD(location);
+
+	READ_DONE();
+}
+
+/*
  * _readWindowClause
  */
 static WindowClause *
@@ -441,6 +457,27 @@ _readVar(void)
 }
 
 /*
+ * _readGroupedVar
+ */
+static GroupedVar *
+_readGroupedVar(void)
+{
+	READ_LOCALS(GroupedVar);
+
+	READ_UINT_FIELD(varno);
+	READ_INT_FIELD(varattno);
+	READ_OID_FIELD(vartype);
+	READ_INT_FIELD(vartypmod);
+	READ_OID_FIELD(varcollid);
+	READ_UINT_FIELD(varlevelsup);
+	READ_UINT_FIELD(varnoold);
+	READ_INT_FIELD(varoattno);
+	READ_LOCATION_FIELD(location);
+
+	READ_DONE();
+}
+
+/*
  * _readConst
  */
 static Const *
@@ -510,6 +547,23 @@ _readAggref(void)
 }
 
 /*
+ * _readGroupingFunc
+ */
+static GroupingFunc *
+_readGroupingFunc(void)
+{
+	READ_LOCALS(GroupingFunc);
+
+	READ_NODE_FIELD(args);
+	READ_NODE_FIELD(refs);
+	READ_NODE_FIELD(cols);
+	READ_INT_FIELD(agglevelsup);
+	READ_LOCATION_FIELD(location);
+
+	READ_DONE();
+}
+
+/*
  * _readWindowFunc
  */
 static WindowFunc *
@@ -1307,6 +1361,8 @@ parseNodeString(void)
 		return_value = _readWithCheckOption();
 	else if (MATCH("SORTGROUPCLAUSE", 15))
 		return_value = _readSortGroupClause();
+	else if (MATCH("GROUPINGSET", 11))
+		return_value = _readGroupingSet();
 	else if (MATCH("WINDOWCLAUSE", 12))
 		return_value = _readWindowClause();
 	else if (MATCH("ROWMARKCLAUSE", 13))
@@ -1323,12 +1379,16 @@ parseNodeString(void)
 		return_value = _readIntoClause();
 	else if (MATCH("VAR", 3))
 		return_value = _readVar();
+	else if (MATCH("GROUPEDVAR", 10))
+		return_value = _readGroupedVar();
 	else if (MATCH("CONST", 5))
 		return_value = _readConst();
 	else if (MATCH("PARAM", 5))
 		return_value = _readParam();
 	else if (MATCH("AGGREF", 6))
 		return_value = _readAggref();
+	else if (MATCH("GROUPINGFUNC", 12))
+		return_value = _readGroupingFunc();
 	else if (MATCH("WINDOWFUNC", 10))
 		return_value = _readWindowFunc();
 	else if (MATCH("ARRAYREF", 8))
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index 58d78e6..2c05f71 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -1241,6 +1241,7 @@ set_subquery_pathlist(PlannerInfo *root, RelOptInfo *rel,
 	 */
 	if (parse->hasAggs ||
 		parse->groupClause ||
+		parse->groupingSets ||
 		parse->havingQual ||
 		parse->distinctClause ||
 		parse->sortClause ||
@@ -2099,7 +2100,7 @@ subquery_push_qual(Query *subquery, RangeTblEntry *rte, Index rti, Node *qual)
 		 * subquery uses grouping or aggregation, put it in HAVING (since the
 		 * qual really refers to the group-result rows).
 		 */
-		if (subquery->hasAggs || subquery->groupClause || subquery->havingQual)
+		if (subquery->hasAggs || subquery->groupClause || subquery->groupingSets || subquery->havingQual)
 			subquery->havingQual = make_and_qual(subquery->havingQual, qual);
 		else
 			subquery->jointree->quals =
diff --git a/src/backend/optimizer/path/indxpath.c b/src/backend/optimizer/path/indxpath.c
index 49ab366..3b78f24 100644
--- a/src/backend/optimizer/path/indxpath.c
+++ b/src/backend/optimizer/path/indxpath.c
@@ -1952,7 +1952,8 @@ adjust_rowcount_for_semijoins(PlannerInfo *root,
 			nraw = approximate_joinrel_size(root, sjinfo->syn_righthand);
 			nunique = estimate_num_groups(root,
 										  sjinfo->semi_rhs_exprs,
-										  nraw);
+										  nraw,
+										  NULL);
 			if (rowcount > nunique)
 				rowcount = nunique;
 		}
diff --git a/src/backend/optimizer/plan/analyzejoins.c b/src/backend/optimizer/plan/analyzejoins.c
index 11d3933..fa1de6a 100644
--- a/src/backend/optimizer/plan/analyzejoins.c
+++ b/src/backend/optimizer/plan/analyzejoins.c
@@ -581,6 +581,7 @@ query_supports_distinctness(Query *query)
 {
 	if (query->distinctClause != NIL ||
 		query->groupClause != NIL ||
+		query->groupingSets != NIL ||
 		query->hasAggs ||
 		query->havingQual ||
 		query->setOperations)
@@ -649,10 +650,10 @@ query_is_distinct_for(Query *query, List *colnos, List *opids)
 	}
 
 	/*
-	 * Similarly, GROUP BY guarantees uniqueness if all the grouped columns
-	 * appear in colnos and operator semantics match.
+	 * Similarly, GROUP BY without GROUPING SETS guarantees uniqueness if all
+	 * the grouped columns appear in colnos and operator semantics match.
 	 */
-	if (query->groupClause)
+	if (query->groupClause && !query->groupingSets)
 	{
 		foreach(l, query->groupClause)
 		{
@@ -668,6 +669,27 @@ query_is_distinct_for(Query *query, List *colnos, List *opids)
 		if (l == NULL)			/* had matches for all? */
 			return true;
 	}
+	else if (query->groupingSets)
+	{
+		/*
+		 * If we have grouping sets with expressions, we probably
+		 * don't have uniqueness and analysis would be hard. Punt.
+		 */
+		if (query->groupClause)
+			return false;
+
+		/*
+		 * If we have no groupClause (therefore no grouping expressions),
+		 * we might have one or many empty grouping sets. If there's just
+		 * one, then we're returning only one row and are certainly unique.
+		 * But otherwise, we know we're certainly not unique.
+		 */
+		if (list_length(query->groupingSets) == 1
+			&& ((GroupingSet *)linitial(query->groupingSets))->kind == GROUPING_SET_EMPTY)
+			return true;
+		else
+			return false;
+	}
 	else
 	{
 		/*
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index cb69c03..7b2e390 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -1029,6 +1029,8 @@ create_unique_plan(PlannerInfo *root, UniquePath *best_path)
 								 numGroupCols,
 								 groupColIdx,
 								 groupOperators,
+								 NIL,
+								 NULL,
 								 numGroups,
 								 subplan);
 	}
@@ -4360,6 +4362,7 @@ Agg *
 make_agg(PlannerInfo *root, List *tlist, List *qual,
 		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
 		 int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators,
+		 List *groupingSets, int *chain_depth_p,
 		 long numGroups,
 		 Plan *lefttree)
 {
@@ -4369,6 +4372,7 @@ make_agg(PlannerInfo *root, List *tlist, List *qual,
 	QualCost	qual_cost;
 
 	node->aggstrategy = aggstrategy;
+	node->chain_depth = chain_depth_p ? *chain_depth_p : 0;
 	node->numCols = numGroupCols;
 	node->grpColIdx = grpColIdx;
 	node->grpOperators = grpOperators;
@@ -4389,10 +4393,12 @@ make_agg(PlannerInfo *root, List *tlist, List *qual,
 	 * group otherwise.
 	 */
 	if (aggstrategy == AGG_PLAIN)
-		plan->plan_rows = 1;
+		plan->plan_rows = groupingSets ? list_length(groupingSets) : 1;
 	else
 		plan->plan_rows = numGroups;
 
+	node->groupingSets = groupingSets;
+
 	/*
 	 * We also need to account for the cost of evaluation of the qual (ie, the
 	 * HAVING clause) and the tlist.  Note that cost_qual_eval doesn't charge
@@ -4411,8 +4417,21 @@ make_agg(PlannerInfo *root, List *tlist, List *qual,
 	}
 	add_tlist_costs_to_plan(root, plan, tlist);
 
-	plan->qual = qual;
-	plan->targetlist = tlist;
+	if (aggstrategy == AGG_CHAINED)
+	{
+		Assert(!chain_depth_p);
+		plan->plan_rows = lefttree->plan_rows;
+		plan->plan_width = lefttree->plan_width;
+
+		/* supplied tlist is ignored, this is dummy */
+		plan->targetlist = lefttree->targetlist;
+		plan->qual = NULL;
+	}
+	else
+	{
+		plan->qual = qual;
+		plan->targetlist = tlist;
+	}
 	plan->lefttree = lefttree;
 	plan->righttree = NULL;
 
diff --git a/src/backend/optimizer/plan/planagg.c b/src/backend/optimizer/plan/planagg.c
index af772a2..f0e9c05 100644
--- a/src/backend/optimizer/plan/planagg.c
+++ b/src/backend/optimizer/plan/planagg.c
@@ -96,7 +96,7 @@ preprocess_minmax_aggregates(PlannerInfo *root, List *tlist)
 	 * performs assorted processing related to these features between calling
 	 * preprocess_minmax_aggregates and optimize_minmax_aggregates.)
 	 */
-	if (parse->groupClause || parse->hasWindowFuncs)
+	if (parse->groupClause || list_length(parse->groupingSets) > 1 || parse->hasWindowFuncs)
 		return;
 
 	/*
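
grouping_planner (next hunk) starts by calling expand_grouping_sets on the raw clause. The spec-mandated expansions it performs can be modeled briefly; this is a sketch of the set arithmetic only, not the actual parse-node code:

```python
from itertools import chain, combinations

def rollup(cols):
    # ROLLUP(a, b, ...) expands to all prefixes, longest first,
    # ending with the empty grouping set.
    return [tuple(cols[:i]) for i in range(len(cols), -1, -1)]

def cube(cols):
    # CUBE(a, b, ...) expands to the full power set of the columns.
    return [tuple(s) for s in chain.from_iterable(
        combinations(cols, n) for n in range(len(cols), -1, -1))]

print(rollup(["a", "b"]))        # [('a', 'b'), ('a',), ()]
print(len(cube(["a", "b", "c"])))  # 8 grouping sets
```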
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 88b91f1..edb4d2b 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -16,12 +16,14 @@
 #include "postgres.h"
 
 #include <limits.h>
+#include <math.h>
 
 #include "access/htup_details.h"
 #include "executor/executor.h"
 #include "executor/nodeAgg.h"
 #include "miscadmin.h"
 #include "nodes/makefuncs.h"
+#include "nodes/nodeFuncs.h"
 #ifdef OPTIMIZER_DEBUG
 #include "nodes/print.h"
 #endif
@@ -37,6 +39,7 @@
 #include "optimizer/tlist.h"
 #include "parser/analyze.h"
 #include "parser/parsetree.h"
+#include "parser/parse_agg.h"
 #include "rewrite/rewriteManip.h"
 #include "utils/rel.h"
 #include "utils/selfuncs.h"
@@ -65,6 +68,7 @@ typedef struct
 {
 	List	   *tlist;			/* preprocessed query targetlist */
 	List	   *activeWindows;	/* active windows, if any */
+	List	   *groupClause;	/* overrides parse->groupClause */
 } standard_qp_extra;
 
 /* Local functions */
@@ -77,7 +81,9 @@ static double preprocess_limit(PlannerInfo *root,
 				 double tuple_fraction,
 				 int64 *offset_est, int64 *count_est);
 static bool limit_needed(Query *parse);
-static void preprocess_groupclause(PlannerInfo *root);
+static List *preprocess_groupclause(PlannerInfo *root, List *force);
+static List *extract_rollup_sets(List *groupingSets);
+static List *reorder_grouping_sets(List *groupingSets, List *sortclause);
 static void standard_qp_callback(PlannerInfo *root, void *extra);
 static bool choose_hashed_grouping(PlannerInfo *root,
 					   double tuple_fraction, double limit_tuples,
@@ -317,6 +323,8 @@ subquery_planner(PlannerGlobal *glob, Query *parse,
 	root->append_rel_list = NIL;
 	root->rowMarks = NIL;
 	root->hasInheritedTarget = false;
+	root->groupColIdx = NULL;
+	root->grouping_map = NULL;
 
 	root->hasRecursion = hasRecursion;
 	if (hasRecursion)
@@ -532,7 +540,8 @@ subquery_planner(PlannerGlobal *glob, Query *parse,
 
 		if (contain_agg_clause(havingclause) ||
 			contain_volatile_functions(havingclause) ||
-			contain_subplans(havingclause))
+			contain_subplans(havingclause) ||
+			parse->groupingSets)
 		{
 			/* keep it in HAVING */
 			newHaving = lappend(newHaving, havingclause);
@@ -1192,11 +1201,6 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		List	   *sub_tlist;
 		AttrNumber *groupColIdx = NULL;
 		bool		need_tlist_eval = true;
-		standard_qp_extra qp_extra;
-		RelOptInfo *final_rel;
-		Path	   *cheapest_path;
-		Path	   *sorted_path;
-		Path	   *best_path;
 		long		numGroups = 0;
 		AggClauseCosts agg_costs;
 		int			numGroupCols;
@@ -1205,15 +1209,90 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		bool		use_hashed_grouping = false;
 		WindowFuncLists *wflists = NULL;
 		List	   *activeWindows = NIL;
+		int			maxref = 0;
+		List	   *refmaps = NIL;
+		List	   *rollup_lists = NIL;
+		List	   *rollup_groupclauses = NIL;
+		standard_qp_extra qp_extra;
+		RelOptInfo *final_rel;
+		Path	   *cheapest_path;
+		Path	   *sorted_path;
+		Path	   *best_path;
 
 		MemSet(&agg_costs, 0, sizeof(AggClauseCosts));
 
 		/* A recursive query should always have setOperations */
 		Assert(!root->hasRecursion);
 
-		/* Preprocess GROUP BY clause, if any */
+		/* Preprocess grouping sets, if any */
+		if (parse->groupingSets)
+			parse->groupingSets = expand_grouping_sets(parse->groupingSets, -1);
+
 		if (parse->groupClause)
-			preprocess_groupclause(root);
+		{
+			ListCell   *lc;
+
+			foreach(lc, parse->groupClause)
+			{
+				SortGroupClause *gc = lfirst(lc);
+				if (gc->tleSortGroupRef > maxref)
+					maxref = gc->tleSortGroupRef;
+			}
+		}
+
+		if (parse->groupingSets)
+		{
+			ListCell   *lc;
+			ListCell   *lc2;
+			ListCell   *lc_set;
+			List	   *sets = extract_rollup_sets(parse->groupingSets);
+
+			foreach(lc_set, sets)
+			{
+				List   *current_sets = reorder_grouping_sets(lfirst(lc_set),
+													(list_length(sets) == 1
+													 ? parse->sortClause
+													 : NIL));
+				List   *groupclause = preprocess_groupclause(root, linitial(current_sets));
+				int		ref = 0;
+				int	   *refmap;
+
+				/*
+				 * Now that we've pinned down an order for the groupClause of
+				 * this list of grouping sets, remap the entries in the grouping
+				 * sets from sortgrouprefs to plain indices into that groupClause.
+				 */
+
+				refmap = palloc0(sizeof(int) * (maxref + 1));
+
+				foreach(lc, groupclause)
+				{
+					SortGroupClause *gc = lfirst(lc);
+					refmap[gc->tleSortGroupRef] = ++ref;
+				}
+
+				foreach(lc, current_sets)
+				{
+					foreach(lc2, (List *) lfirst(lc))
+					{
+						Assert(refmap[lfirst_int(lc2)] > 0);
+						lfirst_int(lc2) = refmap[lfirst_int(lc2)] - 1;
+					}
+				}
+
+				rollup_lists = lcons(current_sets, rollup_lists);
+				rollup_groupclauses = lcons(groupclause, rollup_groupclauses);
+				refmaps = lcons(refmap, refmaps);
+			}
+		}
+		else
+		{
+			/* Preprocess GROUP BY clause, if any */
+			if (parse->groupClause)
+				parse->groupClause = preprocess_groupclause(root, NIL);
+			rollup_groupclauses = list_make1(parse->groupClause);
+		}
+
 		numGroupCols = list_length(parse->groupClause);
 
 		/* Preprocess targetlist */
@@ -1286,6 +1365,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		 * grouping/aggregation operations.
 		 */
 		if (parse->groupClause ||
+			parse->groupingSets ||
 			parse->distinctClause ||
 			parse->hasAggs ||
 			parse->hasWindowFuncs ||
@@ -1297,6 +1377,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		/* Set up data needed by standard_qp_callback */
 		qp_extra.tlist = tlist;
 		qp_extra.activeWindows = activeWindows;
+		qp_extra.groupClause = linitial(rollup_groupclauses);
 
 		/*
 		 * Generate the best unsorted and presorted paths for this Query (but
@@ -1323,15 +1404,46 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		 * to describe the fraction of the underlying un-aggregated tuples
 		 * that will be fetched.
 		 */
+
 		dNumGroups = 1;			/* in case not grouping */
 
 		if (parse->groupClause)
 		{
 			List	   *groupExprs;
 
-			groupExprs = get_sortgrouplist_exprs(parse->groupClause,
-												 parse->targetList);
-			dNumGroups = estimate_num_groups(root, groupExprs, path_rows);
+			if (parse->groupingSets)
+			{
+				ListCell   *lc,
+						   *lc2;
+
+				dNumGroups = 0;
+
+				forboth(lc, rollup_groupclauses, lc2, rollup_lists)
+				{
+					ListCell   *lc3;
+
+					groupExprs = get_sortgrouplist_exprs(lfirst(lc),
+														 parse->targetList);
+
+					foreach(lc3, lfirst(lc2))
+					{
+						List   *gset = lfirst(lc3);
+
+						dNumGroups += estimate_num_groups(root,
+														  groupExprs,
+														  path_rows,
+														  &gset);
+					}
+				}
+			}
+			else
+			{
+				groupExprs = get_sortgrouplist_exprs(parse->groupClause,
+													 parse->targetList);
+
+				dNumGroups = estimate_num_groups(root, groupExprs, path_rows,
+												 NULL);
+			}
 
 			/*
 			 * In GROUP BY mode, an absolute LIMIT is relative to the number
@@ -1342,6 +1454,9 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 			if (tuple_fraction >= 1.0)
 				tuple_fraction /= dNumGroups;
 
+			/*
+			 * If we'll need multiple rollup passes over the input, we can't
+			 * stop short: every pass must read all the tuples.
+			 */
+			if (list_length(rollup_lists) > 1)
+				tuple_fraction = 0.0;
+
 			/*
 			 * If both GROUP BY and ORDER BY are specified, we will need two
 			 * levels of sort --- and, therefore, certainly need to read all
@@ -1357,14 +1472,17 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 									   root->group_pathkeys))
 				tuple_fraction = 0.0;
 		}
-		else if (parse->hasAggs || root->hasHavingQual)
+		else if (parse->hasAggs || root->hasHavingQual || parse->groupingSets)
 		{
 			/*
 			 * Ungrouped aggregate will certainly want to read all the tuples,
-			 * and it will deliver a single result row (so leave dNumGroups
-			 * set to 1).
+			 * and it will deliver a single result row per grouping set (or
+			 * one row if no grouping sets were explicitly given, in which
+			 * case dNumGroups is left at 1).
 			 */
 			tuple_fraction = 0.0;
+			if (parse->groupingSets)
+				dNumGroups = list_length(parse->groupingSets);
 		}
 		else if (parse->distinctClause)
 		{
@@ -1379,7 +1497,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 
 			distinctExprs = get_sortgrouplist_exprs(parse->distinctClause,
 													parse->targetList);
-			dNumGroups = estimate_num_groups(root, distinctExprs, path_rows);
+			dNumGroups = estimate_num_groups(root, distinctExprs, path_rows, NULL);
 
 			/*
 			 * Adjust tuple_fraction the same way as for GROUP BY, too.
@@ -1462,13 +1580,24 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		{
 			/*
 			 * If grouping, decide whether to use sorted or hashed grouping.
+			 * If grouping sets are present, we can currently do only sorted
+			 * grouping.
 			 */
-			use_hashed_grouping =
-				choose_hashed_grouping(root,
-									   tuple_fraction, limit_tuples,
-									   path_rows, path_width,
-									   cheapest_path, sorted_path,
-									   dNumGroups, &agg_costs);
+
+			if (parse->groupingSets)
+			{
+				use_hashed_grouping = false;
+			}
+			else
+			{
+				use_hashed_grouping =
+					choose_hashed_grouping(root,
+										   tuple_fraction, limit_tuples,
+										   path_rows, path_width,
+										   cheapest_path, sorted_path,
+										   dNumGroups, &agg_costs);
+			}
+
 			/* Also convert # groups to long int --- but 'ware overflow! */
 			numGroups = (long) Min(dNumGroups, (double) LONG_MAX);
 		}
@@ -1534,7 +1663,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 
 			/* Detect if we'll need an explicit sort for grouping */
 			if (parse->groupClause && !use_hashed_grouping &&
-			  !pathkeys_contained_in(root->group_pathkeys, current_pathkeys))
+				!pathkeys_contained_in(root->group_pathkeys, current_pathkeys))
 			{
 				need_sort_for_grouping = true;
 
@@ -1609,52 +1738,118 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 												&agg_costs,
 												numGroupCols,
 												groupColIdx,
-									extract_grouping_ops(parse->groupClause),
+												extract_grouping_ops(parse->groupClause),
+												NIL,
+												NULL,
 												numGroups,
 												result_plan);
 				/* Hashed aggregation produces randomly-ordered results */
 				current_pathkeys = NIL;
 			}
-			else if (parse->hasAggs)
+			else if (parse->hasAggs || (parse->groupingSets && parse->groupClause))
 			{
-				/* Plain aggregate plan --- sort if needed */
-				AggStrategy aggstrategy;
+				int			chain_depth = 0;
 
-				if (parse->groupClause)
+				/*
+				 * If we need multiple grouping nodes, start stacking them up;
+				 * all except the last are chained.
+				 */
+
+				do
 				{
-					if (need_sort_for_grouping)
+					List	   *groupClause = linitial(rollup_groupclauses);
+					List	   *gsets = rollup_lists ? linitial(rollup_lists) : NIL;
+					int		   *refmap = refmaps ? linitial(refmaps) : NULL;
+					AttrNumber *new_grpColIdx = groupColIdx;
+					ListCell   *lc;
+					int			i;
+					AggStrategy aggstrategy = AGG_CHAINED;
+
+					if (groupClause)
+					{
+						if (gsets)
+						{
+							Assert(refmap);
+
+							/*
+							 * We need to remap groupColIdx, which holds the
+							 * column indices for every clause in
+							 * parse->groupClause indexed by list position, to a
+							 * local version for this node that lists only the
+							 * clauses included in groupClause, by position in
+							 * that list.  The refmap for this node (indexed by
+							 * sortgroupref) contains 0 for clauses not present
+							 * in this node's groupClause.
+							 */
+
+							new_grpColIdx = palloc0(sizeof(AttrNumber) * list_length(linitial(gsets)));
+
+							i = 0;
+							foreach(lc, parse->groupClause)
+							{
+								int j = refmap[((SortGroupClause *)lfirst(lc))->tleSortGroupRef];
+								if (j > 0)
+									new_grpColIdx[j - 1] = groupColIdx[i];
+								++i;
+							}
+						}
+
+						if (need_sort_for_grouping)
+						{
+							result_plan = (Plan *)
+								make_sort_from_groupcols(root,
+														 groupClause,
+														 new_grpColIdx,
+														 result_plan);
+						}
+						else
+							need_sort_for_grouping = true;	/* later passes must sort */
+
+						if (list_length(rollup_groupclauses) == 1)
+						{
+							aggstrategy = AGG_SORTED;
+
+							/*
+							 * If there aren't any other chained aggregates, then
+							 * we didn't disturb the originally required input
+							 * sort order.
+							 */
+							if (chain_depth == 0)
+								current_pathkeys = root->group_pathkeys;
+						}
+						else
+							current_pathkeys = NIL;
+					}
+					else
 					{
-						result_plan = (Plan *)
-							make_sort_from_groupcols(root,
-													 parse->groupClause,
-													 groupColIdx,
-													 result_plan);
-						current_pathkeys = root->group_pathkeys;
+						aggstrategy = AGG_PLAIN;
+						current_pathkeys = NIL;
 					}
-					aggstrategy = AGG_SORTED;
 
-					/*
-					 * The AGG node will not change the sort ordering of its
-					 * groups, so current_pathkeys describes the result too.
-					 */
-				}
-				else
-				{
-					aggstrategy = AGG_PLAIN;
-					/* Result will be only one row anyway; no sort order */
-					current_pathkeys = NIL;
-				}
+					result_plan = (Plan *) make_agg(root,
+													tlist,
+													(List *) parse->havingQual,
+													aggstrategy,
+													&agg_costs,
+													gsets ? list_length(linitial(gsets)) : numGroupCols,
+													new_grpColIdx,
+													extract_grouping_ops(groupClause),
+													gsets,
+													(aggstrategy != AGG_CHAINED) ? &chain_depth : NULL,
+													numGroups,
+													result_plan);
+
+					chain_depth += 1;
 
-				result_plan = (Plan *) make_agg(root,
-												tlist,
-												(List *) parse->havingQual,
-												aggstrategy,
-												&agg_costs,
-												numGroupCols,
-												groupColIdx,
-									extract_grouping_ops(parse->groupClause),
-												numGroups,
-												result_plan);
+					if (refmap)
+						pfree(refmap);
+					if (rollup_lists)
+						rollup_lists = list_delete_first(rollup_lists);
+					if (refmaps)
+						refmaps = list_delete_first(refmaps);
+
+					rollup_groupclauses = list_delete_first(rollup_groupclauses);
+				}
+				while (rollup_groupclauses);
 			}
 			else if (parse->groupClause)
 			{
@@ -1685,27 +1880,66 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 												  result_plan);
 				/* The Group node won't change sort ordering */
 			}
-			else if (root->hasHavingQual)
+			else if (root->hasHavingQual || parse->groupingSets)
 			{
+				int		nrows = list_length(parse->groupingSets);
+
 				/*
-				 * No aggregates, and no GROUP BY, but we have a HAVING qual.
+				 * No aggregates, and no GROUP BY, but we have a HAVING qual or
+				 * grouping sets (which by elimination of cases above must
+				 * consist solely of empty grouping sets, since otherwise
+				 * groupClause will be non-empty).
+				 *
 				 * This is a degenerate case in which we are supposed to emit
-				 * either 0 or 1 row depending on whether HAVING succeeds.
-				 * Furthermore, there cannot be any variables in either HAVING
-				 * or the targetlist, so we actually do not need the FROM
-				 * table at all!  We can just throw away the plan-so-far and
-				 * generate a Result node.  This is a sufficiently unusual
-				 * corner case that it's not worth contorting the structure of
-				 * this routine to avoid having to generate the plan in the
-				 * first place.
+				 * either 0 or 1 row for each grouping set depending on whether
+				 * HAVING succeeds.  Furthermore, there cannot be any variables
+				 * in either HAVING or the targetlist, so we actually do not
+				 * need the FROM table at all!  We can just throw away the
+				 * plan-so-far and generate a Result node.  This is a
+				 * sufficiently unusual corner case that it's not worth
+				 * contorting the structure of this routine to avoid having to
+				 * generate the plan in the first place.
 				 */
 				result_plan = (Plan *) make_result(root,
 												   tlist,
 												   parse->havingQual,
 												   NULL);
+
+				/*
+				 * Doesn't seem worthwhile writing code to cons up a
+				 * generate_series or a values scan to emit multiple rows.
+				 * Instead just clone the result in an Append.
+				 */
+				if (nrows > 1)
+				{
+					List   *plans = list_make1(result_plan);
+
+					while (--nrows > 0)
+						plans = lappend(plans, copyObject(result_plan));
+
+					result_plan = (Plan *) make_append(plans, tlist);
+				}
 			}
 		}						/* end of non-minmax-aggregate case */
 
+		/* Record grouping_map based on final groupColIdx, for setrefs */
+
+		if (parse->groupingSets)
+		{
+			AttrNumber *grouping_map = palloc0(sizeof(AttrNumber) * (maxref + 1));
+			ListCell   *lc;
+			int			i = 0;
+
+			foreach(lc, parse->groupClause)
+			{
+				SortGroupClause *gc = lfirst(lc);
+				grouping_map[gc->tleSortGroupRef] = groupColIdx[i++];
+			}
+
+			root->groupColIdx = groupColIdx;
+			root->grouping_map = grouping_map;
+		}
+
 		/*
 		 * Since each window function could require a different sort order, we
 		 * stack up a WindowAgg node for each window, with sort steps between
@@ -1868,7 +2102,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		 * result was already mostly unique).  If not, use the number of
 		 * distinct-groups calculated previously.
 		 */
-		if (parse->groupClause || root->hasHavingQual || parse->hasAggs)
+		if (parse->groupClause || parse->groupingSets || root->hasHavingQual || parse->hasAggs)
 			dNumDistinctRows = result_plan->plan_rows;
 		else
 			dNumDistinctRows = dNumGroups;
@@ -1909,6 +2143,8 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 								 extract_grouping_cols(parse->distinctClause,
 													result_plan->targetlist),
 								 extract_grouping_ops(parse->distinctClause),
+											NIL,
+											NULL,
 											numDistinctRows,
 											result_plan);
 			/* Hashed aggregation produces randomly-ordered results */
@@ -2542,19 +2778,38 @@ limit_needed(Query *parse)
  *
  * Note: we need no comparable processing of the distinctClause because
  * the parser already enforced that that matches ORDER BY.
+ *
+ * For grouping sets, the order of items is instead forced to agree with that
+ * of the grouping set (and items not in the grouping set are skipped). The
+ * work of ordering the grouping set elements to match the ORDER BY where
+ * possible is done elsewhere (see reorder_grouping_sets).
  */
-static void
-preprocess_groupclause(PlannerInfo *root)
+static List *
+preprocess_groupclause(PlannerInfo *root, List *force)
 {
 	Query	   *parse = root->parse;
-	List	   *new_groupclause;
+	List	   *new_groupclause = NIL;
 	bool		partial_match;
 	ListCell   *sl;
 	ListCell   *gl;
 
+	/* For grouping sets, we need to force the ordering */
+	if (force)
+	{
+		foreach(sl, force)
+		{
+			Index ref = lfirst_int(sl);
+			SortGroupClause *cl = get_sortgroupref_clause(ref, parse->groupClause);
+
+			new_groupclause = lappend(new_groupclause, cl);
+		}
+
+		return new_groupclause;
+	}
+
 	/* If no ORDER BY, nothing useful to do here */
 	if (parse->sortClause == NIL)
-		return;
+		return parse->groupClause;
 
 	/*
 	 * Scan the ORDER BY clause and construct a list of matching GROUP BY
@@ -2562,7 +2817,6 @@ preprocess_groupclause(PlannerInfo *root)
 	 *
 	 * This code assumes that the sortClause contains no duplicate items.
 	 */
-	new_groupclause = NIL;
 	foreach(sl, parse->sortClause)
 	{
 		SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
@@ -2586,7 +2840,7 @@ preprocess_groupclause(PlannerInfo *root)
 
 	/* If no match at all, no point in reordering GROUP BY */
 	if (new_groupclause == NIL)
-		return;
+		return parse->groupClause;
 
 	/*
 	 * Add any remaining GROUP BY items to the new list, but only if we were
@@ -2603,15 +2857,446 @@ preprocess_groupclause(PlannerInfo *root)
 		if (list_member_ptr(new_groupclause, gc))
 			continue;			/* it matched an ORDER BY item */
 		if (partial_match)
-			return;				/* give up, no common sort possible */
+			return parse->groupClause;	/* give up, no common sort possible */
 		if (!OidIsValid(gc->sortop))
-			return;				/* give up, GROUP BY can't be sorted */
+			return parse->groupClause;	/* give up, GROUP BY can't be sorted */
 		new_groupclause = lappend(new_groupclause, gc);
 	}
 
 	/* Success --- install the rearranged GROUP BY list */
 	Assert(list_length(parse->groupClause) == list_length(new_groupclause));
-	parse->groupClause = new_groupclause;
+	return new_groupclause;
+}
+
+
+/*
+ * We want to produce the absolute minimum possible number of lists here to
+ * avoid excess sorts. Fortunately, there is an algorithm for this; the problem
+ * of finding the minimal partition of a poset into chains (which is what we
+ * need, taking the list of grouping sets as a poset ordered by set inclusion)
+ * can be mapped to the problem of finding the maximum cardinality matching on
+ * a bipartite graph, which is solvable in polynomial time with a worst case of
+ * no worse than O(n^2.5) and usually much better. Since our N is at most 4096,
+ * we don't need to consider fallbacks to heuristic or approximate methods.
+ * (Planning time for a 12-d cube is under half a second on my modest system
+ * even with optimization off and assertions on.)
+ *
+ * We use the Hopcroft-Karp algorithm for the graph matching; it seems to work
+ * well enough for our purposes.  This implementation is based on pseudocode
+ * found at:
+ *
+ * http://en.wikipedia.org/w/index.php?title=Hopcroft%E2%80%93Karp_algorithm&oldid=593898016
+ *
+ * This implementation uses the same indices for elements of U and V (the two
+ * halves of the graph) because in our case they are always the same size, and
+ * we always know whether an index represents a u or a v. Index 0 is reserved
+ * for the NIL node.
+ */
+
+struct hk_state
+{
+	int			graph_size;		/* size of half the graph plus NIL node */
+	int			matching;
+	short	  **adjacency;		/* adjacency[u] = [n, v1,v2,v3,...,vn] */
+	short	   *pair_uv;		/* pair_uv[u] -> v */
+	short	   *pair_vu;		/* pair_vu[v] -> u */
+	float	   *distance;		/* distance[u], float so we can have +inf */
+	short	   *queue;			/* queue storage for breadth search */
+};
+
+static bool
+hk_breadth_search(struct hk_state *state)
+{
+	int			gsize = state->graph_size;
+	short	   *queue = state->queue;
+	float	   *distance = state->distance;
+	int			qhead = 0;		/* we never enqueue any node more than once */
+	int			qtail = 0;		/* so don't have to worry about wrapping */
+	int			u;
+
+	distance[0] = INFINITY;
+
+	for (u = 1; u < gsize; ++u)
+	{
+		if (state->pair_uv[u] == 0)
+		{
+			distance[u] = 0;
+			queue[qhead++] = u;
+		}
+		else
+			distance[u] = INFINITY;
+	}
+
+	while (qtail < qhead)
+	{
+		u = queue[qtail++];
+
+		if (distance[u] < distance[0])
+		{
+			short  *u_adj = state->adjacency[u];
+			int		i = u_adj ? u_adj[0] : 0;
+
+			for (; i > 0; --i)
+			{
+				int	u_next = state->pair_vu[u_adj[i]];
+
+				if (isinf(distance[u_next]))
+				{
+					distance[u_next] = 1 + distance[u];
+					queue[qhead++] = u_next;
+					Assert(qhead <= gsize+1);
+				}
+			}
+		}
+	}
+
+	return !isinf(distance[0]);
+}
+
+static bool
+hk_depth_search(struct hk_state *state, int u, int depth)
+{
+	float	   *distance = state->distance;
+	short	   *pair_uv = state->pair_uv;
+	short	   *pair_vu = state->pair_vu;
+	short	   *u_adj = state->adjacency[u];
+	int			i = u_adj ? u_adj[0] : 0;
+
+	if (u == 0)
+		return true;
+
+	if ((depth % 8) == 0)
+		check_stack_depth();
+
+	for (; i > 0; --i)
+	{
+		int		v = u_adj[i];
+
+		if (distance[pair_vu[v]] == distance[u] + 1)
+		{
+			if (hk_depth_search(state, pair_vu[v], depth+1))
+			{
+				pair_vu[v] = u;
+				pair_uv[u] = v;
+				return true;
+			}
+		}
+	}
+
+	distance[u] = INFINITY;
+	return false;
+}
+
+static struct hk_state *
+hk_match(int graph_size, short **adjacency)
+{
+	struct hk_state *state = palloc(sizeof(struct hk_state));
+
+	state->graph_size = graph_size;
+	state->matching = 0;
+	state->adjacency = adjacency;
+	state->pair_uv = palloc0(graph_size * sizeof(short));
+	state->pair_vu = palloc0(graph_size * sizeof(short));
+	state->distance = palloc(graph_size * sizeof(float));
+	state->queue = palloc((graph_size + 2) * sizeof(short));
+
+	while (hk_breadth_search(state))
+	{
+		int		u;
+
+		for (u = 1; u < graph_size; ++u)
+			if (state->pair_uv[u] == 0)
+				if (hk_depth_search(state, u, 1))
+					state->matching++;
+
+		CHECK_FOR_INTERRUPTS();		/* just in case */
+	}
+
+	return state;
+}
+
+static void
+hk_free(struct hk_state *state)
+{
+	/* adjacency matrix is treated as owned by the caller */
+	pfree(state->pair_uv);
+	pfree(state->pair_vu);
+	pfree(state->distance);
+	pfree(state->queue);
+	pfree(state);
+}
+
+/*
+ * Extract lists of grouping sets that can be implemented using a single
+ * rollup-type aggregate pass each. Returns a list of lists of grouping sets.
+ *
+ * Input must be sorted with smallest sets first. Result has each sublist
+ * sorted with smallest sets first.
+ */
+
+static List *
+extract_rollup_sets(List *groupingSets)
+{
+	int			num_sets_raw = list_length(groupingSets);
+	int			num_empty = 0;
+	int			num_sets = 0;		/* distinct sets */
+	int			num_chains = 0;
+	List	   *result = NIL;
+	List	  **results;
+	List	  **orig_sets;
+	Bitmapset **set_masks;
+	int		   *chains;
+	short	  **adjacency;
+	short	   *adjacency_buf;
+	struct hk_state *state;
+	int			i;
+	int			j;
+	int			j_size;
+	ListCell   *lc1 = list_head(groupingSets);
+	ListCell   *lc;
+
+	/*
+	 * Start by stripping out empty sets.  The algorithm doesn't require this,
+	 * but the planner currently needs all empty sets to be returned in the
+	 * first list, so we strip them here and add them back after.
+	 */
+
+	while (lc1 && lfirst(lc1) == NIL)
+	{
+		++num_empty;
+		lc1 = lnext(lc1);
+	}
+
+	/* bail out now if it turns out that all we had were empty sets. */
+
+	if (!lc1)
+		return list_make1(groupingSets);
+
+	/*
+	 * We don't strictly need to remove duplicate sets here, but if we
+	 * don't, they tend to become scattered through the result, which is
+	 * a bit confusing (and irritating if we ever decide to optimize them
+	 * out). So we remove them here and add them back after.
+	 *
+	 * For each non-duplicate set, we fill in the following:
+	 *
+	 * orig_sets[i] = list of the original set lists
+	 * set_masks[i] = bitmapset for testing inclusion
+	 * adjacency[i] = array [n, v1, v2, ... vn] of adjacency indices
+	 *
+	 * chains[i] will be the result group this set is assigned to.
+	 *
+	 * We index all of these from 1 rather than 0 because it is convenient
+	 * to leave 0 free for the NIL node in the graph algorithm.
+	 */
+
+	orig_sets = palloc0((num_sets_raw + 1) * sizeof(List*));
+	set_masks = palloc0((num_sets_raw + 1) * sizeof(Bitmapset *));
+	adjacency = palloc0((num_sets_raw + 1) * sizeof(short *));
+	adjacency_buf = palloc((num_sets_raw + 1) * sizeof(short));
+
+	j_size = 0;
+	j = 0;
+	i = 1;
+
+	for_each_cell(lc, lc1)
+	{
+		List	   *candidate = lfirst(lc);
+		Bitmapset  *candidate_set = NULL;
+		ListCell   *lc2;
+		int			dup_of = 0;
+
+		foreach(lc2, candidate)
+		{
+			candidate_set = bms_add_member(candidate_set, lfirst_int(lc2));
+		}
+
+		/* we can only be a dup if we're the same length as a previous set */
+		if (j_size == list_length(candidate))
+		{
+			int		k;
+			for (k = j; k < i; ++k)
+			{
+				if (bms_equal(set_masks[k], candidate_set))
+				{
+					dup_of = k;
+					break;
+				}
+			}
+		}
+		else if (j_size < list_length(candidate))
+		{
+			j_size = list_length(candidate);
+			j = i;
+		}
+
+		if (dup_of > 0)
+		{
+			orig_sets[dup_of] = lappend(orig_sets[dup_of], candidate);
+			bms_free(candidate_set);
+		}
+		else
+		{
+			int		k;
+			int		n_adj = 0;
+
+			orig_sets[i] = list_make1(candidate);
+			set_masks[i] = candidate_set;
+
+			/* fill in adjacency list; no need to compare equal-size sets */
+
+			for (k = j - 1; k > 0; --k)
+			{
+				if (bms_is_subset(set_masks[k], candidate_set))
+					adjacency_buf[++n_adj] = k;
+			}
+
+			if (n_adj > 0)
+			{
+				adjacency_buf[0] = n_adj;
+				adjacency[i] = palloc((n_adj + 1) * sizeof(short));
+				memcpy(adjacency[i], adjacency_buf, (n_adj + 1) * sizeof(short));
+			}
+			else
+				adjacency[i] = NULL;
+
+			++i;
+		}
+	}
+
+	num_sets = i - 1;
+
+	/*
+	 * Apply the matching algorithm to do the work.
+	 */
+
+	state = hk_match(num_sets + 1, adjacency);
+
+	/*
+	 * Now, the state->pair* fields have the info we need to assign sets to
+	 * chains. Two sets (u,v) belong to the same chain if pair_uv[u] = v or
+	 * pair_vu[v] = u (both will be true, but we check both so that we can do
+	 * it in one pass).
+	 */
+
+	chains = palloc0((num_sets + 1) * sizeof(int));
+
+	for (i = 1; i <= num_sets; ++i)
+	{
+		int u = state->pair_vu[i];
+		int v = state->pair_uv[i];
+
+		if (u > 0 && u < i)
+			chains[i] = chains[u];
+		else if (v > 0 && v < i)
+			chains[i] = chains[v];
+		else
+			chains[i] = ++num_chains;
+	}
+
+	/* build result lists. */
+
+	results = palloc0((num_chains + 1) * sizeof(List*));
+
+	for (i = 1; i <= num_sets; ++i)
+	{
+		int c = chains[i];
+
+		Assert(c > 0);
+
+		results[c] = list_concat(results[c], orig_sets[i]);
+	}
+
+	/* push any empty sets back on the first list. */
+
+	while (num_empty-- > 0)
+		results[1] = lcons(NIL, results[1]);
+
+	/* make result list */
+
+	for (i = 1; i <= num_chains; ++i)
+		result = lappend(result, results[i]);
+
+	/*
+	 * Free all the things.
+	 *
+	 * (This is over-fussy for small sets but for large sets we could have tied
+	 * up a nontrivial amount of memory.)
+	 */
+
+	hk_free(state);
+	pfree(results);
+	pfree(chains);
+	for (i = 1; i <= num_sets; ++i)
+		if (adjacency[i])
+			pfree(adjacency[i]);
+	pfree(adjacency);
+	pfree(adjacency_buf);
+	pfree(orig_sets);
+	for (i = 1; i <= num_sets; ++i)
+		bms_free(set_masks[i]);
+	pfree(set_masks);
+
+	return result;
+}
+
+/*
+ * Reorder the elements of a list of grouping sets such that they have correct
+ * prefix relationships.
+ *
+ * The input must be ordered with smallest sets first; the result is returned
+ * with largest sets first.
+ *
+ * If we're passed in a sortclause, we follow its order of columns to the
+ * extent possible, to minimize the chance that we add unnecessary sorts.
+ * (We're trying here to ensure that GROUPING SETS ((a,b,c),(c)) ORDER BY c,b,a
+ * gets implemented in one pass.)
+ */
+static List *
+reorder_grouping_sets(List *groupingsets, List *sortclause)
+{
+	ListCell   *lc;
+	ListCell   *lc2;
+	List	   *previous = NIL;
+	List	   *result = NIL;
+
+	foreach(lc, groupingsets)
+	{
+		List   *candidate = lfirst(lc);
+		List   *new_elems = list_difference_int(candidate, previous);
+
+		if (list_length(new_elems) > 0)
+		{
+			while (list_length(sortclause) > list_length(previous))
+			{
+				SortGroupClause *sc = list_nth(sortclause, list_length(previous));
+				int ref = sc->tleSortGroupRef;
+				if (list_member_int(new_elems, ref))
+				{
+					previous = lappend_int(previous, ref);
+					new_elems = list_delete_int(new_elems, ref);
+				}
+				else
+				{
+					/* diverged from the sortclause; give up on it */
+					sortclause = NIL;
+					break;
+				}
+			}
+
+			foreach(lc2, new_elems)
+			{
+				previous = lappend_int(previous, lfirst_int(lc2));
+			}
+		}
+
+		result = lcons(list_copy(previous), result);
+		list_free(new_elems);
+	}
+
+	list_free(previous);
+
+	return result;
 }
 
 /*
@@ -2630,11 +3315,11 @@ standard_qp_callback(PlannerInfo *root, void *extra)
 	 * sortClause is certainly sort-able, but GROUP BY and DISTINCT might not
 	 * be, in which case we just leave their pathkeys empty.
 	 */
-	if (parse->groupClause &&
-		grouping_is_sortable(parse->groupClause))
+	if (qp_extra->groupClause &&
+		grouping_is_sortable(qp_extra->groupClause))
 		root->group_pathkeys =
 			make_pathkeys_for_sortclauses(root,
-										  parse->groupClause,
+										  qp_extra->groupClause,
 										  tlist);
 	else
 		root->group_pathkeys = NIL;
@@ -3059,7 +3744,7 @@ make_subplanTargetList(PlannerInfo *root,
 	 * If we're not grouping or aggregating, there's nothing to do here;
 	 * query_planner should receive the unmodified target list.
 	 */
-	if (!parse->hasAggs && !parse->groupClause && !root->hasHavingQual &&
+	if (!parse->hasAggs && !parse->groupClause && !parse->groupingSets && !root->hasHavingQual &&
 		!parse->hasWindowFuncs)
 	{
 		*need_tlist_eval = true;
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index ec828cd..11c9e82 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -67,6 +67,12 @@ typedef struct
 	int			rtoffset;
 } fix_upper_expr_context;
 
+typedef struct
+{
+	PlannerInfo *root;
+	Bitmapset   *groupedcols;
+} set_group_vars_context;
+
 /*
  * Check if a Const node is a regclass value.  We accept plain OID too,
  * since a regclass Const will get folded to that type if it's an argument
@@ -133,6 +139,8 @@ static List *set_returning_clause_references(PlannerInfo *root,
 static bool fix_opfuncids_walker(Node *node, void *context);
 static bool extract_query_dependencies_walker(Node *node,
 								  PlannerInfo *context);
+static void set_group_vars(PlannerInfo *root, Agg *agg);
+static Node *set_group_vars_mutator(Node *node, set_group_vars_context *context);
 
 
 /*****************************************************************************
@@ -660,6 +668,17 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
 			}
 			break;
 		case T_Agg:
+			if (((Agg *) plan)->aggstrategy == AGG_CHAINED)
+			{
+				/* chained agg does not evaluate tlist */
+				set_dummy_tlist_references(plan, rtoffset);
+			}
+			else
+			{
+				set_upper_references(root, plan, rtoffset);
+				set_group_vars(root, (Agg *) plan);
+			}
+			break;
 		case T_Group:
 			set_upper_references(root, plan, rtoffset);
 			break;
@@ -1074,6 +1093,7 @@ copyVar(Var *var)
  * We must look up operator opcode info for OpExpr and related nodes,
  * add OIDs from regclass Const nodes into root->glob->relationOids, and
  * add catalog TIDs for user-defined functions into root->glob->invalItems.
+ * We also fill in column index lists for GROUPING() expressions.
  *
  * We assume it's okay to update opcode info in-place.  So this could possibly
  * scribble on the planner's input data structures, but it's OK.
@@ -1137,6 +1157,31 @@ fix_expr_common(PlannerInfo *root, Node *node)
 				lappend_oid(root->glob->relationOids,
 							DatumGetObjectId(con->constvalue));
 	}
+	else if (IsA(node, GroupingFunc))
+	{
+		GroupingFunc *g = (GroupingFunc *) node;
+		AttrNumber *refmap = root->grouping_map;
+
+		/* If there are no grouping sets, we don't need this. */
+
+		Assert(refmap || g->cols == NIL);
+
+		if (refmap)
+		{
+			ListCell   *lc;
+			List	   *cols = NIL;
+
+			foreach(lc, g->refs)
+			{
+				cols = lappend_int(cols, refmap[lfirst_int(lc)]);
+			}
+
+			Assert(!g->cols || equal(cols, g->cols));
+
+			if (!g->cols)
+				g->cols = cols;
+		}
+	}
 }
 
 /*
@@ -1264,6 +1309,98 @@ fix_scan_expr_walker(Node *node, fix_scan_expr_context *context)
 								  (void *) context);
 }
 
+
+/*
+ * set_group_vars
+ *    Modify any Var references in the target list of a non-trivial
+ *    (i.e. contains grouping sets) Agg node to use GroupedVar instead,
+ *    which will conditionally replace them with nulls at runtime.
+ */
+static void
+set_group_vars(PlannerInfo *root, Agg *agg)
+{
+	set_group_vars_context context;
+	AttrNumber *groupColIdx = root->groupColIdx;
+	int			numCols = list_length(root->parse->groupClause);
+	int 		i;
+	Bitmapset  *cols = NULL;
+
+	if (!agg->groupingSets)
+		return;
+
+	if (!groupColIdx)
+	{
+		Assert(numCols == agg->numCols);
+		groupColIdx = agg->grpColIdx;
+	}
+
+	context.root = root;
+
+	for (i = 0; i < numCols; ++i)
+		cols = bms_add_member(cols, groupColIdx[i]);
+
+	context.groupedcols = cols;
+
+	agg->plan.targetlist = (List *) set_group_vars_mutator((Node *) agg->plan.targetlist,
+														   &context);
+	agg->plan.qual = (List *) set_group_vars_mutator((Node *) agg->plan.qual,
+													 &context);
+}
+
+static Node *
+set_group_vars_mutator(Node *node, set_group_vars_context *context)
+{
+	if (node == NULL)
+		return NULL;
+	if (IsA(node, Var))
+	{
+		Var *var = (Var *) node;
+
+		if (var->varno == OUTER_VAR
+			&& bms_is_member(var->varattno, context->groupedcols))
+		{
+			var = copyVar(var);
+			var->xpr.type = T_GroupedVar;
+		}
+
+		return (Node *) var;
+	}
+	else if (IsA(node, Aggref))
+	{
+		/*
+		 * Don't recurse into the arguments or filter of an Aggref, since
+		 * those see the values prior to grouping.  But do recurse into the
+		 * direct args, if any.
+		 */
+
+		if (((Aggref *)node)->aggdirectargs != NIL)
+		{
+			Aggref *newnode = palloc(sizeof(Aggref));
+
+			memcpy(newnode, node, sizeof(Aggref));
+
+			newnode->aggdirectargs
+				= (List *) expression_tree_mutator((Node *) newnode->aggdirectargs,
+												   set_group_vars_mutator,
+												   (void *) context);
+
+			return (Node *) newnode;
+		}
+
+		return node;
+	}
+	else if (IsA(node, GroupingFunc))
+	{
+		/*
+		 * GroupingFuncs don't see the values at all.
+		 */
+		return node;
+	}
+	return expression_tree_mutator(node, set_group_vars_mutator,
+								   (void *) context);
+}
+
+
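[Annotation, not part of the patch: the recursion rules of set_group_vars_mutator above can be summarized as: rewrite grouped Vars to GroupedVar, leave ordinary aggregate arguments and filters alone since they see pre-grouping values, rewrite ordered-set direct arguments, and skip GROUPING() entirely. A toy Python sketch of that selective rewrite, using invented tuple-based node shapes rather than real PostgreSQL nodes:]

```python
def mutate(node, grouped_cols):
    """Selective expression rewrite mirroring set_group_vars_mutator.

    Node shapes are invented stand-ins:
      ("var", col), ("agg", directargs, args),
      ("grouping", args), ("op", name, operands)
    """
    kind = node[0]
    if kind == "var":
        _, col = node
        # grouped columns become conditionally-nulled GroupedVars
        return ("groupedvar", col) if col in grouped_cols else node
    if kind == "agg":
        _, directargs, args = node
        # ordinary args untouched (pre-grouping values); direct args rewritten
        return ("agg", [mutate(a, grouped_cols) for a in directargs], args)
    if kind == "grouping":
        # GROUPING() never sees the values at all
        return node
    if kind == "op":
        _, name, operands = node
        return ("op", name, [mutate(a, grouped_cols) for a in operands])
    return node
```

[The asymmetry between args and direct args is the point: direct (ordered-set) arguments are evaluated once per group, after grouping, so they must see the substituted values.]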
 /*
  * set_join_references
  *	  Modify the target list and quals of a join node to reference its
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index acfd0bc..7a04e25 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -79,7 +79,8 @@ static Node *process_sublinks_mutator(Node *node,
 static Bitmapset *finalize_plan(PlannerInfo *root,
 			  Plan *plan,
 			  Bitmapset *valid_params,
-			  Bitmapset *scan_params);
+			  Bitmapset *scan_params,
+			  Agg *agg_chain_head);
 static bool finalize_primnode(Node *node, finalize_primnode_context *context);
 
 
@@ -336,6 +337,48 @@ replace_outer_agg(PlannerInfo *root, Aggref *agg)
 }
 
 /*
+ * Generate a Param node to replace the given GroupingFunc expression which is
+ * expected to have agglevelsup > 0 (ie, it is not local).
+ */
+static Param *
+replace_outer_grouping(PlannerInfo *root, GroupingFunc *grp)
+{
+	Param	   *retval;
+	PlannerParamItem *pitem;
+	Index		levelsup;
+
+	Assert(grp->agglevelsup > 0 && grp->agglevelsup < root->query_level);
+
+	/* Find the query level the GroupingFunc belongs to */
+	for (levelsup = grp->agglevelsup; levelsup > 0; levelsup--)
+		root = root->parent_root;
+
+	/*
+	 * It does not seem worthwhile to try to match duplicate outer aggs. Just
+	 * make a new slot every time.
+	 */
+	grp = (GroupingFunc *) copyObject(grp);
+	IncrementVarSublevelsUp((Node *) grp, -((int) grp->agglevelsup), 0);
+	Assert(grp->agglevelsup == 0);
+
+	pitem = makeNode(PlannerParamItem);
+	pitem->item = (Node *) grp;
+	pitem->paramId = root->glob->nParamExec++;
+
+	root->plan_params = lappend(root->plan_params, pitem);
+
+	retval = makeNode(Param);
+	retval->paramkind = PARAM_EXEC;
+	retval->paramid = pitem->paramId;
+	retval->paramtype = exprType((Node *) grp);
+	retval->paramtypmod = -1;
+	retval->paramcollid = InvalidOid;
+	retval->location = grp->location;
+
+	return retval;
+}
+
+/*
  * Generate a new Param node that will not conflict with any other.
  *
  * This is used to create Params representing subplan outputs.
@@ -1494,14 +1537,16 @@ simplify_EXISTS_query(PlannerInfo *root, Query *query)
 {
 	/*
 	 * We don't try to simplify at all if the query uses set operations,
-	 * aggregates, modifying CTEs, HAVING, OFFSET, or FOR UPDATE/SHARE; none
-	 * of these seem likely in normal usage and their possible effects are
-	 * complex.  (Note: we could ignore an "OFFSET 0" clause, but that
-	 * traditionally is used as an optimization fence, so we don't.)
+	 * aggregates, grouping sets, modifying CTEs, HAVING, OFFSET, or FOR
+	 * UPDATE/SHARE; none of these seem likely in normal usage and their
+	 * possible effects are complex.  (Note: we could ignore an "OFFSET 0"
+	 * clause, but that traditionally is used as an optimization fence, so we
+	 * don't.)
 	 */
 	if (query->commandType != CMD_SELECT ||
 		query->setOperations ||
 		query->hasAggs ||
+		query->groupingSets ||
 		query->hasWindowFuncs ||
 		query->hasModifyingCTE ||
 		query->havingQual ||
@@ -1851,6 +1896,11 @@ replace_correlation_vars_mutator(Node *node, PlannerInfo *root)
 		if (((Aggref *) node)->agglevelsup > 0)
 			return (Node *) replace_outer_agg(root, (Aggref *) node);
 	}
+	if (IsA(node, GroupingFunc))
+	{
+		if (((GroupingFunc *) node)->agglevelsup > 0)
+			return (Node *) replace_outer_grouping(root, (GroupingFunc *) node);
+	}
 	return expression_tree_mutator(node,
 								   replace_correlation_vars_mutator,
 								   (void *) root);
@@ -2081,7 +2131,7 @@ SS_finalize_plan(PlannerInfo *root, Plan *plan, bool attach_initplans)
 	/*
 	 * Now recurse through plan tree.
 	 */
-	(void) finalize_plan(root, plan, valid_params, NULL);
+	(void) finalize_plan(root, plan, valid_params, NULL, NULL);
 
 	bms_free(valid_params);
 
@@ -2132,7 +2182,7 @@ SS_finalize_plan(PlannerInfo *root, Plan *plan, bool attach_initplans)
  */
 static Bitmapset *
 finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
-			  Bitmapset *scan_params)
+			  Bitmapset *scan_params, Agg *agg_chain_head)
 {
 	finalize_primnode_context context;
 	int			locally_added_param;
@@ -2347,7 +2397,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2363,7 +2414,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2379,7 +2431,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2395,7 +2448,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2411,7 +2465,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  valid_params,
-													  scan_params));
+													  scan_params,
+													  NULL));
 				}
 			}
 			break;
@@ -2478,8 +2533,30 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 							  &context);
 			break;
 
-		case T_Hash:
 		case T_Agg:
+			{
+				Agg	   *agg = (Agg *) plan;
+
+				if (agg->aggstrategy == AGG_CHAINED)
+				{
+					Assert(agg_chain_head);
+
+					/*
+					 * Our real tlist and qual are the ones in the chain
+					 * head, not the local ones, which are just dummies
+					 * for passthrough.  Fortunately, finalize_primnode
+					 * can safely be called more than once.
+					 */
+
+					finalize_primnode((Node *) agg_chain_head->plan.targetlist, &context);
+					finalize_primnode((Node *) agg_chain_head->plan.qual, &context);
+				}
+				else if (agg->chain_depth > 0)
+					agg_chain_head = agg;
+			}
+			break;
+
+		case T_Hash:
 		case T_Material:
 		case T_Sort:
 		case T_Unique:
@@ -2496,7 +2573,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 	child_params = finalize_plan(root,
 								 plan->lefttree,
 								 valid_params,
-								 scan_params);
+								 scan_params,
+								 agg_chain_head);
 	context.paramids = bms_add_members(context.paramids, child_params);
 
 	if (nestloop_params)
@@ -2505,7 +2583,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 		child_params = finalize_plan(root,
 									 plan->righttree,
 									 bms_union(nestloop_params, valid_params),
-									 scan_params);
+									 scan_params,
+									 agg_chain_head);
 		/* ... and they don't count as parameters used at my level */
 		child_params = bms_difference(child_params, nestloop_params);
 		bms_free(nestloop_params);
@@ -2516,7 +2595,8 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 		child_params = finalize_plan(root,
 									 plan->righttree,
 									 valid_params,
-									 scan_params);
+									 scan_params,
+									 agg_chain_head);
 	}
 	context.paramids = bms_add_members(context.paramids, child_params);
 
diff --git a/src/backend/optimizer/prep/prepjointree.c b/src/backend/optimizer/prep/prepjointree.c
index 50acfe4..8b6bcd9 100644
--- a/src/backend/optimizer/prep/prepjointree.c
+++ b/src/backend/optimizer/prep/prepjointree.c
@@ -1409,6 +1409,7 @@ is_simple_subquery(Query *subquery, RangeTblEntry *rte,
 	if (subquery->hasAggs ||
 		subquery->hasWindowFuncs ||
 		subquery->groupClause ||
+		subquery->groupingSets ||
 		subquery->havingQual ||
 		subquery->sortClause ||
 		subquery->distinctClause ||
diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c
index b90fee3..d8a6391 100644
--- a/src/backend/optimizer/prep/prepunion.c
+++ b/src/backend/optimizer/prep/prepunion.c
@@ -268,13 +268,15 @@ recurse_set_operations(Node *setOp, PlannerInfo *root,
 		 */
 		if (pNumGroups)
 		{
-			if (subquery->groupClause || subquery->distinctClause ||
+			if (subquery->groupClause || subquery->groupingSets ||
+				subquery->distinctClause ||
 				subroot->hasHavingQual || subquery->hasAggs)
 				*pNumGroups = subplan->plan_rows;
 			else
 				*pNumGroups = estimate_num_groups(subroot,
 								get_tlist_exprs(subquery->targetList, false),
-												  subplan->plan_rows);
+												  subplan->plan_rows,
+												  NULL);
 		}
 
 		/*
@@ -771,6 +773,8 @@ make_union_unique(SetOperationStmt *op, Plan *plan,
 								 extract_grouping_cols(groupList,
 													   plan->targetlist),
 								 extract_grouping_ops(groupList),
+								 NIL,
+								 NULL,
 								 numGroups,
 								 plan);
 		/* Hashed aggregation produces randomly-ordered results */
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 84d58ae..ccbc670 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -4307,6 +4307,7 @@ inline_function(Oid funcid, Oid result_type, Oid result_collid,
 		querytree->jointree->fromlist ||
 		querytree->jointree->quals ||
 		querytree->groupClause ||
+		querytree->groupingSets ||
 		querytree->havingQual ||
 		querytree->windowClause ||
 		querytree->distinctClause ||
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index faca30b..9190f84 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1194,7 +1194,8 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
 	/* Estimate number of output rows */
 	pathnode->path.rows = estimate_num_groups(root,
 											  sjinfo->semi_rhs_exprs,
-											  rel->rows);
+											  rel->rows,
+											  NULL);
 	numCols = list_length(sjinfo->semi_rhs_exprs);
 
 	if (sjinfo->semi_can_btree)
diff --git a/src/backend/optimizer/util/tlist.c b/src/backend/optimizer/util/tlist.c
index a1a504b..f702b8c 100644
--- a/src/backend/optimizer/util/tlist.c
+++ b/src/backend/optimizer/util/tlist.c
@@ -395,6 +395,28 @@ get_sortgrouplist_exprs(List *sgClauses, List *targetList)
  *****************************************************************************/
 
 /*
+ * get_sortgroupref_clause
+ *		Find the SortGroupClause matching the given SortGroupRef index,
+ *		and return it.
+ */
+SortGroupClause *
+get_sortgroupref_clause(Index sortref, List *clauses)
+{
+	ListCell   *l;
+
+	foreach(l, clauses)
+	{
+		SortGroupClause *cl = (SortGroupClause *) lfirst(l);
+
+		if (cl->tleSortGroupRef == sortref)
+			return cl;
+	}
+
+	elog(ERROR, "ORDER/GROUP BY expression not found in list");
+	return NULL;				/* keep compiler quiet */
+}
+
+/*
  * extract_grouping_ops - make an array of the equality operator OIDs
  *		for a SortGroupClause list
  */
diff --git a/src/backend/optimizer/util/var.c b/src/backend/optimizer/util/var.c
index 8f86432..0f25539 100644
--- a/src/backend/optimizer/util/var.c
+++ b/src/backend/optimizer/util/var.c
@@ -564,6 +564,30 @@ pull_var_clause_walker(Node *node, pull_var_clause_context *context)
 				break;
 		}
 	}
+	else if (IsA(node, GroupingFunc))
+	{
+		if (((GroupingFunc *) node)->agglevelsup != 0)
+			elog(ERROR, "Upper-level GROUPING found where not expected");
+		switch (context->aggbehavior)
+		{
+			case PVC_REJECT_AGGREGATES:
+				elog(ERROR, "GROUPING found where not expected");
+				break;
+			case PVC_INCLUDE_AGGREGATES:
+				context->varlist = lappend(context->varlist, node);
+				/* we do NOT descend into the contained expression */
+				return false;
+			case PVC_RECURSE_AGGREGATES:
+				/*
+				 * We do NOT descend into the contained expression, even
+				 * if the caller asked for it, because we never actually
+				 * evaluate it: the result is driven entirely off the
+				 * associated GROUP BY clause, so we never need to
+				 * extract the actual Vars here.
+				 */
+				return false;
+		}
+	}
 	else if (IsA(node, PlaceHolderVar))
 	{
 		if (((PlaceHolderVar *) node)->phlevelsup != 0)
diff --git a/src/backend/parser/analyze.c b/src/backend/parser/analyze.c
index a68f2e8..fe93b87 100644
--- a/src/backend/parser/analyze.c
+++ b/src/backend/parser/analyze.c
@@ -964,6 +964,7 @@ transformSelectStmt(ParseState *pstate, SelectStmt *stmt)
 
 	qry->groupClause = transformGroupClause(pstate,
 											stmt->groupClause,
+											&qry->groupingSets,
 											&qry->targetList,
 											qry->sortClause,
 											EXPR_KIND_GROUP_BY,
@@ -1010,7 +1011,7 @@ transformSelectStmt(ParseState *pstate, SelectStmt *stmt)
 	qry->hasSubLinks = pstate->p_hasSubLinks;
 	qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
 	qry->hasAggs = pstate->p_hasAggs;
-	if (pstate->p_hasAggs || qry->groupClause || qry->havingQual)
+	if (pstate->p_hasAggs || qry->groupClause || qry->groupingSets || qry->havingQual)
 		parseCheckAggregates(pstate, qry);
 
 	foreach(l, stmt->lockingClause)
@@ -1470,7 +1471,7 @@ transformSetOperationStmt(ParseState *pstate, SelectStmt *stmt)
 	qry->hasSubLinks = pstate->p_hasSubLinks;
 	qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
 	qry->hasAggs = pstate->p_hasAggs;
-	if (pstate->p_hasAggs || qry->groupClause || qry->havingQual)
+	if (pstate->p_hasAggs || qry->groupClause || qry->groupingSets || qry->havingQual)
 		parseCheckAggregates(pstate, qry);
 
 	foreach(l, lockingClause)
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index cf0d317..c1eed20 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -367,6 +367,10 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				create_generic_options alter_generic_options
 				relation_expr_list dostmt_opt_list
 
+%type <list>	group_by_list
+%type <node>	group_by_item empty_grouping_set rollup_clause cube_clause
+%type <node>	grouping_sets_clause
+
 %type <list>	opt_fdw_options fdw_options
 %type <defelt>	fdw_option
 
@@ -432,7 +436,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <list>	ExclusionConstraintList ExclusionConstraintElem
 %type <list>	func_arg_list
 %type <node>	func_arg_expr
-%type <list>	row type_list array_expr_list
+%type <list>	row explicit_row implicit_row type_list array_expr_list
 %type <node>	case_expr case_arg when_clause case_default
 %type <list>	when_clause_list
 %type <ival>	sub_type
@@ -557,7 +561,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	CLUSTER COALESCE COLLATE COLLATION COLUMN COMMENT COMMENTS COMMIT
 	COMMITTED CONCURRENTLY CONFIGURATION CONNECTION CONSTRAINT CONSTRAINTS
 	CONTENT_P CONTINUE_P CONVERSION_P COPY COST CREATE
-	CROSS CSV CURRENT_P
+	CROSS CSV CUBE CURRENT_P
 	CURRENT_CATALOG CURRENT_DATE CURRENT_ROLE CURRENT_SCHEMA
 	CURRENT_TIME CURRENT_TIMESTAMP CURRENT_USER CURSOR CYCLE
 
@@ -572,7 +576,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	FALSE_P FAMILY FETCH FILTER FIRST_P FLOAT_P FOLLOWING FOR
 	FORCE FOREIGN FORWARD FREEZE FROM FULL FUNCTION FUNCTIONS
 
-	GLOBAL GRANT GRANTED GREATEST GROUP_P
+	GLOBAL GRANT GRANTED GREATEST GROUP_P GROUPING
 
 	HANDLER HAVING HEADER_P HOLD HOUR_P
 
@@ -606,11 +610,11 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 	RANGE READ REAL REASSIGN RECHECK RECURSIVE REF REFERENCES REFRESH REINDEX
 	RELATIVE_P RELEASE RENAME REPEATABLE REPLACE REPLICA
-	RESET RESTART RESTRICT RETURNING RETURNS REVOKE RIGHT ROLE ROLLBACK
+	RESET RESTART RESTRICT RETURNING RETURNS REVOKE RIGHT ROLE ROLLBACK ROLLUP
 	ROW ROWS RULE
 
 	SAVEPOINT SCHEMA SCROLL SEARCH SECOND_P SECURITY SELECT SEQUENCE SEQUENCES
-	SERIALIZABLE SERVER SESSION SESSION_USER SET SETOF SHARE
+	SERIALIZABLE SERVER SESSION SESSION_USER SET SETS SETOF SHARE
 	SHOW SIMILAR SIMPLE SKIP SMALLINT SNAPSHOT SOME STABLE STANDALONE_P START
 	STATEMENT STATISTICS STDIN STDOUT STORAGE STRICT_P STRIP_P SUBSTRING
 	SYMMETRIC SYSID SYSTEM_P
@@ -671,6 +675,11 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
  * and for NULL so that it can follow b_expr in ColQualList without creating
  * postfix-operator problems.
  *
+ * To support CUBE and ROLLUP in GROUP BY without reserving them, we give them
+ * an explicit priority lower than '(', so that a rule with CUBE '(' will shift
+ * rather than reducing a conflicting rule that takes CUBE as a function name.
+ * Using the same precedence as IDENT seems right for the reasons given above.
+ *
  * The frame_bound productions UNBOUNDED PRECEDING and UNBOUNDED FOLLOWING
  * are even messier: since UNBOUNDED is an unreserved keyword (per spec!),
  * there is no principled way to distinguish these from the productions
@@ -681,7 +690,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
  * blame any funny behavior of UNBOUNDED on the SQL standard, though.
  */
 %nonassoc	UNBOUNDED		/* ideally should have same precedence as IDENT */
-%nonassoc	IDENT NULL_P PARTITION RANGE ROWS PRECEDING FOLLOWING
+%nonassoc	IDENT NULL_P PARTITION RANGE ROWS PRECEDING FOLLOWING CUBE ROLLUP
 %left		Op OPERATOR		/* multi-character ops and user-defined operators */
 %left		'+' '-'
 %left		'*' '/' '%'
@@ -10132,11 +10141,79 @@ first_or_next: FIRST_P								{ $$ = 0; }
 		;
 
 
+/*
+ * This syntax for group_clause tries to follow the spec quite closely.
+ * However, the spec allows only column references, not expressions,
+ * which introduces an ambiguity between implicit row constructors
+ * (a,b) and lists of column references.
+ *
+ * We handle this by using the a_expr production for what the spec calls
+ * <ordinary grouping set>, which in the spec represents either one column
+ * reference or a parenthesized list of column references. Then, we check the
+ * top node of the a_expr to see if it's an implicit RowExpr, and if so, just
+ * grab and use the list, discarding the node.  (This is done in parse
+ * analysis, not here.)
+ *
+ * (We abuse the row_format field of RowExpr to distinguish implicit and
+ * explicit row constructors; it's debatable if anyone sanely wants to use them
+ * in a group clause, but if they have a reason to, we make it possible.)
+ *
+ * Each item in the group_clause list is either an expression tree or a
+ * GroupingSet node of some type.
+ */
+
 group_clause:
-			GROUP_P BY expr_list					{ $$ = $3; }
+			GROUP_P BY group_by_list				{ $$ = $3; }
 			| /*EMPTY*/								{ $$ = NIL; }
 		;
 
+group_by_list:
+			group_by_item							{ $$ = list_make1($1); }
+			| group_by_list ',' group_by_item		{ $$ = lappend($1,$3); }
+		;
+
+group_by_item:
+			a_expr									{ $$ = $1; }
+			| empty_grouping_set					{ $$ = $1; }
+			| cube_clause							{ $$ = $1; }
+			| rollup_clause							{ $$ = $1; }
+			| grouping_sets_clause					{ $$ = $1; }
+		;
+
+empty_grouping_set:
+			'(' ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_EMPTY, NIL, @1);
+				}
+		;
+
+/*
+ * These hacks rely on setting precedence of CUBE and ROLLUP below that of '(',
+ * so that they shift in these rules rather than reducing the conflicting
+ * unreserved_keyword rule.
+ */
+
+rollup_clause:
+			ROLLUP '(' expr_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_ROLLUP, $3, @1);
+				}
+		;
+
+cube_clause:
+			CUBE '(' expr_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_CUBE, $3, @1);
+				}
+		;
+
+grouping_sets_clause:
+			GROUPING SETS '(' group_by_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_SETS, $4, @1);
+				}
+		;
+
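[Annotation, not part of the patch: the expansion these productions ultimately feed, performed later in parse analysis, follows the spec: ROLLUP(a,b,c) yields the sets (a,b,c), (a,b), (a), (), and CUBE expands to the full powerset. A rough Python sketch of that expansion, illustrative only; the function names are invented:]

```python
from itertools import chain, combinations

def expand_rollup(cols):
    # ROLLUP(a, b, c) -> (a, b, c), (a, b), (a,), ()
    return [tuple(cols[:i]) for i in range(len(cols), -1, -1)]

def expand_cube(cols):
    # CUBE(a, b) -> all 2^n subsets: (a, b), (a,), (b,), ()
    return [tuple(c)
            for c in chain.from_iterable(combinations(cols, n)
                                         for n in range(len(cols), -1, -1))]
```

[GROUPING SETS (...) simply lists its member sets, with nested ROLLUP/CUBE/SETS items expanded recursively; the ordering of the expanded sets shown here is one conventional order, not mandated by the spec.]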
 having_clause:
 			HAVING a_expr							{ $$ = $2; }
 			| /*EMPTY*/								{ $$ = NULL; }
@@ -11759,15 +11836,33 @@ c_expr:		columnref								{ $$ = $1; }
 					n->location = @1;
 					$$ = (Node *)n;
 				}
-			| row
+			| explicit_row
 				{
 					RowExpr *r = makeNode(RowExpr);
 					r->args = $1;
 					r->row_typeid = InvalidOid;	/* not analyzed yet */
 					r->colnames = NIL;	/* to be filled in during analysis */
+					r->row_format = COERCE_EXPLICIT_CALL; /* abuse */
 					r->location = @1;
 					$$ = (Node *)r;
 				}
+			| implicit_row
+				{
+					RowExpr *r = makeNode(RowExpr);
+					r->args = $1;
+					r->row_typeid = InvalidOid;	/* not analyzed yet */
+					r->colnames = NIL;	/* to be filled in during analysis */
+					r->row_format = COERCE_IMPLICIT_CAST; /* abuse */
+					r->location = @1;
+					$$ = (Node *)r;
+				}
+			| GROUPING '(' expr_list ')'
+				{
+					GroupingFunc *g = makeNode(GroupingFunc);
+					g->args = $3;
+					g->location = @1;
+					$$ = (Node *)g;
+				}
 		;
 
 func_application: func_name '(' ')'
@@ -12517,6 +12612,13 @@ row:		ROW '(' expr_list ')'					{ $$ = $3; }
 			| '(' expr_list ',' a_expr ')'			{ $$ = lappend($2, $4); }
 		;
 
+explicit_row:	ROW '(' expr_list ')'				{ $$ = $3; }
+			| ROW '(' ')'							{ $$ = NIL; }
+		;
+
+implicit_row:	'(' expr_list ',' a_expr ')'		{ $$ = lappend($2, $4); }
+		;
+
 sub_type:	ANY										{ $$ = ANY_SUBLINK; }
 			| SOME									{ $$ = ANY_SUBLINK; }
 			| ALL									{ $$ = ALL_SUBLINK; }
@@ -13325,6 +13427,7 @@ unreserved_keyword:
 			| COPY
 			| COST
 			| CSV
+			| CUBE
 			| CURRENT_P
 			| CURSOR
 			| CYCLE
@@ -13473,6 +13576,7 @@ unreserved_keyword:
 			| REVOKE
 			| ROLE
 			| ROLLBACK
+			| ROLLUP
 			| ROWS
 			| RULE
 			| SAVEPOINT
@@ -13487,6 +13591,7 @@ unreserved_keyword:
 			| SERVER
 			| SESSION
 			| SET
+			| SETS
 			| SHARE
 			| SHOW
 			| SIMPLE
@@ -13570,6 +13675,7 @@ col_name_keyword:
 			| EXTRACT
 			| FLOAT_P
 			| GREATEST
+			| GROUPING
 			| INOUT
 			| INT_P
 			| INTEGER
diff --git a/src/backend/parser/parse_agg.c b/src/backend/parser/parse_agg.c
index 7b0e668..19391d0 100644
--- a/src/backend/parser/parse_agg.c
+++ b/src/backend/parser/parse_agg.c
@@ -42,7 +42,9 @@ typedef struct
 {
 	ParseState *pstate;
 	Query	   *qry;
+	PlannerInfo *root;
 	List	   *groupClauses;
+	List	   *groupClauseCommonVars;
 	bool		have_non_var_grouping;
 	List	  **func_grouped_rels;
 	int			sublevels_up;
@@ -56,11 +58,18 @@ static int check_agg_arguments(ParseState *pstate,
 static bool check_agg_arguments_walker(Node *node,
 						   check_agg_arguments_context *context);
 static void check_ungrouped_columns(Node *node, ParseState *pstate, Query *qry,
-						List *groupClauses, bool have_non_var_grouping,
+						List *groupClauses, List *groupClauseVars,
+						bool have_non_var_grouping,
 						List **func_grouped_rels);
 static bool check_ungrouped_columns_walker(Node *node,
 							   check_ungrouped_columns_context *context);
-
+static void finalize_grouping_exprs(Node *node, ParseState *pstate, Query *qry,
+									List *groupClauses, PlannerInfo *root,
+									bool have_non_var_grouping);
+static bool finalize_grouping_exprs_walker(Node *node,
+							   check_ungrouped_columns_context *context);
+static void check_agglevels_and_constraints(ParseState *pstate, Node *expr);
+static List *expand_groupingset_node(GroupingSet *gs);
 
 /*
  * transformAggregateCall -
@@ -96,10 +105,7 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 	List	   *tdistinct = NIL;
 	AttrNumber	attno = 1;
 	int			save_next_resno;
-	int			min_varlevel;
 	ListCell   *lc;
-	const char *err;
-	bool		errkind;
 
 	if (AGGKIND_IS_ORDERED_SET(agg->aggkind))
 	{
@@ -214,15 +220,96 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 	agg->aggorder = torder;
 	agg->aggdistinct = tdistinct;
 
+	check_agglevels_and_constraints(pstate, (Node *) agg);
+}
+
+/*
+ * transformGroupingFunc -
+ *		Transform a GROUPING expression
+ *
+ * GROUPING() behaves very like an aggregate.  Processing of levels and nesting
+ * is done as for aggregates.  We set p_hasAggs for these expressions too.
+ */
+Node *
+transformGroupingFunc(ParseState *pstate, GroupingFunc *p)
+{
+	ListCell   *lc;
+	List	   *args = p->args;
+	List	   *result_list = NIL;
+	GroupingFunc *result = makeNode(GroupingFunc);
+
+	if (list_length(args) > 31)
+		ereport(ERROR,
+				(errcode(ERRCODE_TOO_MANY_ARGUMENTS),
+				 errmsg("GROUPING must have fewer than 32 arguments"),
+				 parser_errposition(pstate, p->location)));
+
+	foreach(lc, args)
+	{
+		Node *current_result;
+
+		current_result = transformExpr(pstate, (Node *) lfirst(lc),
+									   pstate->p_expr_kind);
+
+		/* acceptability of expressions is checked later */
+
+		result_list = lappend(result_list, current_result);
+	}
+
+	result->args = result_list;
+	result->location = p->location;
+
+	check_agglevels_and_constraints(pstate, (Node *) result);
+
+	return (Node *) result;
+}
+
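[Annotation, not part of the patch: the 31-argument limit above reflects the result representation. GROUPING() packs one bit per argument into an integer, leftmost argument in the most significant bit, with a bit set to 1 when that expression is aggregated over, i.e. not part of the current grouping set, so 31 arguments keep the result within a signed 32-bit integer. A minimal Python sketch of that semantics:]

```python
def grouping_value(args, current_set):
    """Bitmask result of GROUPING(args...) for one grouping set.

    One bit per argument, leftmost argument most significant; a bit
    is 1 when that expression is aggregated over (not part of the
    current grouping set).
    """
    value = 0
    for expr in args:
        value = (value << 1) | (0 if expr in current_set else 1)
    return value
```

[So for GROUPING(a, b), the grouping set (a) yields 1, the grand total () yields 3, and the fully grouped set (a, b) yields 0.]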
+/*
+ * Aggregate functions and grouping operations (which are combined in the spec
+ * as <set function specification>) are very similar with regard to level and
+ * nesting restrictions (though we allow a lot more things than the spec does).
+ * Centralise those restrictions here.
+ */
+static void
+check_agglevels_and_constraints(ParseState *pstate, Node *expr)
+{
+	List	   *directargs = NIL;
+	List	   *args = NIL;
+	Expr	   *filter = NULL;
+	int			min_varlevel;
+	int			location = -1;
+	Index	   *p_levelsup;
+	const char *err;
+	bool		errkind;
+	bool		isAgg = IsA(expr, Aggref);
+
+	if (isAgg)
+	{
+		Aggref *agg = (Aggref *) expr;
+
+		directargs = agg->aggdirectargs;
+		args = agg->args;
+		filter = agg->aggfilter;
+		location = agg->location;
+		p_levelsup = &agg->agglevelsup;
+	}
+	else
+	{
+		GroupingFunc *grp = (GroupingFunc *) expr;
+
+		args = grp->args;
+		location = grp->location;
+		p_levelsup = &grp->agglevelsup;
+	}
+
 	/*
 	 * Check the arguments to compute the aggregate's level and detect
 	 * improper nesting.
 	 */
 	min_varlevel = check_agg_arguments(pstate,
-									   agg->aggdirectargs,
-									   agg->args,
-									   agg->aggfilter);
-	agg->agglevelsup = min_varlevel;
+									   directargs,
+									   args,
+									   filter);
+
+	*p_levelsup = min_varlevel;
 
 	/* Mark the correct pstate level as having aggregates */
 	while (min_varlevel-- > 0)
@@ -247,20 +334,32 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			Assert(false);		/* can't happen */
 			break;
 		case EXPR_KIND_OTHER:
-			/* Accept aggregate here; caller must throw error if wanted */
+			/* Accept aggregate/grouping here; caller must throw error if wanted */
 			break;
 		case EXPR_KIND_JOIN_ON:
 		case EXPR_KIND_JOIN_USING:
-			err = _("aggregate functions are not allowed in JOIN conditions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in JOIN conditions");
+			else
+				err = _("grouping operations are not allowed in JOIN conditions");
+
 			break;
 		case EXPR_KIND_FROM_SUBSELECT:
 			/* Should only be possible in a LATERAL subquery */
 			Assert(pstate->p_lateral_active);
-			/* Aggregate scope rules make it worth being explicit here */
-			err = _("aggregate functions are not allowed in FROM clause of their own query level");
+			/* Aggregate/grouping scope rules make it worth being explicit here */
+			if (isAgg)
+				err = _("aggregate functions are not allowed in FROM clause of their own query level");
+			else
+				err = _("grouping operations are not allowed in FROM clause of their own query level");
+
 			break;
 		case EXPR_KIND_FROM_FUNCTION:
-			err = _("aggregate functions are not allowed in functions in FROM");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in functions in FROM");
+			else
+				err = _("grouping operations are not allowed in functions in FROM");
+
 			break;
 		case EXPR_KIND_WHERE:
 			errkind = true;
@@ -278,10 +377,18 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			/* okay */
 			break;
 		case EXPR_KIND_WINDOW_FRAME_RANGE:
-			err = _("aggregate functions are not allowed in window RANGE");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in window RANGE");
+			else
+				err = _("grouping operations are not allowed in window RANGE");
+
 			break;
 		case EXPR_KIND_WINDOW_FRAME_ROWS:
-			err = _("aggregate functions are not allowed in window ROWS");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in window ROWS");
+			else
+				err = _("grouping operations are not allowed in window ROWS");
+
 			break;
 		case EXPR_KIND_SELECT_TARGET:
 			/* okay */
@@ -312,26 +419,55 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			break;
 		case EXPR_KIND_CHECK_CONSTRAINT:
 		case EXPR_KIND_DOMAIN_CHECK:
-			err = _("aggregate functions are not allowed in check constraints");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in check constraints");
+			else
+				err = _("grouping operations are not allowed in check constraints");
+
 			break;
 		case EXPR_KIND_COLUMN_DEFAULT:
 		case EXPR_KIND_FUNCTION_DEFAULT:
-			err = _("aggregate functions are not allowed in DEFAULT expressions");
+
+			if (isAgg)
+				err = _("aggregate functions are not allowed in DEFAULT expressions");
+			else
+				err = _("grouping operations are not allowed in DEFAULT expressions");
+
 			break;
 		case EXPR_KIND_INDEX_EXPRESSION:
-			err = _("aggregate functions are not allowed in index expressions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in index expressions");
+			else
+				err = _("grouping operations are not allowed in index expressions");
+
 			break;
 		case EXPR_KIND_INDEX_PREDICATE:
-			err = _("aggregate functions are not allowed in index predicates");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in index predicates");
+			else
+				err = _("grouping operations are not allowed in index predicates");
+
 			break;
 		case EXPR_KIND_ALTER_COL_TRANSFORM:
-			err = _("aggregate functions are not allowed in transform expressions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in transform expressions");
+			else
+				err = _("grouping operations are not allowed in transform expressions");
+
 			break;
 		case EXPR_KIND_EXECUTE_PARAMETER:
-			err = _("aggregate functions are not allowed in EXECUTE parameters");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in EXECUTE parameters");
+			else
+				err = _("grouping operations are not allowed in EXECUTE parameters");
+
 			break;
 		case EXPR_KIND_TRIGGER_WHEN:
-			err = _("aggregate functions are not allowed in trigger WHEN conditions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in trigger WHEN conditions");
+			else
+				err = _("grouping operations are not allowed in trigger WHEN conditions");
+
 			break;
 
 			/*
@@ -342,18 +478,22 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			 * which is sane anyway.
 			 */
 	}
+
 	if (err)
 		ereport(ERROR,
 				(errcode(ERRCODE_GROUPING_ERROR),
 				 errmsg_internal("%s", err),
-				 parser_errposition(pstate, agg->location)));
+				 parser_errposition(pstate, location)));
+
 	if (errkind)
 		ereport(ERROR,
 				(errcode(ERRCODE_GROUPING_ERROR),
-		/* translator: %s is name of a SQL construct, eg GROUP BY */
-				 errmsg("aggregate functions are not allowed in %s",
+				 /* translator: %s is name of a SQL construct, eg GROUP BY */
+				 errmsg(isAgg
+						? "aggregate functions are not allowed in %s"
+						: "grouping operations are not allowed in %s",
 						ParseExprKindName(pstate->p_expr_kind)),
-				 parser_errposition(pstate, agg->location)));
+				 parser_errposition(pstate, location)));
 }
 
 /*
@@ -507,6 +647,21 @@ check_agg_arguments_walker(Node *node,
 		/* no need to examine args of the inner aggregate */
 		return false;
 	}
+	if (IsA(node, GroupingFunc))
+	{
+		int			agglevelsup = ((GroupingFunc *) node)->agglevelsup;
+
+		/* convert levelsup to frame of reference of original query */
+		agglevelsup -= context->sublevels_up;
+		/* ignore local aggs of subqueries */
+		if (agglevelsup >= 0)
+		{
+			if (context->min_agglevel < 0 ||
+				context->min_agglevel > agglevelsup)
+				context->min_agglevel = agglevelsup;
+		}
+		/* Continue and descend into subtree */
+	}
 	/* We can throw error on sight for a window function */
 	if (IsA(node, WindowFunc))
 		ereport(ERROR,
@@ -527,6 +682,7 @@ check_agg_arguments_walker(Node *node,
 		context->sublevels_up--;
 		return result;
 	}
+
 	return expression_tree_walker(node,
 								  check_agg_arguments_walker,
 								  (void *) context);
@@ -770,17 +926,67 @@ transformWindowFuncCall(ParseState *pstate, WindowFunc *wfunc,
 void
 parseCheckAggregates(ParseState *pstate, Query *qry)
 {
+	List       *gset_common = NIL;
 	List	   *groupClauses = NIL;
+	List	   *groupClauseCommonVars = NIL;
 	bool		have_non_var_grouping;
 	List	   *func_grouped_rels = NIL;
 	ListCell   *l;
 	bool		hasJoinRTEs;
 	bool		hasSelfRefRTEs;
-	PlannerInfo *root;
+	PlannerInfo *root = NULL;
 	Node	   *clause;
 
 	/* This should only be called if we found aggregates or grouping */
-	Assert(pstate->p_hasAggs || qry->groupClause || qry->havingQual);
+	Assert(pstate->p_hasAggs || qry->groupClause || qry->havingQual || qry->groupingSets);
+
+	/*
+	 * If we have grouping sets, expand them and find the intersection of all
+	 * sets.
+	 */
+	if (qry->groupingSets)
+	{
+		/*
+		 * The limit of 4096 is arbitrary and exists simply to avoid resource
+		 * issues from pathological constructs.
+		 */
+		List *gsets = expand_grouping_sets(qry->groupingSets, 4096);
+
+		if (!gsets)
+			ereport(ERROR,
+					(errcode(ERRCODE_STATEMENT_TOO_COMPLEX),
+					 errmsg("too many grouping sets present (maximum 4096)"),
+					 parser_errposition(pstate,
+										qry->groupClause
+										? exprLocation((Node *) qry->groupClause)
+										: exprLocation((Node *) qry->groupingSets))));
+
+		/*
+		 * The intersection will often be empty, so help things along by
+		 * seeding the intersection with the smallest set.
+		 */
+		gset_common = linitial(gsets);
+
+		if (gset_common)
+		{
+			for_each_cell(l, lnext(list_head(gsets)))
+			{
+				gset_common = list_intersection_int(gset_common, lfirst(l));
+				if (!gset_common)
+					break;
+			}
+		}
+
+		/*
+		 * If there was only one grouping set in the expansion, AND if the
+		 * groupClause is non-empty (meaning that the grouping set is not empty
+		 * either), then we can ditch the grouping set and pretend we just had
+		 * a normal GROUP BY.
+		 */
+
+		if (list_length(gsets) == 1 && qry->groupClause)
+			qry->groupingSets = NIL;
+	}
 
 	/*
 	 * Scan the range table to see if there are JOIN or self-reference CTE
@@ -800,15 +1006,19 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 	/*
 	 * Build a list of the acceptable GROUP BY expressions for use by
 	 * check_ungrouped_columns().
+	 *
+	 * We get the TLE, not just the expr, because GROUPING wants to know
+	 * the sortgroupref.
 	 */
 	foreach(l, qry->groupClause)
 	{
 		SortGroupClause *grpcl = (SortGroupClause *) lfirst(l);
-		Node	   *expr;
+		TargetEntry	   *expr;
 
-		expr = get_sortgroupclause_expr(grpcl, qry->targetList);
+		expr = get_sortgroupclause_tle(grpcl, qry->targetList);
 		if (expr == NULL)
 			continue;			/* probably cannot happen */
+
 		groupClauses = lcons(expr, groupClauses);
 	}
 
@@ -830,21 +1040,28 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 		groupClauses = (List *) flatten_join_alias_vars(root,
 													  (Node *) groupClauses);
 	}
-	else
-		root = NULL;			/* keep compiler quiet */
 
 	/*
 	 * Detect whether any of the grouping expressions aren't simple Vars; if
 	 * they're all Vars then we don't have to work so hard in the recursive
 	 * scans.  (Note we have to flatten aliases before this.)
+	 *
+	 * Track Vars that are included in all grouping sets separately in
+	 * groupClauseCommonVars, since these are the only ones we can use to check
+	 * for functional dependencies.
 	 */
 	have_non_var_grouping = false;
 	foreach(l, groupClauses)
 	{
-		if (!IsA((Node *) lfirst(l), Var))
+		TargetEntry *tle = lfirst(l);
+		if (!IsA(tle->expr, Var))
 		{
 			have_non_var_grouping = true;
-			break;
+		}
+		else if (!qry->groupingSets
+				 || list_member_int(gset_common, tle->ressortgroupref))
+		{
+			groupClauseCommonVars = lappend(groupClauseCommonVars, tle->expr);
 		}
 	}
 
@@ -855,19 +1072,30 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 	 * this will also find ungrouped variables that came from ORDER BY and
 	 * WINDOW clauses.  For that matter, it's also going to examine the
 	 * grouping expressions themselves --- but they'll all pass the test ...
+	 *
+	 * We also finalize GROUPING expressions, but for that we need to traverse
+	 * the original (unflattened) clause in order to modify nodes.
 	 */
 	clause = (Node *) qry->targetList;
+	finalize_grouping_exprs(clause, pstate, qry,
+							groupClauses, root,
+							have_non_var_grouping);
 	if (hasJoinRTEs)
 		clause = flatten_join_alias_vars(root, clause);
 	check_ungrouped_columns(clause, pstate, qry,
-							groupClauses, have_non_var_grouping,
+							groupClauses, groupClauseCommonVars,
+							have_non_var_grouping,
 							&func_grouped_rels);
 
 	clause = (Node *) qry->havingQual;
+	finalize_grouping_exprs(clause, pstate, qry,
+							groupClauses, root,
+							have_non_var_grouping);
 	if (hasJoinRTEs)
 		clause = flatten_join_alias_vars(root, clause);
 	check_ungrouped_columns(clause, pstate, qry,
-							groupClauses, have_non_var_grouping,
+							groupClauses, groupClauseCommonVars,
+							have_non_var_grouping,
 							&func_grouped_rels);
 
 	/*
@@ -904,14 +1132,17 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
  */
 static void
 check_ungrouped_columns(Node *node, ParseState *pstate, Query *qry,
-						List *groupClauses, bool have_non_var_grouping,
+						List *groupClauses, List *groupClauseCommonVars,
+						bool have_non_var_grouping,
 						List **func_grouped_rels)
 {
 	check_ungrouped_columns_context context;
 
 	context.pstate = pstate;
 	context.qry = qry;
+	context.root = NULL;
 	context.groupClauses = groupClauses;
+	context.groupClauseCommonVars = groupClauseCommonVars;
 	context.have_non_var_grouping = have_non_var_grouping;
 	context.func_grouped_rels = func_grouped_rels;
 	context.sublevels_up = 0;
@@ -965,6 +1196,16 @@ check_ungrouped_columns_walker(Node *node,
 			return false;
 	}
 
+	if (IsA(node, GroupingFunc))
+	{
+		GroupingFunc *grp = (GroupingFunc *) node;
+
+		/* GroupingFunc nodes were handled separately; no recheck needed here */
+
+		if ((int) grp->agglevelsup >= context->sublevels_up)
+			return false;
+	}
+
 	/*
 	 * If we have any GROUP BY items that are not simple Vars, check to see if
 	 * subexpression as a whole matches any GROUP BY item. We need to do this
@@ -976,7 +1217,9 @@ check_ungrouped_columns_walker(Node *node,
 	{
 		foreach(gl, context->groupClauses)
 		{
-			if (equal(node, lfirst(gl)))
+			TargetEntry *tle = lfirst(gl);
+
+			if (equal(node, tle->expr))
 				return false;	/* acceptable, do not descend more */
 		}
 	}
@@ -1003,13 +1246,15 @@ check_ungrouped_columns_walker(Node *node,
 		{
 			foreach(gl, context->groupClauses)
 			{
-				Var		   *gvar = (Var *) lfirst(gl);
+				Var		   *gvar = (Var *) ((TargetEntry *) lfirst(gl))->expr;
 
 				if (IsA(gvar, Var) &&
 					gvar->varno == var->varno &&
 					gvar->varattno == var->varattno &&
 					gvar->varlevelsup == 0)
+				{
 					return false;		/* acceptable, we're okay */
+				}
 			}
 		}
 
@@ -1040,7 +1285,7 @@ check_ungrouped_columns_walker(Node *node,
 			if (check_functional_grouping(rte->relid,
 										  var->varno,
 										  0,
-										  context->groupClauses,
+										  context->groupClauseCommonVars,
 										  &context->qry->constraintDeps))
 			{
 				*context->func_grouped_rels =
@@ -1085,6 +1330,396 @@ check_ungrouped_columns_walker(Node *node,
 }
 
 /*
+ * finalize_grouping_exprs -
+ *	  Scan the given expression tree for GROUPING() and related calls,
+ *    and validate and process their arguments.
+ *
+ * This is split out from check_ungrouped_columns above because it needs
+ * to modify the nodes (which it does in-place, not via a mutator) while
+ * check_ungrouped_columns may see only a copy of the original thanks to
+ * flattening of join alias vars. So here, we flatten each individual
+ * GROUPING argument as we see it before comparing it.
+ */
+static void
+finalize_grouping_exprs(Node *node, ParseState *pstate, Query *qry,
+						List *groupClauses, PlannerInfo *root,
+						bool have_non_var_grouping)
+{
+	check_ungrouped_columns_context context;
+
+	context.pstate = pstate;
+	context.qry = qry;
+	context.root = root;
+	context.groupClauses = groupClauses;
+	context.groupClauseCommonVars = NIL;
+	context.have_non_var_grouping = have_non_var_grouping;
+	context.func_grouped_rels = NULL;
+	context.sublevels_up = 0;
+	context.in_agg_direct_args = false;
+	finalize_grouping_exprs_walker(node, &context);
+}
+
+static bool
+finalize_grouping_exprs_walker(Node *node,
+							   check_ungrouped_columns_context *context)
+{
+	ListCell   *gl;
+
+	if (node == NULL)
+		return false;
+	if (IsA(node, Const) ||
+		IsA(node, Param))
+		return false;			/* constants are always acceptable */
+
+	if (IsA(node, Aggref))
+	{
+		Aggref	   *agg = (Aggref *) node;
+
+		if ((int) agg->agglevelsup == context->sublevels_up)
+		{
+			/*
+			 * If we find an aggregate call of the original level, do not
+			 * recurse into its normal arguments, ORDER BY arguments, or
+			 * filter; GROUPING exprs of this level are not allowed there. But
+			 * check direct arguments as though they weren't in an aggregate.
+			 */
+			bool		result;
+
+			Assert(!context->in_agg_direct_args);
+			context->in_agg_direct_args = true;
+			result = finalize_grouping_exprs_walker((Node *) agg->aggdirectargs,
+													context);
+			context->in_agg_direct_args = false;
+			return result;
+		}
+
+		/*
+		 * We can skip recursing into aggregates of higher levels altogether,
+		 * since they could not possibly contain exprs of concern to us (see
+		 * transformAggregateCall).  We do need to look at aggregates of lower
+		 * levels, however.
+		 */
+		if ((int) agg->agglevelsup > context->sublevels_up)
+			return false;
+	}
+
+	if (IsA(node, GroupingFunc))
+	{
+		GroupingFunc *grp = (GroupingFunc *) node;
+
+		/*
+		 * We only need to check GroupingFunc nodes at the exact level to which
+		 * they belong, since they cannot mix levels in arguments.
+		 */
+
+		if ((int) grp->agglevelsup == context->sublevels_up)
+		{
+			ListCell  *lc;
+			List 	  *ref_list = NIL;
+
+			foreach(lc, grp->args)
+			{
+				Node   *expr = lfirst(lc);
+				Index	ref = 0;
+
+				if (context->root)
+					expr = flatten_join_alias_vars(context->root, expr);
+
+				/*
+				 * Each expression must match a grouping entry at the current
+				 * query level. Unlike the general expression case, we don't
+				 * allow functional dependencies or outer references.
+				 */
+
+				if (IsA(expr, Var))
+				{
+					Var *var = (Var *) expr;
+
+					if (var->varlevelsup == context->sublevels_up)
+					{
+						foreach(gl, context->groupClauses)
+						{
+							TargetEntry *tle = lfirst(gl);
+							Var		   *gvar = (Var *) tle->expr;
+
+							if (IsA(gvar, Var) &&
+								gvar->varno == var->varno &&
+								gvar->varattno == var->varattno &&
+								gvar->varlevelsup == 0)
+							{
+								ref = tle->ressortgroupref;
+								break;
+							}
+						}
+					}
+				}
+				else if (context->have_non_var_grouping
+						 && context->sublevels_up == 0)
+				{
+					foreach(gl, context->groupClauses)
+					{
+						TargetEntry *tle = lfirst(gl);
+
+						if (equal(expr, tle->expr))
+						{
+							ref = tle->ressortgroupref;
+							break;
+						}
+					}
+				}
+
+				if (ref == 0)
+					ereport(ERROR,
+							(errcode(ERRCODE_GROUPING_ERROR),
+							 errmsg("arguments to GROUPING must be grouping expressions of the associated query level"),
+							 parser_errposition(context->pstate,
+												exprLocation(expr))));
+
+				ref_list = lappend_int(ref_list, ref);
+			}
+
+			grp->refs = ref_list;
+		}
+
+		if ((int) grp->agglevelsup > context->sublevels_up)
+			return false;
+	}
+
+	if (IsA(node, Query))
+	{
+		/* Recurse into subselects */
+		bool		result;
+
+		context->sublevels_up++;
+		result = query_tree_walker((Query *) node,
+								   finalize_grouping_exprs_walker,
+								   (void *) context,
+								   0);
+		context->sublevels_up--;
+		return result;
+	}
+	return expression_tree_walker(node, finalize_grouping_exprs_walker,
+								  (void *) context);
+}
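
A sketch (Python, not part of the patch) of the semantics these recorded refs ultimately serve: the sortgroupref list stored in grp->refs lets a later phase evaluate GROUPING() by testing, for each argument in order, whether its grouping column is absent from the grouping set currently being emitted. The function and variable names here are illustrative only.

```python
def grouping_value(arg_refs, current_set_refs):
    """Compute the GROUPING() bitmask for one output row.

    arg_refs: sortgrouprefs of the GROUPING() arguments, in order.
    current_set_refs: set of sortgrouprefs grouped in the current set.
    A bit is 1 when the corresponding argument is aggregated over
    (i.e. not part of the current grouping set), per the SQL spec.
    """
    result = 0
    for ref in arg_refs:
        result = (result << 1) | (0 if ref in current_set_refs else 1)
    return result

# GROUP BY ROLLUP(a, b), with a -> ref 1, b -> ref 2:
assert grouping_value([1, 2], {1, 2}) == 0  # set (a, b)
assert grouping_value([1, 2], {1}) == 1     # set (a)
assert grouping_value([1, 2], set()) == 3   # grand total ()
```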
+
+
+/*
+ * Given a GroupingSet node, expand it and return a list of lists.
+ *
+ * For EMPTY nodes, return a list of one empty list.
+ *
+ * For SIMPLE nodes, return a list of one list, which is the node content.
+ *
+ * For CUBE and ROLLUP nodes, return a list of the expansions.
+ *
+ * For SET nodes, recursively expand contained CUBE and ROLLUP.
+ */
+static List *
+expand_groupingset_node(GroupingSet *gs)
+{
+	List	   *result = NIL;
+
+	switch (gs->kind)
+	{
+		case GROUPING_SET_EMPTY:
+			result = list_make1(NIL);
+			break;
+
+		case GROUPING_SET_SIMPLE:
+			result = list_make1(gs->content);
+			break;
+
+		case GROUPING_SET_ROLLUP:
+			{
+				List	   *rollup_val = gs->content;
+				ListCell   *lc;
+				int			curgroup_size = list_length(gs->content);
+
+				while (curgroup_size > 0)
+				{
+					List   *current_result = NIL;
+					int		i = curgroup_size;
+
+					foreach(lc, rollup_val)
+					{
+						GroupingSet *gs_current = (GroupingSet *) lfirst(lc);
+
+						Assert(gs_current->kind == GROUPING_SET_SIMPLE);
+
+						current_result
+							= list_concat(current_result,
+										  list_copy(gs_current->content));
+
+						/* If we are done with making the current group, break */
+						if (--i == 0)
+							break;
+					}
+
+					result = lappend(result, current_result);
+					--curgroup_size;
+				}
+
+				result = lappend(result, NIL);
+			}
+			break;
+
+		case GROUPING_SET_CUBE:
+			{
+				List   *cube_list = gs->content;
+				int		number_bits = list_length(cube_list);
+				uint32	num_sets;
+				uint32	i;
+
+				/* parser should cap this much lower */
+				Assert(number_bits < 31);
+
+				num_sets = (1U << number_bits);
+
+				for (i = 0; i < num_sets; i++)
+				{
+					List *current_result = NIL;
+					ListCell *lc;
+					uint32 mask = 1U;
+
+					foreach(lc, cube_list)
+					{
+						GroupingSet *gs_current = (GroupingSet *) lfirst(lc);
+
+						Assert(gs_current->kind == GROUPING_SET_SIMPLE);
+
+						if (mask & i)
+						{
+							current_result
+								= list_concat(current_result,
+											  list_copy(gs_current->content));
+						}
+
+						mask <<= 1;
+					}
+
+					result = lappend(result, current_result);
+				}
+			}
+			break;
+
+		case GROUPING_SET_SETS:
+			{
+				ListCell   *lc;
+
+				foreach(lc, gs->content)
+				{
+					List *current_result = expand_groupingset_node(lfirst(lc));
+
+					result = list_concat(result, current_result);
+				}
+			}
+			break;
+	}
+
+	return result;
+}
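
The ROLLUP and CUBE arms above can be sketched as follows (Python, not part of the patch), using lists of column names in place of GroupingSet nodes: ROLLUP yields shrinking prefixes down to the empty set, and CUBE enumerates every subset via a bitmask, mirroring the `mask & i` loop in the C code.

```python
def expand_rollup(items):
    """ROLLUP((a),(b),(c)) -> [[a,b,c], [a,b], [a], []]."""
    return [
        [col for group in items[:n] for col in group]
        for n in range(len(items), -1, -1)
    ]

def expand_cube(items):
    """CUBE((a),(b)) -> all 2^n subsets, selected by bitmask."""
    result = []
    for i in range(1 << len(items)):
        current = []
        mask = 1
        for group in items:
            if mask & i:
                current.extend(group)
            mask <<= 1
        result.append(current)
    return result

assert expand_rollup([["a"], ["b"], ["c"]]) == [
    ["a", "b", "c"], ["a", "b"], ["a"], []
]
assert expand_cube([["a"], ["b"]]) == [[], ["a"], ["b"], ["a", "b"]]
```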
+
+static int
+cmp_list_len_asc(const void *a, const void *b)
+{
+	int			la = list_length(*(List *const *) a);
+	int			lb = list_length(*(List *const *) b);
+	return (la > lb) ? 1 : (la == lb) ? 0 : -1;
+}
+
+/*
+ * Expand a groupingSets clause to a flat list of grouping sets.
+ * The returned list is sorted by length, shortest sets first.
+ *
+ * This is mainly for the planner, but we use it here too to do
+ * some consistency checks.
+ */
+
+List *
+expand_grouping_sets(List *groupingSets, int limit)
+{
+	List	   *expanded_groups = NIL;
+	List       *result = NIL;
+	double		numsets = 1;
+	ListCell   *lc;
+
+	if (groupingSets == NIL)
+		return NIL;
+
+	foreach(lc, groupingSets)
+	{
+		List *current_result = NIL;
+		GroupingSet *gs = lfirst(lc);
+
+		current_result = expand_groupingset_node(gs);
+
+		Assert(current_result != NIL);
+
+		numsets *= list_length(current_result);
+
+		if (limit >= 0 && numsets > limit)
+			return NIL;
+
+		expanded_groups = lappend(expanded_groups, current_result);
+	}
+
+	/*
+	 * Do cartesian product between sublists of expanded_groups.
+	 * While at it, remove any duplicate elements from individual
+	 * grouping sets (we must NOT change the number of sets though)
+	 */
+
+	foreach(lc, (List *) linitial(expanded_groups))
+	{
+		result = lappend(result, list_union_int(NIL, (List *) lfirst(lc)));
+	}
+
+	for_each_cell(lc, lnext(list_head(expanded_groups)))
+	{
+		List	   *p = lfirst(lc);
+		List	   *new_result = NIL;
+		ListCell   *lc2;
+
+		foreach(lc2, result)
+		{
+			List	   *q = lfirst(lc2);
+			ListCell   *lc3;
+
+			foreach(lc3, p)
+			{
+				new_result = lappend(new_result,
+									 list_union_int(q, (List *) lfirst(lc3)));
+			}
+		}
+		result = new_result;
+	}
+
+	if (list_length(result) > 1)
+	{
+		int		result_len = list_length(result);
+		List  **buf = palloc(sizeof(List*) * result_len);
+		List  **ptr = buf;
+
+		foreach(lc, result)
+		{
+			*ptr++ = lfirst(lc);
+		}
+
+		qsort(buf, result_len, sizeof(List*), cmp_list_len_asc);
+
+		result = NIL;
+		ptr = buf;
+
+		while (result_len-- > 0)
+			result = lappend(result, *ptr++);
+
+		pfree(buf);
+	}
+
+	return result;
+}
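
The cartesian-product step can be illustrated like so (Python sketch, not part of the patch; it uses sorted sets for de-duplication where the C code uses list_union_int): each comma-separated item in GROUP BY contributes its own expansion, the combinations are unioned member-wise, and the final list is sorted shortest-first, which is what lets parseCheckAggregates seed the intersection with the first element.

```python
from itertools import product

def cross_grouping_sets(expanded_groups):
    """Combine per-clause expansions into the final list of sets."""
    result = [
        sorted(set().union(*combo))  # de-duplicate within each set
        for combo in product(*expanded_groups)
    ]
    result.sort(key=len)             # shortest sets first
    return result

# GROUP BY ROLLUP(a), ROLLUP(b) is equivalent to CUBE(a, b):
assert cross_grouping_sets([[["a"], []], [["b"], []]]) == [
    [], ["a"], ["b"], ["a", "b"]
]
```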
+
+/*
  * get_aggregate_argtypes
  *	Identify the specific datatypes passed to an aggregate call.
  *
diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c
index 8d90b50..b965b64 100644
--- a/src/backend/parser/parse_clause.c
+++ b/src/backend/parser/parse_clause.c
@@ -36,6 +36,7 @@
 #include "utils/guc.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
+#include "miscadmin.h"
 
 
 /* Convenience macro for the most common makeNamespaceItem() case */
@@ -1664,40 +1665,182 @@ findTargetlistEntrySQL99(ParseState *pstate, Node *node, List **tlist,
 	return target_result;
 }
 
+
+/*
+ * Flatten out parenthesized sublists in grouping lists, and some cases
+ * of nested grouping sets.
+ *
+ * Inside a grouping set (ROLLUP, CUBE, or GROUPING SETS), we expect the
+ * content to be nested no more than 2 deep: i.e. ROLLUP((a,b),(c,d)) is
+ * ok, but ROLLUP((a,(b,c)),d) is flattened to ((a,b,c),d), which we then
+ * normalize to ((a,b,c),(d)).
+ *
+ * CUBE or ROLLUP can be nested inside GROUPING SETS (but not the reverse),
+ * and we leave that alone if we find it. But if we see GROUPING SETS inside
+ * GROUPING SETS, we can flatten and normalize as follows:
+ *   GROUPING SETS (a, (b,c), GROUPING SETS ((c,d),(e)), (f,g))
+ * becomes
+ *   GROUPING SETS ((a), (b,c), (c,d), (e), (f,g))
+ *
+ * This is per the spec's syntax transformations, but these are the only such
+ * transformations we do in parse analysis, so that queries retain the
+ * originally specified grouping set syntax for CUBE and ROLLUP as much as
+ * possible when deparsed. (Full expansion of the result into a list of
+ * grouping sets is left to the planner.)
+ *
+ * When we're done, the resulting list should contain only these possible
+ * elements:
+ *   - an expression
+ *   - a CUBE or ROLLUP with a list of expressions nested 2 deep
+ *   - a GROUPING SET containing any of:
+ *      - expression lists
+ *      - empty grouping sets
+ *      - CUBE or ROLLUP nodes with lists nested 2 deep
+ * The return is a new list, but doesn't deep-copy the old nodes except for
+ * GroupingSet nodes.
+ *
+ * As a side effect, flag whether the list has any GroupingSet nodes.
+ */
+
+static Node *
+flatten_grouping_sets(Node *expr, bool toplevel, bool *hasGroupingSets)
+{
+	/* just in case of pathological input */
+	check_stack_depth();
+
+	if (expr == (Node *) NIL)
+		return (Node *) NIL;
+
+	switch (expr->type)
+	{
+		case T_RowExpr:
+			{
+				RowExpr *r = (RowExpr *) expr;
+				if (r->row_format == COERCE_IMPLICIT_CAST)
+					return flatten_grouping_sets((Node *) r->args,
+												 false, NULL);
+			}
+			break;
+		case T_GroupingSet:
+			{
+				GroupingSet *gset = (GroupingSet *) expr;
+				ListCell   *l2;
+				List	   *result_set = NIL;
+
+				if (hasGroupingSets)
+					*hasGroupingSets = true;
+
+				/*
+				 * at the top level, we skip over all empty grouping sets; the
+				 * caller can supply the canonical GROUP BY () if nothing is left.
+				 */
+
+				if (toplevel && gset->kind == GROUPING_SET_EMPTY)
+					return (Node *) NIL;
+
+				foreach(l2, gset->content)
+				{
+					Node   *n2 = flatten_grouping_sets(lfirst(l2), false, NULL);
+
+					result_set = lappend(result_set, n2);
+				}
+
+				/*
+				 * At top level, keep the grouping set node; but if we're in a nested
+				 * grouping set, then we need to concat the flattened result into the
+				 * outer list if it's simply nested.
+				 */
+
+				if (toplevel || (gset->kind != GROUPING_SET_SETS))
+				{
+					return (Node *) makeGroupingSet(gset->kind, result_set, gset->location);
+				}
+				else
+					return (Node *) result_set;
+			}
+		case T_List:
+			{
+				List	   *result = NIL;
+				ListCell   *l;
+
+				foreach(l, (List *) expr)
+				{
+					Node   *n = flatten_grouping_sets(lfirst(l), toplevel, hasGroupingSets);
+					if (n != (Node *) NIL)
+					{
+						if (IsA(n, List))
+							result = list_concat(result, (List *) n);
+						else
+							result = lappend(result, n);
+					}
+				}
+
+				return (Node *) result;
+			}
+		default:
+			break;
+	}
+
+	return expr;
+}
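
The SETS-within-SETS splicing described in the comment above can be sketched as follows (Python, not part of the patch), with tuples standing in for nested GroupingSet nodes and bare strings for expressions; it reproduces the comment's example, flattening GROUPING SETS (a, (b,c), GROUPING SETS ((c,d),(e)), (f,g)) into ((a), (b,c), (c,d), (e), (f,g)).

```python
def flatten_sets(content):
    """Splice nested ("SETS", ...) nodes into the containing list and
    normalize bare expressions to one-element lists.  CUBE/ROLLUP nodes
    would be preserved as-is; they are omitted here for brevity."""
    result = []
    for item in content:
        if isinstance(item, tuple) and item[0] == "SETS":
            result.extend(flatten_sets(item[1]))
        elif isinstance(item, list):
            result.append(item)
        else:
            result.append([item])
    return result

assert flatten_sets(
    ["a", ["b", "c"], ("SETS", [["c", "d"], ["e"]]), ["f", "g"]]
) == [["a"], ["b", "c"], ["c", "d"], ["e"], ["f", "g"]]
```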
+
 /*
- * transformGroupClause -
- *	  transform a GROUP BY clause
+ * Transform a single expression within a GROUP BY clause or grouping set.
+ *
+ * The expression is added to the targetlist if not already present, and to the
+ * flatresult list (which will become the groupClause) if not already present
+ * there.  The sortClause is consulted for operator and sort order hints.
  *
- * GROUP BY items will be added to the targetlist (as resjunk columns)
- * if not already present, so the targetlist must be passed by reference.
+ * Returns the ressortgroupref of the expression.
  *
- * This is also used for window PARTITION BY clauses (which act almost the
- * same, but are always interpreted per SQL99 rules).
+ * flatresult	reference to flat list of SortGroupClause nodes
+ * seen_local	bitmapset of sortgrouprefs already seen at the local level
+ * pstate		ParseState
+ * gexpr		node to transform
+ * targetlist	reference to TargetEntry list
+ * sortClause	ORDER BY clause (SortGroupClause nodes)
+ * exprKind		expression kind
+ * useSQL99		SQL99 rather than SQL92 syntax
+ * toplevel		false if within any grouping set
  */
-List *
-transformGroupClause(ParseState *pstate, List *grouplist,
-					 List **targetlist, List *sortClause,
-					 ParseExprKind exprKind, bool useSQL99)
+static Index
+transformGroupClauseExpr(List **flatresult, Bitmapset *seen_local,
+						 ParseState *pstate, Node *gexpr,
+						 List **targetlist, List *sortClause,
+						 ParseExprKind exprKind, bool useSQL99, bool toplevel)
 {
-	List	   *result = NIL;
-	ListCell   *gl;
+	TargetEntry *tle;
+	bool		found = false;
 
-	foreach(gl, grouplist)
+	if (useSQL99)
+		tle = findTargetlistEntrySQL99(pstate, gexpr,
+									   targetlist, exprKind);
+	else
+		tle = findTargetlistEntrySQL92(pstate, gexpr,
+									   targetlist, exprKind);
+
+	if (tle->ressortgroupref > 0)
 	{
-		Node	   *gexpr = (Node *) lfirst(gl);
-		TargetEntry *tle;
-		bool		found = false;
-
-		if (useSQL99)
-			tle = findTargetlistEntrySQL99(pstate, gexpr,
-										   targetlist, exprKind);
-		else
-			tle = findTargetlistEntrySQL92(pstate, gexpr,
-										   targetlist, exprKind);
-
-		/* Eliminate duplicates (GROUP BY x, x) */
-		if (targetIsInSortList(tle, InvalidOid, result))
-			continue;
+		ListCell   *sl;
+
+		/*
+		 * Eliminate duplicates (GROUP BY x, x) but only at local level.
+		 * (Duplicates in grouping sets can affect the number of returned
+		 * rows, so can't be dropped indiscriminately.)
+		 *
+		 * Since we don't care about anything except the sortgroupref,
+		 * we can use a bitmapset rather than scanning lists.
+		 */
+		if (bms_is_member(tle->ressortgroupref, seen_local))
+			return 0;
+
+		/*
+		 * If we're already in the flat clause list, we don't need
+		 * to consider adding ourselves again.
+		 */
+		found = targetIsInSortList(tle, InvalidOid, *flatresult);
+		if (found)
+			return tle->ressortgroupref;
 
 		/*
 		 * If the GROUP BY tlist entry also appears in ORDER BY, copy operator
@@ -1709,35 +1852,308 @@ transformGroupClause(ParseState *pstate, List *grouplist,
 		 * sort step, and it allows the user to choose the equality semantics
 		 * used by GROUP BY, should she be working with a datatype that has
 		 * more than one equality operator.
+		 *
+		 * If we're in a grouping set, though, we force our requested ordering
+		 * to be NULLS LAST, because if we have any hope of using a sorted agg
+		 * for the job, we're going to be tacking on generated NULL values
+		 * after the corresponding groups. If the user demands nulls first,
+		 * another sort step is going to be inevitable, but that's the
+		 * planner's problem.
 		 */
-		if (tle->ressortgroupref > 0)
+
+		foreach(sl, sortClause)
 		{
-			ListCell   *sl;
+			SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
 
-			foreach(sl, sortClause)
+			if (sc->tleSortGroupRef == tle->ressortgroupref)
 			{
-				SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
+				SortGroupClause *grpc = copyObject(sc);
+				if (!toplevel)
+					grpc->nulls_first = false;
+				*flatresult = lappend(*flatresult, grpc);
+				found = true;
+				break;
+			}
+		}
+	}
+
+	/*
+	 * If no match in ORDER BY, just add it to the result using default
+	 * sort/group semantics.
+	 */
+	if (!found)
+		*flatresult = addTargetToGroupList(pstate, tle,
+										   *flatresult, *targetlist,
+										   exprLocation(gexpr),
+										   true);
+
+	/*
+	 * _something_ must have assigned us a sortgroupref by now...
+	 */
+
+	return tle->ressortgroupref;
+}
+
+/*
+ * Transform a list of expressions within a GROUP BY clause or grouping set.
+ *
+ * The list of expressions belongs to a single clause within which duplicates
+ * can be safely eliminated.
+ *
+ * Returns an integer list of ressortgroupref values.
+ *
+ * flatresult	reference to flat list of SortGroupClause nodes
+ * pstate		ParseState
+ * list			nodes to transform
+ * targetlist	reference to TargetEntry list
+ * sortClause	ORDER BY clause (SortGroupClause nodes)
+ * exprKind		expression kind
+ * useSQL99		SQL99 rather than SQL92 syntax
+ * toplevel		false if within any grouping set
+ */
+static List *
+transformGroupClauseList(List **flatresult,
+						 ParseState *pstate, List *list,
+						 List **targetlist, List *sortClause,
+						 ParseExprKind exprKind, bool useSQL99, bool toplevel)
+{
+	Bitmapset  *seen_local = NULL;
+	List	   *result = NIL;
+	ListCell   *gl;
+
+	foreach(gl, list)
+	{
+		Node        *gexpr = (Node *) lfirst(gl);
+
+		Index ref = transformGroupClauseExpr(flatresult,
+											 seen_local,
+											 pstate,
+											 gexpr,
+											 targetlist,
+											 sortClause,
+											 exprKind,
+											 useSQL99,
+											 toplevel);
+		if (ref > 0)
+		{
+			seen_local = bms_add_member(seen_local, ref);
+			result = lappend_int(result, ref);
+		}
+	}
+
+	return result;
+}
+
+/*
+ * Transform a grouping set and (recursively) its content.
+ *
+ * The grouping set might be a GROUPING SETS node with other grouping sets
+ * inside it, but SETS within SETS have already been flattened out before
+ * reaching here.
+ *
+ * Returns the transformed node, which now contains SIMPLE nodes with lists
+ * of ressortgrouprefs rather than expressions.
+ *
+ * flatresult	reference to flat list of SortGroupClause nodes
+ * pstate		ParseState
+ * gset			grouping set to transform
+ * targetlist	reference to TargetEntry list
+ * sortClause	ORDER BY clause (SortGroupClause nodes)
+ * exprKind		expression kind
+ * useSQL99		SQL99 rather than SQL92 syntax
+ * toplevel		false if within any grouping set
+ */
+static Node *
+transformGroupingSet(List **flatresult,
+					 ParseState *pstate, GroupingSet *gset,
+					 List **targetlist, List *sortClause,
+					 ParseExprKind exprKind, bool useSQL99, bool toplevel)
+{
+	ListCell   *gl;
+	List	   *content = NIL;
+
+	Assert(toplevel || gset->kind != GROUPING_SET_SETS);
+
+	foreach(gl, gset->content)
+	{
+		Node   *n = lfirst(gl);
+
+		if (IsA(n, List))
+		{
+			List *l = transformGroupClauseList(flatresult,
+											   pstate, (List *) n,
+											   targetlist, sortClause,
+											   exprKind, useSQL99, false);
+
+			content = lappend(content, makeGroupingSet(GROUPING_SET_SIMPLE,
+													   l,
+													   exprLocation(n)));
+		}
+		else if (IsA(n, GroupingSet))
+		{
+			GroupingSet *gset2 = (GroupingSet *) lfirst(gl);
+
+			content = lappend(content, transformGroupingSet(flatresult,
+															pstate, gset2,
+															targetlist, sortClause,
+															exprKind, useSQL99, false));
+		}
+		else
+		{
+			Index ref = transformGroupClauseExpr(flatresult,
+												 NULL,
+												 pstate,
+												 n,
+												 targetlist,
+												 sortClause,
+												 exprKind,
+												 useSQL99,
+												 false);
 
-				if (sc->tleSortGroupRef == tle->ressortgroupref)
-				{
-					result = lappend(result, copyObject(sc));
-					found = true;
+			content = lappend(content, makeGroupingSet(GROUPING_SET_SIMPLE,
+													   list_make1_int(ref),
+													   exprLocation(n)));
+		}
+	}
+
+	/* Arbitrarily cap the size of CUBE, which has exponential growth */
+	if (gset->kind == GROUPING_SET_CUBE)
+	{
+		if (list_length(content) > 12)
+			ereport(ERROR,
+					(errcode(ERRCODE_TOO_MANY_COLUMNS),
+					 errmsg("CUBE is limited to 12 elements"),
+					 parser_errposition(pstate, gset->location)));
+	}
+
+	return (Node *) makeGroupingSet(gset->kind, content, gset->location);
+}
+
+
+/*
+ * transformGroupClause -
+ *	  transform a GROUP BY clause
+ *
+ * GROUP BY items will be added to the targetlist (as resjunk columns)
+ * if not already present, so the targetlist must be passed by reference.
+ *
+ * This is also used for window PARTITION BY clauses (which act almost the
+ * same, but are always interpreted per SQL99 rules).
+ *
+ * Grouping sets make this a lot more complex than it was. Our goal here is
+ * twofold: we make a flat list of SortGroupClause nodes referencing each
+ * distinct expression used for grouping, with those expressions added to the
+ * targetlist if needed. At the same time, we build the groupingSets tree,
+ * which stores only ressortgrouprefs as integer lists inside GroupingSet nodes
+ * (possibly nested, but limited in depth: a GROUPING_SET_SETS node can contain
+ * nested SIMPLE, CUBE or ROLLUP nodes, but not more sets - we flatten that
+ * out; while CUBE and ROLLUP can contain only SIMPLE nodes).
+ *
+ * We skip much of the hard work if there are no grouping sets.
+ *
+ * One subtlety is that the groupClause list can end up empty while the
+ * groupingSets list is not; this happens if there are only empty grouping
+ * sets, or an explicit GROUP BY (). This has the same effect as specifying
+ * aggregates or a HAVING clause with no GROUP BY; the output is one row per
+ * grouping set even if the input is empty.
+ *
+ * Returns the transformed (flat) groupClause.
+ *
+ * pstate		ParseState
+ * grouplist	clause to transform
+ * groupingSets	reference to list to contain the grouping set tree
+ * targetlist	reference to TargetEntry list
+ * sortClause	ORDER BY clause (SortGroupClause nodes)
+ * exprKind		expression kind
+ * useSQL99		SQL99 rather than SQL92 syntax
+ */
+List *
+transformGroupClause(ParseState *pstate, List *grouplist, List **groupingSets,
+					 List **targetlist, List *sortClause,
+					 ParseExprKind exprKind, bool useSQL99)
+{
+	List	   *result = NIL;
+	List	   *flat_grouplist;
+	List	   *gsets = NIL;
+	ListCell   *gl;
+	bool        hasGroupingSets = false;
+	Bitmapset  *seen_local = NULL;
+
+	/*
+	 * Recursively flatten implicit RowExprs. (Technically this is only
+	 * needed for GROUP BY, per the syntax rules for grouping sets, but
+	 * we do it anyway.)
+	 */
+	flat_grouplist = (List *) flatten_grouping_sets((Node *) grouplist,
+													true,
+													&hasGroupingSets);
+
+	/*
+	 * If the list is now empty, but hasGroupingSets is true, it's because
+	 * we elided redundant empty grouping sets. Restore a single empty
+	 * grouping set to leave a canonical form: GROUP BY ().
+	 */
+
+	if (flat_grouplist == NIL && hasGroupingSets)
+	{
+		flat_grouplist = list_make1(makeGroupingSet(GROUPING_SET_EMPTY,
+													NIL,
+													exprLocation((Node *) grouplist)));
+	}
+
+	foreach(gl, flat_grouplist)
+	{
+		Node        *gexpr = (Node *) lfirst(gl);
+
+		if (IsA(gexpr, GroupingSet))
+		{
+			GroupingSet *gset = (GroupingSet *) gexpr;
+
+			switch (gset->kind)
+			{
+				case GROUPING_SET_EMPTY:
+					gsets = lappend(gsets, gset);
+					break;
+				case GROUPING_SET_SIMPLE:
+					/* can't happen */
+					Assert(false);
+					break;
+				case GROUPING_SET_SETS:
+				case GROUPING_SET_CUBE:
+				case GROUPING_SET_ROLLUP:
+					gsets = lappend(gsets,
+									transformGroupingSet(&result,
+														 pstate, gset,
+														 targetlist, sortClause,
+														 exprKind, useSQL99, true));
 					break;
-				}
 			}
 		}
+		else
+		{
+			Index ref = transformGroupClauseExpr(&result, seen_local,
+												 pstate, gexpr,
+												 targetlist, sortClause,
+												 exprKind, useSQL99, true);
 
-		/*
-		 * If no match in ORDER BY, just add it to the result using default
-		 * sort/group semantics.
-		 */
-		if (!found)
-			result = addTargetToGroupList(pstate, tle,
-										  result, *targetlist,
-										  exprLocation(gexpr),
-										  true);
+			if (ref > 0)
+			{
+				seen_local = bms_add_member(seen_local, ref);
+				if (hasGroupingSets)
+					gsets = lappend(gsets,
+									makeGroupingSet(GROUPING_SET_SIMPLE,
+													list_make1_int(ref),
+													exprLocation(gexpr)));
+			}
+		}
 	}
 
+	/* parser should prevent this */
+	Assert(gsets == NIL || groupingSets != NULL);
+
+	if (groupingSets)
+		*groupingSets = gsets;
+
 	return result;
 }
 
@@ -1842,6 +2258,7 @@ transformWindowDefinitions(ParseState *pstate,
 										  true /* force SQL99 rules */ );
 		partitionClause = transformGroupClause(pstate,
 											   windef->partitionClause,
+											   NULL,
 											   targetlist,
 											   orderClause,
 											   EXPR_KIND_WINDOW_PARTITION,
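As an aside on the 12-element CUBE cap in transformGroupingSet() above: the cap exists because CUBE's expansion is exponential in its argument count. A minimal Python sketch of the spec-defined expansions (illustration only, not part of the patch; function names are mine):

```python
# Illustrative sketch: how CUBE and ROLLUP expand into lists of grouping
# sets per the SQL spec, and why CUBE grows exponentially (2^n sets),
# which motivates capping it at 12 elements (2^12 = 4096 sets).
from itertools import combinations

def expand_cube(cols):
    """CUBE(c1..cn) produces all 2^n subsets of its arguments."""
    sets = []
    for k in range(len(cols), -1, -1):
        sets.extend(list(c) for c in combinations(cols, k))
    return sets

def expand_rollup(cols):
    """ROLLUP(c1..cn) produces the n+1 prefixes, longest first."""
    return [list(cols[:k]) for k in range(len(cols), -1, -1)]

print(expand_rollup(["a", "b", "c"]))
# [['a', 'b', 'c'], ['a', 'b'], ['a'], []]
print(len(expand_cube(list("abcdefghijkl"))))  # 4096
```

Anything that reduces to one ROLLUP stays linear (n+1 sets) and is what phase 1 can evaluate in a single sorted pass.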
diff --git a/src/backend/parser/parse_expr.c b/src/backend/parser/parse_expr.c
index f759606..0ff46dd 100644
--- a/src/backend/parser/parse_expr.c
+++ b/src/backend/parser/parse_expr.c
@@ -32,6 +32,7 @@
 #include "parser/parse_relation.h"
 #include "parser/parse_target.h"
 #include "parser/parse_type.h"
+#include "parser/parse_agg.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
 #include "utils/xml.h"
@@ -269,6 +270,10 @@ transformExprRecurse(ParseState *pstate, Node *expr)
 			result = transformMultiAssignRef(pstate, (MultiAssignRef *) expr);
 			break;
 
+		case T_GroupingFunc:
+			result = transformGroupingFunc(pstate, (GroupingFunc *) expr);
+			break;
+
 		case T_NamedArgExpr:
 			{
 				NamedArgExpr *na = (NamedArgExpr *) expr;
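For reference, the semantics of the GROUPING() operation that transformGroupingFunc() parses can be sketched as follows (illustrative Python only; the helper name and set representation are mine, not the executor's):

```python
# Illustrative sketch of GROUPING(e1, ..., en): the result is an n-bit
# integer whose most significant bit corresponds to e1, with a bit set
# to 1 when that expression is absent from (i.e. aggregated over in)
# the grouping set that produced the current result row.
def grouping(args, current_set):
    result = 0
    for a in args:
        result = (result << 1) | (0 if a in current_set else 1)
    return result

# For GROUP BY ROLLUP(a, b):
print(grouping(["a", "b"], {"a", "b"}))  # 0: fully grouped row
print(grouping(["a", "b"], {"a"}))       # 1: b rolled up
print(grouping(["a", "b"], set()))       # 3: the grand-total row
```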
diff --git a/src/backend/parser/parse_target.c b/src/backend/parser/parse_target.c
index 2d85cf0..e92a4e1 100644
--- a/src/backend/parser/parse_target.c
+++ b/src/backend/parser/parse_target.c
@@ -1680,6 +1680,10 @@ FigureColnameInternal(Node *node, char **name)
 			break;
 		case T_CollateClause:
 			return FigureColnameInternal(((CollateClause *) node)->arg, name);
+		case T_GroupingFunc:
+			/* make GROUPING() act like a regular function */
+			*name = "grouping";
+			return 2;
 		case T_SubLink:
 			switch (((SubLink *) node)->subLinkType)
 			{
diff --git a/src/backend/rewrite/rewriteHandler.c b/src/backend/rewrite/rewriteHandler.c
index 9d2c280..0474c9c 100644
--- a/src/backend/rewrite/rewriteHandler.c
+++ b/src/backend/rewrite/rewriteHandler.c
@@ -2109,7 +2109,7 @@ view_query_is_auto_updatable(Query *viewquery, bool check_cols)
 	if (viewquery->distinctClause != NIL)
 		return gettext_noop("Views containing DISTINCT are not automatically updatable.");
 
-	if (viewquery->groupClause != NIL)
+	if (viewquery->groupClause != NIL || viewquery->groupingSets)
 		return gettext_noop("Views containing GROUP BY are not automatically updatable.");
 
 	if (viewquery->havingQual != NULL)
diff --git a/src/backend/rewrite/rewriteManip.c b/src/backend/rewrite/rewriteManip.c
index df45708..8309010 100644
--- a/src/backend/rewrite/rewriteManip.c
+++ b/src/backend/rewrite/rewriteManip.c
@@ -92,6 +92,12 @@ contain_aggs_of_level_walker(Node *node,
 			return true;		/* abort the tree traversal and return true */
 		/* else fall through to examine argument */
 	}
+	if (IsA(node, GroupingFunc))
+	{
+		if (((GroupingFunc *) node)->agglevelsup == context->sublevels_up)
+			return true;
+		/* else fall through to examine argument */
+	}
 	if (IsA(node, Query))
 	{
 		/* Recurse into subselects */
@@ -157,6 +163,15 @@ locate_agg_of_level_walker(Node *node,
 		}
 		/* else fall through to examine argument */
 	}
+	if (IsA(node, GroupingFunc))
+	{
+		if (((GroupingFunc *) node)->agglevelsup == context->sublevels_up &&
+			((GroupingFunc *) node)->location >= 0)
+		{
+			context->agg_location = ((GroupingFunc *) node)->location;
+			return true;		/* abort the tree traversal and return true */
+		}
+	}
 	if (IsA(node, Query))
 	{
 		/* Recurse into subselects */
@@ -703,6 +718,14 @@ IncrementVarSublevelsUp_walker(Node *node,
 			agg->agglevelsup += context->delta_sublevels_up;
 		/* fall through to recurse into argument */
 	}
+	if (IsA(node, GroupingFunc))
+	{
+		GroupingFunc   *grp = (GroupingFunc *) node;
+
+		if (grp->agglevelsup >= context->min_sublevels_up)
+			grp->agglevelsup += context->delta_sublevels_up;
+		/* fall through to recurse into argument */
+	}
 	if (IsA(node, PlaceHolderVar))
 	{
 		PlaceHolderVar *phv = (PlaceHolderVar *) node;
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 2fa30be..e03b7c6 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -42,6 +42,7 @@
 #include "nodes/nodeFuncs.h"
 #include "optimizer/tlist.h"
 #include "parser/keywords.h"
+#include "parser/parse_node.h"
 #include "parser/parse_agg.h"
 #include "parser/parse_func.h"
 #include "parser/parse_oper.h"
@@ -103,6 +104,8 @@ typedef struct
 	int			wrapColumn;		/* max line length, or -1 for no limit */
 	int			indentLevel;	/* current indent level for prettyprint */
 	bool		varprefix;		/* TRUE to print prefixes on Vars */
+	ParseExprKind special_exprkind; /* set only for exprkinds needing
+									 * special handling */
 } deparse_context;
 
 /*
@@ -361,9 +364,11 @@ static void get_target_list(List *targetList, deparse_context *context,
 static void get_setop_query(Node *setOp, Query *query,
 				deparse_context *context,
 				TupleDesc resultDesc);
-static Node *get_rule_sortgroupclause(SortGroupClause *srt, List *tlist,
+static Node *get_rule_sortgroupclause(Index ref, List *tlist,
 						 bool force_colno,
 						 deparse_context *context);
+static void get_rule_groupingset(GroupingSet *gset, List *targetlist,
+								 bool omit_parens, deparse_context *context);
 static void get_rule_orderby(List *orderList, List *targetList,
 				 bool force_colno, deparse_context *context);
 static void get_rule_windowclause(Query *query, deparse_context *context);
@@ -411,8 +416,9 @@ static void printSubscripts(ArrayRef *aref, deparse_context *context);
 static char *get_relation_name(Oid relid);
 static char *generate_relation_name(Oid relid, List *namespaces);
 static char *generate_function_name(Oid funcid, int nargs,
-					   List *argnames, Oid *argtypes,
-					   bool has_variadic, bool *use_variadic_p);
+							List *argnames, Oid *argtypes,
+							bool has_variadic, bool *use_variadic_p,
+							ParseExprKind special_exprkind);
 static char *generate_operator_name(Oid operid, Oid arg1, Oid arg2);
 static text *string_to_text(char *str);
 static char *flatten_reloptions(Oid relid);
@@ -870,6 +876,7 @@ pg_get_triggerdef_worker(Oid trigid, bool pretty)
 		context.prettyFlags = pretty ? PRETTYFLAG_PAREN | PRETTYFLAG_INDENT : PRETTYFLAG_INDENT;
 		context.wrapColumn = WRAP_COLUMN_DEFAULT;
 		context.indentLevel = PRETTYINDENT_STD;
+		context.special_exprkind = EXPR_KIND_NONE;
 
 		get_rule_expr(qual, &context, false);
 
@@ -879,7 +886,7 @@ pg_get_triggerdef_worker(Oid trigid, bool pretty)
 	appendStringInfo(&buf, "EXECUTE PROCEDURE %s(",
 					 generate_function_name(trigrec->tgfoid, 0,
 											NIL, NULL,
-											false, NULL));
+											false, NULL, EXPR_KIND_NONE));
 
 	if (trigrec->tgnargs > 0)
 	{
@@ -2476,6 +2483,7 @@ deparse_expression_pretty(Node *expr, List *dpcontext,
 	context.prettyFlags = prettyFlags;
 	context.wrapColumn = WRAP_COLUMN_DEFAULT;
 	context.indentLevel = startIndent;
+	context.special_exprkind = EXPR_KIND_NONE;
 
 	get_rule_expr(expr, &context, showimplicit);
 
@@ -4073,6 +4081,7 @@ make_ruledef(StringInfo buf, HeapTuple ruletup, TupleDesc rulettc,
 		context.prettyFlags = prettyFlags;
 		context.wrapColumn = WRAP_COLUMN_DEFAULT;
 		context.indentLevel = PRETTYINDENT_STD;
+		context.special_exprkind = EXPR_KIND_NONE;
 
 		set_deparse_for_query(&dpns, query, NIL);
 
@@ -4224,6 +4233,7 @@ get_query_def(Query *query, StringInfo buf, List *parentnamespace,
 	context.prettyFlags = prettyFlags;
 	context.wrapColumn = wrapColumn;
 	context.indentLevel = startIndent;
+	context.special_exprkind = EXPR_KIND_NONE;
 
 	set_deparse_for_query(&dpns, query, parentnamespace);
 
@@ -4589,7 +4599,7 @@ get_basic_select_query(Query *query, deparse_context *context,
 				SortGroupClause *srt = (SortGroupClause *) lfirst(l);
 
 				appendStringInfoString(buf, sep);
-				get_rule_sortgroupclause(srt, query->targetList,
+				get_rule_sortgroupclause(srt->tleSortGroupRef, query->targetList,
 										 false, context);
 				sep = ", ";
 			}
@@ -4614,20 +4624,43 @@ get_basic_select_query(Query *query, deparse_context *context,
 	}
 
 	/* Add the GROUP BY clause if given */
-	if (query->groupClause != NULL)
+	if (query->groupClause != NULL || query->groupingSets != NULL)
 	{
+		ParseExprKind	save_exprkind;
+
 		appendContextKeyword(context, " GROUP BY ",
 							 -PRETTYINDENT_STD, PRETTYINDENT_STD, 1);
-		sep = "";
-		foreach(l, query->groupClause)
+
+		save_exprkind = context->special_exprkind;
+		context->special_exprkind = EXPR_KIND_GROUP_BY;
+
+		if (query->groupingSets == NIL)
+		{
+			sep = "";
+			foreach(l, query->groupClause)
+			{
+				SortGroupClause *grp = (SortGroupClause *) lfirst(l);
+
+				appendStringInfoString(buf, sep);
+				get_rule_sortgroupclause(grp->tleSortGroupRef, query->targetList,
+										 false, context);
+				sep = ", ";
+			}
+		}
+		else
 		{
-			SortGroupClause *grp = (SortGroupClause *) lfirst(l);
+			sep = "";
+			foreach(l, query->groupingSets)
+			{
+				GroupingSet *grp = lfirst(l);
 
-			appendStringInfoString(buf, sep);
-			get_rule_sortgroupclause(grp, query->targetList,
-									 false, context);
-			sep = ", ";
+				appendStringInfoString(buf, sep);
+				get_rule_groupingset(grp, query->targetList, true, context);
+				sep = ", ";
+			}
 		}
+
+		context->special_exprkind = save_exprkind;
 	}
 
 	/* Add the HAVING clause if given */
@@ -4694,7 +4727,7 @@ get_target_list(List *targetList, deparse_context *context,
 		 * different from a whole-row Var).  We need to call get_variable
 		 * directly so that we can tell it to do the right thing.
 		 */
-		if (tle->expr && IsA(tle->expr, Var))
+		if (tle->expr && (IsA(tle->expr, Var) || IsA(tle->expr, GroupedVar)))
 		{
 			attname = get_variable((Var *) tle->expr, 0, true, context);
 		}
@@ -4913,23 +4946,24 @@ get_setop_query(Node *setOp, Query *query, deparse_context *context,
  * Also returns the expression tree, so caller need not find it again.
  */
 static Node *
-get_rule_sortgroupclause(SortGroupClause *srt, List *tlist, bool force_colno,
+get_rule_sortgroupclause(Index ref, List *tlist, bool force_colno,
 						 deparse_context *context)
 {
 	StringInfo	buf = context->buf;
 	TargetEntry *tle;
 	Node	   *expr;
 
-	tle = get_sortgroupclause_tle(srt, tlist);
+	tle = get_sortgroupref_tle(ref, tlist);
 	expr = (Node *) tle->expr;
 
 	/*
-	 * Use column-number form if requested by caller.  Otherwise, if
-	 * expression is a constant, force it to be dumped with an explicit cast
-	 * as decoration --- this is because a simple integer constant is
-	 * ambiguous (and will be misinterpreted by findTargetlistEntry()) if we
-	 * dump it without any decoration.  Otherwise, just dump the expression
-	 * normally.
+	 * Use column-number form if requested by caller.  Otherwise, if expression
+	 * is a constant, force it to be dumped with an explicit cast as decoration
+	 * --- this is because a simple integer constant is ambiguous (and will be
+	 * misinterpreted by findTargetlistEntry()) if we dump it without any
+	 * decoration.  If it's anything more complex than a simple Var, then force
+	 * extra parens around it, to ensure it can't be misinterpreted as a cube()
+	 * or rollup() construct.
 	 */
 	if (force_colno)
 	{
@@ -4938,13 +4972,92 @@ get_rule_sortgroupclause(SortGroupClause *srt, List *tlist, bool force_colno,
 	}
 	else if (expr && IsA(expr, Const))
 		get_const_expr((Const *) expr, context, 1);
+	else if (!expr || IsA(expr, Var))
+		get_rule_expr(expr, context, true);
 	else
+	{
+		/*
+		 * We must force parens for function-like expressions even if
+		 * PRETTY_PAREN is off, since those are the ones in danger of
+		 * misparsing.  Other expressions need forced parens only when
+		 * PRETTY_PAREN is on; with it off, the expression will print
+		 * any parens it needs itself, so none are required here.
+		 */
+		bool	need_paren = (PRETTY_PAREN(context)
+							  || IsA(expr, FuncExpr)
+							  || IsA(expr, Aggref)
+							  || IsA(expr, WindowFunc));
+		if (need_paren)
+			appendStringInfoString(context->buf, "(");
 		get_rule_expr(expr, context, true);
+		if (need_paren)
+			appendStringInfoString(context->buf, ")");
+	}
 
 	return expr;
 }
 
 /*
+ * Display a GroupingSet
+ */
+static void
+get_rule_groupingset(GroupingSet *gset, List *targetlist,
+					 bool omit_parens, deparse_context *context)
+{
+	ListCell   *l;
+	StringInfo	buf = context->buf;
+	bool		omit_child_parens = true;
+	char	   *sep = "";
+
+	switch (gset->kind)
+	{
+		case GROUPING_SET_EMPTY:
+			appendStringInfoString(buf, "()");
+			return;
+
+		case GROUPING_SET_SIMPLE:
+			{
+				if (!omit_parens || list_length(gset->content) != 1)
+					appendStringInfoString(buf, "(");
+
+				foreach(l, gset->content)
+				{
+					Index ref = lfirst_int(l);
+
+					appendStringInfoString(buf, sep);
+					get_rule_sortgroupclause(ref, targetlist,
+											 false, context);
+					sep = ", ";
+				}
+
+				if (!omit_parens || list_length(gset->content) != 1)
+					appendStringInfoString(buf, ")");
+			}
+			return;
+
+		case GROUPING_SET_ROLLUP:
+			appendStringInfoString(buf, "ROLLUP(");
+			break;
+		case GROUPING_SET_CUBE:
+			appendStringInfoString(buf, "CUBE(");
+			break;
+		case GROUPING_SET_SETS:
+			appendStringInfoString(buf, "GROUPING SETS (");
+			omit_child_parens = false;
+			break;
+	}
+
+	foreach(l, gset->content)
+	{
+		appendStringInfoString(buf, sep);
+		get_rule_groupingset(lfirst(l), targetlist, omit_child_parens, context);
+		sep = ", ";
+	}
+
+	appendStringInfoString(buf, ")");
+}
+
+/*
  * Display an ORDER BY list.
  */
 static void
@@ -4964,7 +5077,7 @@ get_rule_orderby(List *orderList, List *targetList,
 		TypeCacheEntry *typentry;
 
 		appendStringInfoString(buf, sep);
-		sortexpr = get_rule_sortgroupclause(srt, targetList,
+		sortexpr = get_rule_sortgroupclause(srt->tleSortGroupRef, targetList,
 											force_colno, context);
 		sortcoltype = exprType(sortexpr);
 		/* See whether operator is default < or > for datatype */
@@ -5064,7 +5177,7 @@ get_rule_windowspec(WindowClause *wc, List *targetList,
 			SortGroupClause *grp = (SortGroupClause *) lfirst(l);
 
 			appendStringInfoString(buf, sep);
-			get_rule_sortgroupclause(grp, targetList,
+			get_rule_sortgroupclause(grp->tleSortGroupRef, targetList,
 									 false, context);
 			sep = ", ";
 		}
@@ -5613,10 +5726,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		pop_child_plan(dpns, &save_dpns);
@@ -5638,10 +5751,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		pop_child_plan(dpns, &save_dpns);
@@ -5661,10 +5774,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		return NULL;
@@ -5704,10 +5817,10 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
 		 * Force parentheses because our caller probably assumed a Var is a
 		 * simple expression.
 		 */
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, '(');
 		get_rule_expr((Node *) tle->expr, context, true);
-		if (!IsA(tle->expr, Var))
+		if (!IsA(tle->expr, Var) && !IsA(tle->expr, GroupedVar))
 			appendStringInfoChar(buf, ')');
 
 		pop_child_plan(dpns, &save_dpns);
@@ -6738,6 +6851,10 @@ get_rule_expr(Node *node, deparse_context *context,
 			(void) get_variable((Var *) node, 0, false, context);
 			break;
 
+		case T_GroupedVar:
+			(void) get_variable((Var *) node, 0, false, context);
+			break;
+
 		case T_Const:
 			get_const_expr((Const *) node, context, 0);
 			break;
@@ -6750,6 +6867,16 @@ get_rule_expr(Node *node, deparse_context *context,
 			get_agg_expr((Aggref *) node, context);
 			break;
 
+		case T_GroupingFunc:
+			{
+				GroupingFunc *gexpr = (GroupingFunc *) node;
+
+				appendStringInfoString(buf, "GROUPING(");
+				get_rule_expr((Node *) gexpr->args, context, true);
+				appendStringInfoChar(buf, ')');
+			}
+			break;
+
 		case T_WindowFunc:
 			get_windowfunc_expr((WindowFunc *) node, context);
 			break;
@@ -7788,7 +7915,8 @@ get_func_expr(FuncExpr *expr, deparse_context *context,
 					 generate_function_name(funcoid, nargs,
 											argnames, argtypes,
 											expr->funcvariadic,
-											&use_variadic));
+											&use_variadic,
+											context->special_exprkind));
 	nargs = 0;
 	foreach(l, expr->args)
 	{
@@ -7820,7 +7948,8 @@ get_agg_expr(Aggref *aggref, deparse_context *context)
 					 generate_function_name(aggref->aggfnoid, nargs,
 											NIL, argtypes,
 											aggref->aggvariadic,
-											&use_variadic),
+											&use_variadic,
+											context->special_exprkind),
 					 (aggref->aggdistinct != NIL) ? "DISTINCT " : "");
 
 	if (AGGKIND_IS_ORDERED_SET(aggref->aggkind))
@@ -7910,7 +8039,8 @@ get_windowfunc_expr(WindowFunc *wfunc, deparse_context *context)
 	appendStringInfo(buf, "%s(",
 					 generate_function_name(wfunc->winfnoid, nargs,
 											argnames, argtypes,
-											false, NULL));
+											false, NULL,
+											context->special_exprkind));
 	/* winstar can be set only in zero-argument aggregates */
 	if (wfunc->winstar)
 		appendStringInfoChar(buf, '*');
@@ -9147,7 +9277,8 @@ generate_relation_name(Oid relid, List *namespaces)
  */
 static char *
 generate_function_name(Oid funcid, int nargs, List *argnames, Oid *argtypes,
-					   bool has_variadic, bool *use_variadic_p)
+					   bool has_variadic, bool *use_variadic_p,
+					   ParseExprKind special_exprkind)
 {
 	char	   *result;
 	HeapTuple	proctup;
@@ -9162,6 +9293,7 @@ generate_function_name(Oid funcid, int nargs, List *argnames, Oid *argtypes,
 	int			p_nvargs;
 	Oid			p_vatype;
 	Oid		   *p_true_typeids;
+	bool		force_qualify = false;
 
 	proctup = SearchSysCache1(PROCOID, ObjectIdGetDatum(funcid));
 	if (!HeapTupleIsValid(proctup))
@@ -9170,6 +9302,17 @@ generate_function_name(Oid funcid, int nargs, List *argnames, Oid *argtypes,
 	proname = NameStr(procform->proname);
 
 	/*
+	 * Because of the parser hacks that avoid fully reserving CUBE and
+	 * ROLLUP, we must schema-qualify functions with those names here.
+	 */
+
+	if (special_exprkind == EXPR_KIND_GROUP_BY)
+	{
+		if (strcmp(proname, "cube") == 0 || strcmp(proname, "rollup") == 0)
+			force_qualify = true;
+	}
+
+	/*
 	 * Determine whether VARIADIC should be printed.  We must do this first
 	 * since it affects the lookup rules in func_get_detail().
 	 *
@@ -9200,14 +9343,23 @@ generate_function_name(Oid funcid, int nargs, List *argnames, Oid *argtypes,
 	/*
 	 * The idea here is to schema-qualify only if the parser would fail to
 	 * resolve the correct function given the unqualified func name with the
-	 * specified argtypes and VARIADIC flag.
+	 * specified argtypes and VARIADIC flag.  But if we already decided to
+	 * force qualification, then we can skip the lookup and pretend we didn't
+	 * find it.
 	 */
-	p_result = func_get_detail(list_make1(makeString(proname)),
-							   NIL, argnames, nargs, argtypes,
-							   !use_variadic, true,
-							   &p_funcid, &p_rettype,
-							   &p_retset, &p_nvargs, &p_vatype,
-							   &p_true_typeids, NULL);
+	if (!force_qualify)
+		p_result = func_get_detail(list_make1(makeString(proname)),
+								   NIL, argnames, nargs, argtypes,
+								   !use_variadic, true,
+								   &p_funcid, &p_rettype,
+								   &p_retset, &p_nvargs, &p_vatype,
+								   &p_true_typeids, NULL);
+	else
+	{
+		p_result = FUNCDETAIL_NOTFOUND;
+		p_funcid = InvalidOid;
+	}
+
 	if ((p_result == FUNCDETAIL_NORMAL ||
 		 p_result == FUNCDETAIL_AGGREGATE ||
 		 p_result == FUNCDETAIL_WINDOWFUNC) &&
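The parenthesization rule added to get_rule_sortgroupclause() above can be sketched like this (illustrative Python, names mine; the real code also decorates Consts with an explicit cast, which is elided here):

```python
# Illustrative sketch: inside a deparsed GROUP BY, a bare call like
# cube(x) or rollup(x) would be re-parsed as the CUBE/ROLLUP grouping
# construct, so function-like expressions always get forced parens;
# other non-trivial expressions get them only under PRETTY_PAREN,
# since otherwise the expression printer emits any needed parens.
def deparse_group_item(expr_kind, text, pretty_paren=False):
    if expr_kind in ("Var", "Const"):
        return text
    function_like = expr_kind in ("FuncExpr", "Aggref", "WindowFunc")
    if function_like or pretty_paren:
        return "(" + text + ")"
    return text

print(deparse_group_item("FuncExpr", "cube(x)"))  # (cube(x)) -- unambiguous
print(deparse_group_item("Var", "a"))             # a
```

This pairs with the forced schema qualification in generate_function_name(): an expression calling a function actually named "cube" deparses as "public.cube(...)" so it cannot be mistaken for the grouping construct either.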
diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index 4dd3f9f..83030cb 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -3158,6 +3158,8 @@ add_unique_group_var(PlannerInfo *root, List *varinfos,
  *	groupExprs - list of expressions being grouped by
  *	input_rows - number of rows estimated to arrive at the group/unique
  *		filter step
+ *	pgset - NULL, or a List** pointing to a grouping set to filter the
+ *		groupExprs against
  *
  * Given the lack of any cross-correlation statistics in the system, it's
  * impossible to do anything really trustworthy with GROUP BY conditions
@@ -3205,11 +3207,13 @@ add_unique_group_var(PlannerInfo *root, List *varinfos,
  * but we don't have the info to do better).
  */
 double
-estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
+estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows,
+					List **pgset)
 {
 	List	   *varinfos = NIL;
 	double		numdistinct;
 	ListCell   *l;
+	int			i;
 
 	/*
 	 * We don't ever want to return an estimate of zero groups, as that tends
@@ -3224,7 +3228,7 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 	 * for normal cases with GROUP BY or DISTINCT, but it is possible for
 	 * corner cases with set operations.)
 	 */
-	if (groupExprs == NIL)
+	if (groupExprs == NIL || (pgset && list_length(*pgset) < 1))
 		return 1.0;
 
 	/*
@@ -3236,6 +3240,7 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 	 */
 	numdistinct = 1.0;
 
+	i = 0;
 	foreach(l, groupExprs)
 	{
 		Node	   *groupexpr = (Node *) lfirst(l);
@@ -3243,6 +3248,10 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 		List	   *varshere;
 		ListCell   *l2;
 
+		/* is expression in this grouping set? */
+		if (pgset && !list_member_int(*pgset, i++))
+			continue;
+
 		/* Short-circuit for expressions returning boolean */
 		if (exprType(groupexpr) == BOOLOID)
 		{
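The effect of the new pgset parameter on estimate_num_groups() can be sketched as follows (illustrative Python only; the helper name is mine):

```python
# Illustrative sketch of the pgset filter: when estimating one grouping
# set, only the grouping expressions whose positional index appears in
# the set contribute to the estimate; an empty set (e.g. the grand-total
# row of a ROLLUP) short-circuits to exactly one group.
def filter_group_exprs(group_exprs, pgset):
    if pgset is not None and len(pgset) < 1:
        return []  # caller returns an estimate of 1.0 group
    if pgset is None:
        return list(group_exprs)
    return [e for i, e in enumerate(group_exprs) if i in pgset]

print(filter_group_exprs(["a", "b", "c"], {0, 2}))  # ['a', 'c']
print(filter_group_exprs(["a", "b", "c"], set()))   # []
```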
diff --git a/src/include/commands/explain.h b/src/include/commands/explain.h
index c9f7223..4df44d0 100644
--- a/src/include/commands/explain.h
+++ b/src/include/commands/explain.h
@@ -83,6 +83,8 @@ extern void ExplainSeparatePlans(ExplainState *es);
 
 extern void ExplainPropertyList(const char *qlabel, List *data,
 					ExplainState *es);
+extern void ExplainPropertyListNested(const char *qlabel, List *data,
+					ExplainState *es);
 extern void ExplainPropertyText(const char *qlabel, const char *value,
 					ExplainState *es);
 extern void ExplainPropertyInteger(const char *qlabel, int value,
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index 59b17f3..4dff20b 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -130,6 +130,8 @@ typedef struct ExprContext
 	Datum	   *ecxt_aggvalues; /* precomputed values for aggs/windowfuncs */
 	bool	   *ecxt_aggnulls;	/* null flags for aggs/windowfuncs */
 
+	Bitmapset  *grouped_cols;   /* which columns exist in current grouping set */
+
 	/* Value to substitute for CaseTestExpr nodes in expression */
 	Datum		caseValue_datum;
 	bool		caseValue_isNull;
@@ -407,6 +409,11 @@ typedef struct EState
 	HeapTuple  *es_epqTuple;	/* array of EPQ substitute tuples */
 	bool	   *es_epqTupleSet; /* true if EPQ tuple is provided */
 	bool	   *es_epqScanDone; /* true if EPQ tuple has been fetched */
+
+	/*
+	 * This is for linking chained aggregate nodes
+	 */
+	struct AggState	   *agg_chain_head;
 } EState;
 
 
@@ -595,6 +602,21 @@ typedef struct AggrefExprState
 } AggrefExprState;
 
 /* ----------------
+ *		GroupingFuncExprState node
+ *
+ * The list of column numbers refers to the input tuples of the Agg node to
+ * which the GroupingFunc belongs, and may contain 0 for references to columns
+ * that are only present in grouping sets processed by different Agg nodes (and
+ * which are therefore always considered "grouping" here).
+ * ----------------
+ */
+typedef struct GroupingFuncExprState
+{
+	ExprState	xprstate;
+	List	   *clauses;		/* integer list of column numbers */
+} GroupingFuncExprState;
+
+/* ----------------
  *		WindowFuncExprState node
  * ----------------
  */
@@ -1743,19 +1765,27 @@ typedef struct GroupState
 /* these structs are private in nodeAgg.c: */
 typedef struct AggStatePerAggData *AggStatePerAgg;
 typedef struct AggStatePerGroupData *AggStatePerGroup;
+typedef struct AggStatePerGroupingSetData *AggStatePerGroupingSet;
 
 typedef struct AggState
 {
 	ScanState	ss;				/* its first field is NodeTag */
 	List	   *aggs;			/* all Aggref nodes in targetlist & quals */
 	int			numaggs;		/* length of list (could be zero!) */
+	int			numsets;		/* number of grouping sets (or 0) */
 	FmgrInfo   *eqfunctions;	/* per-grouping-field equality fns */
 	FmgrInfo   *hashfunctions;	/* per-grouping-field hash fns */
 	AggStatePerAgg peragg;		/* per-Aggref information */
-	MemoryContext aggcontext;	/* memory context for long-lived data */
+	ExprContext **aggcontexts;	/* econtexts for long-lived data (per GS) */
 	ExprContext *tmpcontext;	/* econtext for input expressions */
 	AggStatePerAgg curperagg;	/* identifies currently active aggregate */
+	bool        input_done;     /* indicates end of input */
 	bool		agg_done;		/* indicates completion of Agg scan */
+	bool		chain_done;		/* indicates completion of chained fetch */
+	int			projected_set;	/* The last projected grouping set */
+	int			current_set;	/* The current grouping set being evaluated */
+	Bitmapset **grouped_cols;   /* column groupings for rollup */
+	int        *gset_lengths;	/* lengths of grouping sets */
 	/* these fields are used in AGG_PLAIN and AGG_SORTED modes: */
 	AggStatePerGroup pergroup;	/* per-Aggref-per-group working state */
 	HeapTuple	grp_firstTuple; /* copy of first tuple of current group */
@@ -1765,6 +1795,12 @@ typedef struct AggState
 	List	   *hash_needed;	/* list of columns needed in hash table */
 	bool		table_filled;	/* hash table filled yet? */
 	TupleHashIterator hashiter; /* for iterating through hash table */
+	int			chain_depth;	/* number of chained child nodes */
+	int			chain_rescan;	/* rescan indicator */
+	int			chain_eflags;	/* saved eflags for rewind optimization */
+	bool		chain_top;		/* true for the "top" node in a chain */
+	struct AggState	*chain_head;
+	Tuplestorestate *chain_tuplestore;
 } AggState;
 
 /* ----------------
diff --git a/src/include/nodes/makefuncs.h b/src/include/nodes/makefuncs.h
index 4dff6a0..01d9fed 100644
--- a/src/include/nodes/makefuncs.h
+++ b/src/include/nodes/makefuncs.h
@@ -81,4 +81,6 @@ extern DefElem *makeDefElem(char *name, Node *arg);
 extern DefElem *makeDefElemExtended(char *nameSpace, char *name, Node *arg,
 					DefElemAction defaction);
 
+extern GroupingSet *makeGroupingSet(GroupingSetKind kind, List *content, int location);
+
 #endif   /* MAKEFUNC_H */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 38469ef..a5a2cce 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -131,9 +131,11 @@ typedef enum NodeTag
 	T_RangeVar,
 	T_Expr,
 	T_Var,
+	T_GroupedVar,
 	T_Const,
 	T_Param,
 	T_Aggref,
+	T_GroupingFunc,
 	T_WindowFunc,
 	T_ArrayRef,
 	T_FuncExpr,
@@ -184,6 +186,7 @@ typedef enum NodeTag
 	T_GenericExprState,
 	T_WholeRowVarExprState,
 	T_AggrefExprState,
+	T_GroupingFuncExprState,
 	T_WindowFuncExprState,
 	T_ArrayRefExprState,
 	T_FuncExprState,
@@ -401,6 +404,7 @@ typedef enum NodeTag
 	T_RangeTblFunction,
 	T_WithCheckOption,
 	T_SortGroupClause,
+	T_GroupingSet,
 	T_WindowClause,
 	T_PrivGrantee,
 	T_FuncWithArgs,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 38ed661..74aed2a 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -136,6 +136,8 @@ typedef struct Query
 
 	List	   *groupClause;	/* a list of SortGroupClause's */
 
+	List	   *groupingSets;	/* a list of GroupingSet's if present */
+
 	Node	   *havingQual;		/* qualifications applied to groups */
 
 	List	   *windowClause;	/* a list of WindowClause's */
@@ -960,6 +962,73 @@ typedef struct SortGroupClause
 } SortGroupClause;
 
 /*
+ * GroupingSet -
+ *		representation of CUBE, ROLLUP and GROUPING SETS clauses
+ *
+ * In a Query with grouping sets, the groupClause contains a flat list of
+ * SortGroupClause nodes for each distinct expression used.  The actual
+ * structure of the GROUP BY clause is given by the groupingSets tree.
+ *
+ * In the raw parser output, GroupingSet nodes (of all types except SIMPLE,
+ * which is not used) are potentially mixed in with the expressions in the
+ * groupClause of the SelectStmt.  (An expression can't contain a GroupingSet,
+ * but a list may mix GroupingSet and expression nodes.)  At this stage, the
+ * content of each node is a list of expressions, some of which may be RowExprs
+ * which represent sublists rather than actual row constructors, and nested
+ * GroupingSet nodes where legal in the grammar.  The structure directly
+ * reflects the query syntax.
+ *
+ * In parse analysis, the transformed expressions are used to build the tlist
+ * and groupClause list (of SortGroupClause nodes), and the groupingSets tree
+ * is eventually reduced to a fixed format:
+ *
+ * EMPTY nodes represent (), and obviously have no content
+ *
+ * SIMPLE nodes represent a list of one or more expressions to be treated as an
+ * atom by the enclosing structure; the content is an integer list of
+ * ressortgroupref values (see SortGroupClause)
+ *
+ * CUBE and ROLLUP nodes contain a list of one or more SIMPLE nodes.
+ *
+ * SETS nodes contain a list of EMPTY, SIMPLE, CUBE or ROLLUP nodes, but after
+ * parse analysis they cannot contain more SETS nodes; enough of the syntactic
+ * transforms of the spec have been applied that we no longer have arbitrarily
+ * deep nesting (though we still preserve the use of cube/rollup).
+ *
+ * Note that if the groupingSets tree contains no SIMPLE nodes (only EMPTY
+ * nodes at the leaves), then the groupClause will be empty, but this is still
+ * an aggregation query (similar to using aggs or HAVING without GROUP BY).
+ *
+ * As an example, the following clause:
+ *
+ * GROUP BY GROUPING SETS ((a,b), CUBE(c,(d,e)))
+ *
+ * looks like this after raw parsing:
+ *
+ * SETS( RowExpr(a,b) , CUBE( c, RowExpr(d,e) ) )
+ *
+ * and parse analysis converts it to:
+ *
+ * SETS( SIMPLE(1,2), CUBE( SIMPLE(3), SIMPLE(4,5) ) )
+ */
+typedef enum
+{
+	GROUPING_SET_EMPTY,
+	GROUPING_SET_SIMPLE,
+	GROUPING_SET_ROLLUP,
+	GROUPING_SET_CUBE,
+	GROUPING_SET_SETS
+} GroupingSetKind;
+
+typedef struct GroupingSet
+{
+	NodeTag		type;
+	GroupingSetKind kind;
+	List	   *content;
+	int			location;
+} GroupingSet;
+
+/*
  * WindowClause -
  *		transformed representation of WINDOW and OVER clauses
  *
diff --git a/src/include/nodes/pg_list.h b/src/include/nodes/pg_list.h
index a175000..729456d 100644
--- a/src/include/nodes/pg_list.h
+++ b/src/include/nodes/pg_list.h
@@ -229,8 +229,9 @@ extern List *list_union_int(const List *list1, const List *list2);
 extern List *list_union_oid(const List *list1, const List *list2);
 
 extern List *list_intersection(const List *list1, const List *list2);
+extern List *list_intersection_int(const List *list1, const List *list2);
 
-/* currently, there's no need for list_intersection_int etc */
+/* currently, there's no need for list_intersection_ptr etc */
 
 extern List *list_difference(const List *list1, const List *list2);
 extern List *list_difference_ptr(const List *list1, const List *list2);
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index f6683f0..a61b11f 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -656,6 +656,7 @@ typedef enum AggStrategy
 {
 	AGG_PLAIN,					/* simple agg across all input rows */
 	AGG_SORTED,					/* grouped agg, input must be sorted */
+	AGG_CHAINED,				/* chained agg, input must be sorted */
 	AGG_HASHED					/* grouped agg, use internal hashtable */
 } AggStrategy;
 
@@ -663,10 +664,12 @@ typedef struct Agg
 {
 	Plan		plan;
 	AggStrategy aggstrategy;
+	int			chain_depth;	/* number of associated ChainAggs in tree */
 	int			numCols;		/* number of grouping columns */
 	AttrNumber *grpColIdx;		/* their indexes in the target list */
 	Oid		   *grpOperators;	/* equality operators to compare with */
 	long		numGroups;		/* estimated number of groups in input */
+	List	   *groupingSets;	/* grouping sets to use */
 } Agg;
 
 /* ----------------
diff --git a/src/include/nodes/primnodes.h b/src/include/nodes/primnodes.h
index 4f1d234..41fe778 100644
--- a/src/include/nodes/primnodes.h
+++ b/src/include/nodes/primnodes.h
@@ -160,6 +160,22 @@ typedef struct Var
 } Var;
 
 /*
+ * GroupedVar - expression node representing a variable that might be
+ * involved in a grouping set.
+ *
+ * This is identical to a Var node except in execution; when evaluated, it
+ * is conditionally NULL depending on the active grouping set.  Vars are
+ * converted to GroupedVars (if needed) only late in planning.
+ *
+ * (Because they appear only late in planning, most code that handles Vars
+ * doesn't need to know about these, either because they don't exist yet or
+ * because optimizations specific to Vars are intentionally not applied to
+ * GroupedVars.)
+ */
+
+typedef Var GroupedVar;
+
+/*
  * Const
  */
 typedef struct Const
@@ -268,6 +284,41 @@ typedef struct Aggref
 } Aggref;
 
 /*
+ * GroupingFunc
+ *
+ * A GroupingFunc is a GROUPING(...) expression, which behaves in many ways
+ * like an aggregate function (e.g. it "belongs" to a specific query level,
+ * which might not be the one immediately containing it), but also differs in
+ * an important respect: it never evaluates its arguments, they merely
+ * designate expressions from the GROUP BY clause of the query level to which
+ * it belongs.
+ *
+ * The spec defines the evaluation of GROUPING() purely by syntactic
+ * replacement, but we make it a real expression for optimization purposes so
+ * that one Agg node can handle multiple grouping sets at once.  Evaluating the
+ * result only needs the column positions to check against the grouping set
+ * being projected.  However, for EXPLAIN to produce meaningful output, we have
+ * to keep the original expressions around, since expression deparse does not
+ * give us any feasible way to get at the GROUP BY clause.
+ *
+ * Also, we treat two GroupingFunc nodes as equal if they have equal argument
+ * lists and agglevelsup, without comparing the refs and cols annotations.
+ *
+ * In raw parse output we have only the args list; parse analysis fills in the
+ * refs list, and the planner fills in the cols list.
+ */
+typedef struct GroupingFunc
+{
+	Expr		xpr;
+	List	   *args;			/* arguments, not evaluated but kept for
+								 * benefit of EXPLAIN etc. */
+	List	   *refs;			/* ressortgrouprefs of arguments */
+	List	   *cols;			/* actual column positions set by planner */
+	Index		agglevelsup;	/* same as Aggref.agglevelsup */
+	int			location;		/* token location */
+} GroupingFunc;
+
+/*
  * WindowFunc
  */
 typedef struct WindowFunc
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index 334cf51..9760a14 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -260,6 +260,11 @@ typedef struct PlannerInfo
 
 	/* optional private data for join_search_hook, e.g., GEQO */
 	void	   *join_search_private;
+
+	/* for GroupedVar fixup in setrefs */
+	AttrNumber *groupColIdx;
+	/* for GroupingFunc fixup in setrefs */
+	AttrNumber *grouping_map;
 } PlannerInfo;
 
 
diff --git a/src/include/optimizer/planmain.h b/src/include/optimizer/planmain.h
index fa72918..47cef55 100644
--- a/src/include/optimizer/planmain.h
+++ b/src/include/optimizer/planmain.h
@@ -58,6 +58,8 @@ extern Sort *make_sort_from_groupcols(PlannerInfo *root, List *groupcls,
 extern Agg *make_agg(PlannerInfo *root, List *tlist, List *qual,
 		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
 		 int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators,
+		 List *groupingSets,
+		 int *chain_depth_p,
 		 long numGroups,
 		 Plan *lefttree);
 extern WindowAgg *make_windowagg(PlannerInfo *root, List *tlist,
diff --git a/src/include/optimizer/tlist.h b/src/include/optimizer/tlist.h
index 3dc8bab..b0f0f19 100644
--- a/src/include/optimizer/tlist.h
+++ b/src/include/optimizer/tlist.h
@@ -43,6 +43,9 @@ extern Node *get_sortgroupclause_expr(SortGroupClause *sgClause,
 extern List *get_sortgrouplist_exprs(List *sgClauses,
 						List *targetList);
 
+extern SortGroupClause *get_sortgroupref_clause(Index sortref,
+					 List *clauses);
+
 extern Oid *extract_grouping_ops(List *groupClause);
 extern AttrNumber *extract_grouping_cols(List *groupClause, List *tlist);
 extern bool grouping_is_sortable(List *groupClause);
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index 7c243ec..0e4b719 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -98,6 +98,7 @@ PG_KEYWORD("cost", COST, UNRESERVED_KEYWORD)
 PG_KEYWORD("create", CREATE, RESERVED_KEYWORD)
 PG_KEYWORD("cross", CROSS, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("csv", CSV, UNRESERVED_KEYWORD)
+PG_KEYWORD("cube", CUBE, UNRESERVED_KEYWORD)
 PG_KEYWORD("current", CURRENT_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("current_catalog", CURRENT_CATALOG, RESERVED_KEYWORD)
 PG_KEYWORD("current_date", CURRENT_DATE, RESERVED_KEYWORD)
@@ -173,6 +174,7 @@ PG_KEYWORD("grant", GRANT, RESERVED_KEYWORD)
 PG_KEYWORD("granted", GRANTED, UNRESERVED_KEYWORD)
 PG_KEYWORD("greatest", GREATEST, COL_NAME_KEYWORD)
 PG_KEYWORD("group", GROUP_P, RESERVED_KEYWORD)
+PG_KEYWORD("grouping", GROUPING, COL_NAME_KEYWORD)
 PG_KEYWORD("handler", HANDLER, UNRESERVED_KEYWORD)
 PG_KEYWORD("having", HAVING, RESERVED_KEYWORD)
 PG_KEYWORD("header", HEADER_P, UNRESERVED_KEYWORD)
@@ -324,6 +326,7 @@ PG_KEYWORD("revoke", REVOKE, UNRESERVED_KEYWORD)
 PG_KEYWORD("right", RIGHT, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("role", ROLE, UNRESERVED_KEYWORD)
 PG_KEYWORD("rollback", ROLLBACK, UNRESERVED_KEYWORD)
+PG_KEYWORD("rollup", ROLLUP, UNRESERVED_KEYWORD)
 PG_KEYWORD("row", ROW, COL_NAME_KEYWORD)
 PG_KEYWORD("rows", ROWS, UNRESERVED_KEYWORD)
 PG_KEYWORD("rule", RULE, UNRESERVED_KEYWORD)
@@ -342,6 +345,7 @@ PG_KEYWORD("session", SESSION, UNRESERVED_KEYWORD)
 PG_KEYWORD("session_user", SESSION_USER, RESERVED_KEYWORD)
 PG_KEYWORD("set", SET, UNRESERVED_KEYWORD)
 PG_KEYWORD("setof", SETOF, COL_NAME_KEYWORD)
+PG_KEYWORD("sets", SETS, UNRESERVED_KEYWORD)
 PG_KEYWORD("share", SHARE, UNRESERVED_KEYWORD)
 PG_KEYWORD("show", SHOW, UNRESERVED_KEYWORD)
 PG_KEYWORD("similar", SIMILAR, TYPE_FUNC_NAME_KEYWORD)
diff --git a/src/include/parser/parse_agg.h b/src/include/parser/parse_agg.h
index 91a0706..6a5f9bb 100644
--- a/src/include/parser/parse_agg.h
+++ b/src/include/parser/parse_agg.h
@@ -18,11 +18,16 @@
 extern void transformAggregateCall(ParseState *pstate, Aggref *agg,
 					   List *args, List *aggorder,
 					   bool agg_distinct);
+
+extern Node *transformGroupingFunc(ParseState *pstate, GroupingFunc *g);
+
 extern void transformWindowFuncCall(ParseState *pstate, WindowFunc *wfunc,
 						WindowDef *windef);
 
 extern void parseCheckAggregates(ParseState *pstate, Query *qry);
 
+extern List *expand_grouping_sets(List *groupingSets, int limit);
+
 extern int	get_aggregate_argtypes(Aggref *aggref, Oid *inputTypes);
 
 extern Oid resolve_aggregate_transtype(Oid aggfuncid,
diff --git a/src/include/parser/parse_clause.h b/src/include/parser/parse_clause.h
index 6a4438f..fdf6732 100644
--- a/src/include/parser/parse_clause.h
+++ b/src/include/parser/parse_clause.h
@@ -27,6 +27,7 @@ extern Node *transformWhereClause(ParseState *pstate, Node *clause,
 extern Node *transformLimitClause(ParseState *pstate, Node *clause,
 					 ParseExprKind exprKind, const char *constructName);
 extern List *transformGroupClause(ParseState *pstate, List *grouplist,
+								  List **groupingSets,
 					 List **targetlist, List *sortClause,
 					 ParseExprKind exprKind, bool useSQL99);
 extern List *transformSortClause(ParseState *pstate, List *orderlist,
diff --git a/src/include/utils/selfuncs.h b/src/include/utils/selfuncs.h
index bf69f2a..fdca713 100644
--- a/src/include/utils/selfuncs.h
+++ b/src/include/utils/selfuncs.h
@@ -185,7 +185,7 @@ extern void mergejoinscansel(PlannerInfo *root, Node *clause,
 				 Selectivity *rightstart, Selectivity *rightend);
 
 extern double estimate_num_groups(PlannerInfo *root, List *groupExprs,
-					double input_rows);
+								  double input_rows, List **pgset);
 
 extern Selectivity estimate_hash_bucketsize(PlannerInfo *root, Node *hashkey,
 						 double nbuckets);
diff --git a/src/test/regress/expected/groupingsets.out b/src/test/regress/expected/groupingsets.out
new file mode 100644
index 0000000..fbfb424
--- /dev/null
+++ b/src/test/regress/expected/groupingsets.out
@@ -0,0 +1,575 @@
+--
+-- grouping sets
+--
+-- test data sources
+create temp view gstest1(a,b,v)
+  as values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),
+            (2,3,15),
+            (3,3,16),(3,4,17),
+            (4,1,18),(4,1,19);
+create temp table gstest2 (a integer, b integer, c integer, d integer,
+                           e integer, f integer, g integer, h integer);
+copy gstest2 from stdin;
+create temp table gstest_empty (a integer, b integer, v integer);
+create function gstest_data(v integer, out a integer, out b integer)
+  returns setof record
+  as $f$
+    begin
+      return query select v, i from generate_series(1,3) i;
+    end;
+  $f$ language plpgsql;
+-- basic functionality
+-- simple rollup with multiple plain aggregates, with and without ordering
+-- (and with ordering differing from grouping)
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b);
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 | 1 |        0 |  21 |     2 |  11
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 1 |   |        1 |  60 |     5 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 2 |   |        1 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+ 3 |   |        1 |  33 |     2 |  17
+ 4 | 1 |        0 |  37 |     2 |  19
+ 4 |   |        1 |  37 |     2 |  19
+   |   |        3 | 145 |    10 |  19
+(12 rows)
+
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by a,b;
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 | 1 |        0 |  21 |     2 |  11
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 1 |   |        1 |  60 |     5 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 2 |   |        1 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+ 3 |   |        1 |  33 |     2 |  17
+ 4 | 1 |        0 |  37 |     2 |  19
+ 4 |   |        1 |  37 |     2 |  19
+   |   |        3 | 145 |    10 |  19
+(12 rows)
+
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by b desc, a;
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 |   |        1 |  60 |     5 |  14
+ 2 |   |        1 |  15 |     1 |  15
+ 3 |   |        1 |  33 |     2 |  17
+ 4 |   |        1 |  37 |     2 |  19
+   |   |        3 | 145 |    10 |  19
+ 3 | 4 |        0 |  17 |     1 |  17
+ 1 | 3 |        0 |  14 |     1 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 1 |        0 |  21 |     2 |  11
+ 4 | 1 |        0 |  37 |     2 |  19
+(12 rows)
+
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by coalesce(a,0)+coalesce(b,0);
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+   |   |        3 | 145 |    10 |  19
+ 1 |   |        1 |  60 |     5 |  14
+ 1 | 1 |        0 |  21 |     2 |  11
+ 2 |   |        1 |  15 |     1 |  15
+ 3 |   |        1 |  33 |     2 |  17
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 4 |   |        1 |  37 |     2 |  19
+ 4 | 1 |        0 |  37 |     2 |  19
+ 2 | 3 |        0 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+(12 rows)
+
+-- various types of ordered aggs
+select a, b, grouping(a,b),
+       array_agg(v order by v),
+       string_agg(v::text, ':' order by v desc),
+       percentile_disc(0.5) within group (order by v),
+       rank(1,2,12) within group (order by a,b,v)
+  from gstest1 group by rollup (a,b) order by a,b;
+ a | b | grouping |            array_agg            |          string_agg           | percentile_disc | rank 
+---+---+----------+---------------------------------+-------------------------------+-----------------+------
+ 1 | 1 |        0 | {10,11}                         | 11:10                         |              10 |    3
+ 1 | 2 |        0 | {12,13}                         | 13:12                         |              12 |    1
+ 1 | 3 |        0 | {14}                            | 14                            |              14 |    1
+ 1 |   |        1 | {10,11,12,13,14}                | 14:13:12:11:10                |              12 |    3
+ 2 | 3 |        0 | {15}                            | 15                            |              15 |    1
+ 2 |   |        1 | {15}                            | 15                            |              15 |    1
+ 3 | 3 |        0 | {16}                            | 16                            |              16 |    1
+ 3 | 4 |        0 | {17}                            | 17                            |              17 |    1
+ 3 |   |        1 | {16,17}                         | 17:16                         |              16 |    1
+ 4 | 1 |        0 | {18,19}                         | 19:18                         |              18 |    1
+ 4 |   |        1 | {18,19}                         | 19:18                         |              18 |    1
+   |   |        3 | {10,11,12,13,14,15,16,17,18,19} | 19:18:17:16:15:14:13:12:11:10 |              14 |    3
+(12 rows)
+
+-- test usage of grouped columns in direct args of aggs
+select grouping(a), a, array_agg(b),
+       rank(a) within group (order by b nulls first),
+       rank(a) within group (order by b nulls last)
+  from (values (1,1),(1,4),(1,5),(3,1),(3,2)) v(a,b)
+ group by rollup (a) order by a;
+ grouping | a |  array_agg  | rank | rank 
+----------+---+-------------+------+------
+        0 | 1 | {1,4,5}     |    1 |    1
+        0 | 3 | {1,2}       |    3 |    3
+        1 |   | {1,4,5,1,2} |    1 |    6
+(3 rows)
+
+-- nesting with window functions
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by rollup (a,b) order by rsum, a, b;
+ a | b | sum | rsum 
+---+---+-----+------
+ 1 | 1 |   8 |    8
+ 1 | 2 |   2 |   10
+ 1 |   |  10 |   20
+ 2 | 2 |   2 |   22
+ 2 |   |   2 |   24
+   |   |  12 |   36
+(6 rows)
+
+-- empty input: first is 0 rows, second 1, third 3 etc.
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),a);
+ a | b | sum | count 
+---+---+-----+-------
+(0 rows)
+
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),());
+ a | b | sum | count 
+---+---+-----+-------
+   |   |     |     0
+(1 row)
+
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),(),(),());
+ a | b | sum | count 
+---+---+-----+-------
+   |   |     |     0
+   |   |     |     0
+   |   |     |     0
+(3 rows)
+
+select sum(v), count(*) from gstest_empty group by grouping sets ((),(),());
+ sum | count 
+-----+-------
+     |     0
+     |     0
+     |     0
+(3 rows)
+
+-- empty input with joins tests some important code paths
+select t1.a, t2.b, sum(t1.v), count(*) from gstest_empty t1, gstest_empty t2
+ group by grouping sets ((t1.a,t2.b),());
+ a | b | sum | count 
+---+---+-----+-------
+   |   |     |     0
+(1 row)
+
+-- simple joins, var resolution, GROUPING on join vars
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1, gstest2 t2
+ group by grouping sets ((t1.a, t2.b), ());
+ a | b | grouping | sum  | max 
+---+---+----------+------+-----
+ 1 | 1 |        0 |  420 |   1
+ 1 | 2 |        0 |  120 |   2
+ 2 | 1 |        0 |  105 |   1
+ 2 | 2 |        0 |   30 |   2
+ 3 | 1 |        0 |  231 |   1
+ 3 | 2 |        0 |   66 |   2
+ 4 | 1 |        0 |  259 |   1
+ 4 | 2 |        0 |   74 |   2
+   |   |        3 | 1305 |   2
+(9 rows)
+
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1 join gstest2 t2 on (t1.a=t2.a)
+ group by grouping sets ((t1.a, t2.b), ());
+ a | b | grouping | sum | max 
+---+---+----------+-----+-----
+ 1 | 1 |        0 | 420 |   1
+ 1 | 2 |        0 |  60 |   1
+ 2 | 2 |        0 |  15 |   2
+   |   |        3 | 495 |   2
+(4 rows)
+
+select a, b, grouping(a, b), sum(t1.v), max(t2.c)
+  from gstest1 t1 join gstest2 t2 using (a,b)
+ group by grouping sets ((a, b), ());
+ a | b | grouping | sum | max 
+---+---+----------+-----+-----
+ 1 | 1 |        0 | 147 |   2
+ 1 | 2 |        0 |  25 |   2
+   |   |        3 | 172 |   2
+(3 rows)
+
+-- simple rescan tests
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by rollup (a,b);
+ a | b | sum 
+---+---+-----
+ 1 | 1 |   1
+ 1 | 2 |   1
+ 1 | 3 |   1
+ 1 |   |   3
+ 2 | 1 |   2
+ 2 | 2 |   2
+ 2 | 3 |   2
+ 2 |   |   6
+   |   |   9
+(9 rows)
+
+select *
+  from (values (1),(2)) v(x),
+       lateral (select a, b, sum(v.x) from gstest_data(v.x) group by rollup (a,b)) s;
+ERROR:  aggregate functions are not allowed in FROM clause of their own query level
+LINE 3:        lateral (select a, b, sum(v.x) from gstest_data(v.x) ...
+                                     ^
+-- min max optimisation should still work with GROUP BY ()
+explain (costs off)
+  select min(unique1) from tenk1 GROUP BY ();
+                         QUERY PLAN                         
+------------------------------------------------------------
+ Result
+   InitPlan 1 (returns $0)
+     ->  Limit
+           ->  Index Only Scan using tenk1_unique1 on tenk1
+                 Index Cond: (unique1 IS NOT NULL)
+(5 rows)
+
+-- Views with GROUPING SET queries
+CREATE VIEW gstest_view AS select a, b, grouping(a,b), sum(c), count(*), max(c)
+  from gstest2 group by rollup ((a,b,c),(c,d));
+NOTICE:  view "gstest_view" will be a temporary view
+select pg_get_viewdef('gstest_view'::regclass, true);
+                                pg_get_viewdef                                 
+-------------------------------------------------------------------------------
+  SELECT gstest2.a,                                                           +
+     gstest2.b,                                                               +
+     GROUPING(gstest2.a, gstest2.b) AS "grouping",                            +
+     sum(gstest2.c) AS sum,                                                   +
+     count(*) AS count,                                                       +
+     max(gstest2.c) AS max                                                    +
+    FROM gstest2                                                              +
+   GROUP BY ROLLUP((gstest2.a, gstest2.b, gstest2.c), (gstest2.c, gstest2.d));
+(1 row)
+
+-- Nested queries with 3 or more levels of nesting
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+ grouping 
+----------
+        0
+        0
+        0
+(3 rows)
+
+select(select (select grouping(e,f) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+ grouping 
+----------
+        0
+        1
+        3
+(3 rows)
+
+select(select (select grouping(c) from (values (1)) v2(c) GROUP BY c) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+ grouping 
+----------
+        0
+        0
+        0
+(3 rows)
+
+-- Combinations of operations
+select a, b, c, d from gstest2 group by rollup(a,b),grouping sets(c,d);
+ a | b | c | d 
+---+---+---+---
+ 1 | 1 | 1 |  
+ 1 |   | 1 |  
+   |   | 1 |  
+ 1 | 1 | 2 |  
+ 1 | 2 | 2 |  
+ 1 |   | 2 |  
+ 2 | 2 | 2 |  
+ 2 |   | 2 |  
+   |   | 2 |  
+ 1 | 1 |   | 1
+ 1 |   |   | 1
+   |   |   | 1
+ 1 | 1 |   | 2
+ 1 | 2 |   | 2
+ 1 |   |   | 2
+ 2 | 2 |   | 2
+ 2 |   |   | 2
+   |   |   | 2
+(18 rows)
+
+select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
+ a | b 
+---+---
+ 1 | 2
+ 2 | 3
+(2 rows)
+
+-- Tests for chained aggregates
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by grouping sets ((a,b),(a+1,b+1),(a+2,b+2));
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 | 1 |        0 |  21 |     2 |  11
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+ 4 | 1 |        0 |  37 |     2 |  19
+   |   |        3 |  21 |     2 |  11
+   |   |        3 |  25 |     2 |  13
+   |   |        3 |  14 |     1 |  14
+   |   |        3 |  15 |     1 |  15
+   |   |        3 |  16 |     1 |  16
+   |   |        3 |  17 |     1 |  17
+   |   |        3 |  37 |     2 |  19
+   |   |        3 |  21 |     2 |  11
+   |   |        3 |  25 |     2 |  13
+   |   |        3 |  14 |     1 |  14
+   |   |        3 |  15 |     1 |  15
+   |   |        3 |  16 |     1 |  16
+   |   |        3 |  17 |     1 |  17
+   |   |        3 |  37 |     2 |  19
+(21 rows)
+
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP((e+1),(f+1));
+ grouping 
+----------
+        0
+        0
+        0
+(3 rows)
+
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY CUBE((e+1),(f+1)) ORDER BY (e+1),(f+1);
+ grouping 
+----------
+        0
+        0
+        0
+        0
+(4 rows)
+
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by cube (a,b) order by rsum, a, b;
+ a | b | sum | rsum 
+---+---+-----+------
+ 1 | 1 |   8 |    8
+ 1 | 2 |   2 |   10
+ 1 |   |  10 |   20
+ 2 | 2 |   2 |   22
+ 2 |   |   2 |   24
+   | 1 |   8 |   32
+   | 2 |   4 |   36
+   |   |  12 |   48
+(8 rows)
+
+select a, b, sum(c) from (values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),(2,3,15),(3,3,16),(3,4,17),(4,1,18),(4,1,19)) v(a,b,c) group by rollup (a,b);
+ a | b | sum 
+---+---+-----
+ 1 | 1 |  21
+ 1 | 2 |  25
+ 1 | 3 |  14
+ 1 |   |  60
+ 2 | 3 |  15
+ 2 |   |  15
+ 3 | 3 |  16
+ 3 | 4 |  17
+ 3 |   |  33
+ 4 | 1 |  37
+ 4 |   |  37
+   |   | 145
+(12 rows)
+
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by cube (a,b) order by a,b;
+ a | b | sum 
+---+---+-----
+ 1 | 1 |   1
+ 1 | 2 |   1
+ 1 | 3 |   1
+ 1 |   |   3
+ 2 | 1 |   2
+ 2 | 2 |   2
+ 2 | 3 |   2
+ 2 |   |   6
+   | 1 |   3
+   | 2 |   3
+   | 3 |   3
+   |   |   9
+(12 rows)
+
+-- Agg level check. This query should error out.
+select (select grouping(a,b) from gstest2) from gstest2 group by a,b;
+ERROR:  Arguments to GROUPING must be grouping expressions of the associated query level
+LINE 1: select (select grouping(a,b) from gstest2) from gstest2 grou...
+                                ^
+-- Nested queries
+select a, b, sum(c), count(*) from gstest2 group by grouping sets (rollup(a,b),a);
+ a | b | sum | count 
+---+---+-----+-------
+ 1 | 1 |   8 |     7
+ 1 | 2 |   2 |     1
+ 1 |   |  10 |     8
+ 1 |   |  10 |     8
+ 2 | 2 |   2 |     1
+ 2 |   |   2 |     1
+ 2 |   |   2 |     1
+   |   |  12 |     9
+(8 rows)
+
+-- HAVING queries
+select ten, sum(distinct four) from onek a
+group by grouping sets((ten,four),(ten))
+having exists (select 1 from onek b where sum(distinct a.four) = b.four);
+ ten | sum 
+-----+-----
+   0 |   0
+   0 |   2
+   0 |   2
+   1 |   1
+   1 |   3
+   2 |   0
+   2 |   2
+   2 |   2
+   3 |   1
+   3 |   3
+   4 |   0
+   4 |   2
+   4 |   2
+   5 |   1
+   5 |   3
+   6 |   0
+   6 |   2
+   6 |   2
+   7 |   1
+   7 |   3
+   8 |   0
+   8 |   2
+   8 |   2
+   9 |   1
+   9 |   3
+(25 rows)
+
+-- FILTER queries
+select ten, sum(distinct four) filter (where four::text ~ '123') from onek a
+group by rollup(ten);
+ ten | sum 
+-----+-----
+   0 |    
+   1 |    
+   2 |    
+   3 |    
+   4 |    
+   5 |    
+   6 |    
+   7 |    
+   8 |    
+   9 |    
+     |    
+(11 rows)
+
+-- More rescan tests
+select * from (values (1),(2)) v(a) left join lateral (select v.a, four, ten, count(*) from onek group by cube(four,ten)) s on true order by v.a,four,ten;
+ a | a | four | ten | count 
+---+---+------+-----+-------
+ 1 | 1 |    0 |   0 |    50
+ 1 | 1 |    0 |   2 |    50
+ 1 | 1 |    0 |   4 |    50
+ 1 | 1 |    0 |   6 |    50
+ 1 | 1 |    0 |   8 |    50
+ 1 | 1 |    0 |     |   250
+ 1 | 1 |    1 |   1 |    50
+ 1 | 1 |    1 |   3 |    50
+ 1 | 1 |    1 |   5 |    50
+ 1 | 1 |    1 |   7 |    50
+ 1 | 1 |    1 |   9 |    50
+ 1 | 1 |    1 |     |   250
+ 1 | 1 |    2 |   0 |    50
+ 1 | 1 |    2 |   2 |    50
+ 1 | 1 |    2 |   4 |    50
+ 1 | 1 |    2 |   6 |    50
+ 1 | 1 |    2 |   8 |    50
+ 1 | 1 |    2 |     |   250
+ 1 | 1 |    3 |   1 |    50
+ 1 | 1 |    3 |   3 |    50
+ 1 | 1 |    3 |   5 |    50
+ 1 | 1 |    3 |   7 |    50
+ 1 | 1 |    3 |   9 |    50
+ 1 | 1 |    3 |     |   250
+ 1 | 1 |      |   0 |   100
+ 1 | 1 |      |   1 |   100
+ 1 | 1 |      |   2 |   100
+ 1 | 1 |      |   3 |   100
+ 1 | 1 |      |   4 |   100
+ 1 | 1 |      |   5 |   100
+ 1 | 1 |      |   6 |   100
+ 1 | 1 |      |   7 |   100
+ 1 | 1 |      |   8 |   100
+ 1 | 1 |      |   9 |   100
+ 1 | 1 |      |     |  1000
+ 2 | 2 |    0 |   0 |    50
+ 2 | 2 |    0 |   2 |    50
+ 2 | 2 |    0 |   4 |    50
+ 2 | 2 |    0 |   6 |    50
+ 2 | 2 |    0 |   8 |    50
+ 2 | 2 |    0 |     |   250
+ 2 | 2 |    1 |   1 |    50
+ 2 | 2 |    1 |   3 |    50
+ 2 | 2 |    1 |   5 |    50
+ 2 | 2 |    1 |   7 |    50
+ 2 | 2 |    1 |   9 |    50
+ 2 | 2 |    1 |     |   250
+ 2 | 2 |    2 |   0 |    50
+ 2 | 2 |    2 |   2 |    50
+ 2 | 2 |    2 |   4 |    50
+ 2 | 2 |    2 |   6 |    50
+ 2 | 2 |    2 |   8 |    50
+ 2 | 2 |    2 |     |   250
+ 2 | 2 |    3 |   1 |    50
+ 2 | 2 |    3 |   3 |    50
+ 2 | 2 |    3 |   5 |    50
+ 2 | 2 |    3 |   7 |    50
+ 2 | 2 |    3 |   9 |    50
+ 2 | 2 |    3 |     |   250
+ 2 | 2 |      |   0 |   100
+ 2 | 2 |      |   1 |   100
+ 2 | 2 |      |   2 |   100
+ 2 | 2 |      |   3 |   100
+ 2 | 2 |      |   4 |   100
+ 2 | 2 |      |   5 |   100
+ 2 | 2 |      |   6 |   100
+ 2 | 2 |      |   7 |   100
+ 2 | 2 |      |   8 |   100
+ 2 | 2 |      |   9 |   100
+ 2 | 2 |      |     |  1000
+(70 rows)
+
+select array(select row(v.a,s1.*) from (select two,four, count(*) from onek group by cube(two,four) order by two,four) s1) from (values (1),(2)) v(a);
+                                                                        array                                                                         
+------------------------------------------------------------------------------------------------------------------------------------------------------
+ {"(1,0,0,250)","(1,0,2,250)","(1,0,,500)","(1,1,1,250)","(1,1,3,250)","(1,1,,500)","(1,,0,250)","(1,,1,250)","(1,,2,250)","(1,,3,250)","(1,,,1000)"}
+ {"(2,0,0,250)","(2,0,2,250)","(2,0,,500)","(2,1,1,250)","(2,1,3,250)","(2,1,,500)","(2,,0,250)","(2,,1,250)","(2,,2,250)","(2,,3,250)","(2,,,1000)"}
+(2 rows)
+
+-- end
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index 6d3b865..d2ed619 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -83,7 +83,7 @@ test: select_into select_distinct select_distinct_on select_implicit select_havi
 # ----------
 # Another group of parallel tests
 # ----------
-test: brin gin gist spgist privileges security_label collate matview lock replica_identity rowsecurity object_address
+test: brin gin gist spgist privileges security_label collate matview lock replica_identity rowsecurity object_address groupingsets
 
 # ----------
 # Another group of parallel tests
diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule
index 8326894..fc132b1 100644
--- a/src/test/regress/serial_schedule
+++ b/src/test/regress/serial_schedule
@@ -85,6 +85,7 @@ test: union
 test: case
 test: join
 test: aggregates
+test: groupingsets
 test: transactions
 ignore: random
 test: random
diff --git a/src/test/regress/sql/groupingsets.sql b/src/test/regress/sql/groupingsets.sql
new file mode 100644
index 0000000..aebcbbb
--- /dev/null
+++ b/src/test/regress/sql/groupingsets.sql
@@ -0,0 +1,153 @@
+--
+-- grouping sets
+--
+
+-- test data sources
+
+create temp view gstest1(a,b,v)
+  as values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),
+            (2,3,15),
+            (3,3,16),(3,4,17),
+            (4,1,18),(4,1,19);
+
+create temp table gstest2 (a integer, b integer, c integer, d integer,
+                           e integer, f integer, g integer, h integer);
+copy gstest2 from stdin;
+1	1	1	1	1	1	1	1
+1	1	1	1	1	1	1	2
+1	1	1	1	1	1	2	2
+1	1	1	1	1	2	2	2
+1	1	1	1	2	2	2	2
+1	1	1	2	2	2	2	2
+1	1	2	2	2	2	2	2
+1	2	2	2	2	2	2	2
+2	2	2	2	2	2	2	2
+\.
+
+create temp table gstest_empty (a integer, b integer, v integer);
+
+create function gstest_data(v integer, out a integer, out b integer)
+  returns setof record
+  as $f$
+    begin
+      return query select v, i from generate_series(1,3) i;
+    end;
+  $f$ language plpgsql;
+
+-- basic functionality
+
+-- simple rollup with multiple plain aggregates, with and without ordering
+-- (and with ordering differing from grouping)
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b);
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by a,b;
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by b desc, a;
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by coalesce(a,0)+coalesce(b,0);
+
+-- various types of ordered aggs
+select a, b, grouping(a,b),
+       array_agg(v order by v),
+       string_agg(v::text, ':' order by v desc),
+       percentile_disc(0.5) within group (order by v),
+       rank(1,2,12) within group (order by a,b,v)
+  from gstest1 group by rollup (a,b) order by a,b;
+
+-- test usage of grouped columns in direct args of aggs
+select grouping(a), a, array_agg(b),
+       rank(a) within group (order by b nulls first),
+       rank(a) within group (order by b nulls last)
+  from (values (1,1),(1,4),(1,5),(3,1),(3,2)) v(a,b)
+ group by rollup (a) order by a;
+
+-- nesting with window functions
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by rollup (a,b) order by rsum, a, b;
+
+-- empty input: first is 0 rows, second 1, third 3 etc.
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),a);
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),());
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),(),(),());
+select sum(v), count(*) from gstest_empty group by grouping sets ((),(),());
+
+-- empty input with joins tests some important code paths
+select t1.a, t2.b, sum(t1.v), count(*) from gstest_empty t1, gstest_empty t2
+ group by grouping sets ((t1.a,t2.b),());
+
+-- simple joins, var resolution, GROUPING on join vars
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1, gstest2 t2
+ group by grouping sets ((t1.a, t2.b), ());
+
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1 join gstest2 t2 on (t1.a=t2.a)
+ group by grouping sets ((t1.a, t2.b), ());
+
+select a, b, grouping(a, b), sum(t1.v), max(t2.c)
+  from gstest1 t1 join gstest2 t2 using (a,b)
+ group by grouping sets ((a, b), ());
+
+-- simple rescan tests
+
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by rollup (a,b);
+
+select *
+  from (values (1),(2)) v(x),
+       lateral (select a, b, sum(v.x) from gstest_data(v.x) group by rollup (a,b)) s;
+
+-- min max optimisation should still work with GROUP BY ()
+explain (costs off)
+  select min(unique1) from tenk1 GROUP BY ();
+
+-- Views with GROUPING SET queries
+CREATE VIEW gstest_view AS select a, b, grouping(a,b), sum(c), count(*), max(c)
+  from gstest2 group by rollup ((a,b,c),(c,d));
+
+select pg_get_viewdef('gstest_view'::regclass, true);
+
+-- Nested queries with 3 or more levels of nesting
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+select(select (select grouping(e,f) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+select(select (select grouping(c) from (values (1)) v2(c) GROUP BY c) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+
+-- Combinations of operations
+select a, b, c, d from gstest2 group by rollup(a,b),grouping sets(c,d);
+select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
+
+-- Tests for chained aggregates
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by grouping sets ((a,b),(a+1,b+1),(a+2,b+2));
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP((e+1),(f+1));
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY CUBE((e+1),(f+1)) ORDER BY (e+1),(f+1);
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by cube (a,b) order by rsum, a, b;
+select a, b, sum(c) from (values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),(2,3,15),(3,3,16),(3,4,17),(4,1,18),(4,1,19)) v(a,b,c) group by rollup (a,b);
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by cube (a,b) order by a,b;
+
+
+-- Agg level check. This query should error out.
+select (select grouping(a,b) from gstest2) from gstest2 group by a,b;
+
+--Nested queries
+select a, b, sum(c), count(*) from gstest2 group by grouping sets (rollup(a,b),a);
+
+-- HAVING queries
+select ten, sum(distinct four) from onek a
+group by grouping sets((ten,four),(ten))
+having exists (select 1 from onek b where sum(distinct a.four) = b.four);
+
+-- FILTER queries
+select ten, sum(distinct four) filter (where four::text ~ '123') from onek a
+group by rollup(ten);
+
+-- More rescan tests
+select * from (values (1),(2)) v(a) left join lateral (select v.a, four, ten, count(*) from onek group by cube(four,ten)) s on true order by v.a,four,ten;
+select array(select row(v.a,s1.*) from (select two,four, count(*) from onek group by cube(two,four) order by two,four) s1) from (values (1),(2)) v(a);
+
+-- end
#112Svenne Krap
svenne@krap.dk
In reply to: Andrew Gierth (#111)
Re: WIP Patch for GROUPING SETS phase 1

The following review has been posted through the commitfest application:
make installcheck-world: tested, failed
Implements feature: tested, passed
Spec compliant: not tested
Documentation: tested, passed

This is a midway review, a later will complete it.

The patch applies against 8d1f239003d0245dda636dfa6cf0add13bee69d6 and builds correctly. Make installcheck-world fails, but the failure seems to be somewhere totally unrelated (PL/Tcl)...

The documentation is very well-written and the patch implements the documented syntax.

I still need to check against the standard, and I will run it against a non-trivial production load... hopefully I will finish up my review shortly after the weekend...

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#113Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Svenne Krap (#112)
Re: WIP Patch for GROUPING SETS phase 1

"Svenne" == Svenne Krap <svenne@krap.dk> writes:

Svenne> I still need to check against the standard and I will run it
Svenne> against a non-trivival production load... hopefully I will
Svenne> finish up my review shortly after the weekend...

Thanks for the review so far; any progress? I'm quite interested in
collecting samples of realistic grouping sets queries and their
performance, for use in possible further optimization work. (I don't
need full data or anything like that, just "this query ran in x seconds
on N million rows, which is fast enough/not fast enough/too slow to be
any use")

Let me know if there's anything you need...

--
Andrew (irc:RhodiumToad)


#114Svenne Krap
svenne.lists@krap.dk
In reply to: Svenne Krap (#112)
Re: WIP Patch for GROUPING SETS phase 1

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

On 18-03-2015 17:18, Svenne Krap wrote:

I still need to check against the standard, and I will run it against a
non-trivial production load... hopefully I will finish up my review
shortly after the weekend...

I am still on it, but a little delayed. I hope to get it done this weekend.

Svenne


#115Svenne Krap
svenne@krap.dk
In reply to: Svenne Krap (#114)
Re: WIP Patch for GROUPING SETS phase 1

The following review has been posted through the commitfest application:
make installcheck-world: tested, failed
Implements feature: tested, passed
Spec compliant: not tested
Documentation: tested, passed

Hi,

I have (finally) found time to review this.

The syntax is per spec as far as I can see, and the queries I have tested have all produced the correct output.

The documentation looks good and is clear.

I think it is spec compliant, but I am not familiar enough with the spec to be sure. Also, I have not understood the function of the <set quantifier> (DISTINCT, ALL) part of the GROUP BY clause (and hence have not tested it), so I haven't marked the spec-compliant part.

The installcheck-world run fails, but in src/pl/tcl/results/pltcl_queries.out (a sorting problem, judging by the diff), which should be unrelated to GSP. I don't know enough about the check to tell whether it had already run the GSP tests by that point.

I have also been running a few tests on some real data. This is run on my laptop with 32 GB of memory and a fast SSD.

The first dataset is a join between a 472 MB data table (4.3 M rows) and a tiny multi-column lookup table. I am returning a count(*).
Here the data is hierarchical, so CUBE does not make sense. GROUPING SETS and ROLLUP both work fine, and if work_mem is large enough the GS query slightly beats the handwritten "union all" equivalent (runtimes of 7.6 vs. 7.7 seconds). With the default work_mem of 4 MB, the union-all equivalent (UAE) beats the GS query almost 2:1 due to disk spill (14.3 s (GS) vs. 8.2 s (UAE)).

The other query is on the same data table as before, but with three "columns" (two calculated and one natural) for a cube. I am returning a count(*).
The first column is "extract year from date column".
The second column is "divide a value by something and truncate" (i.e. make buckets).
The third column is a literal integer column.
Here the GS version is slightly slower than the UAE version (17.5 s vs. 14.2 s). Nothing obvious about why in the explain (analyze, buffers, costs, timing).

I have the explains, but as the dataset is semi-private and I have no easy way to edit the names out of it, I will send them on request (non-disclosure by the recipient is of course a must) rather than post them on the list.

I think the feature is ready to be committed, but I am unsure whether I am qualified to gauge that :)

/Svenne

The new status of this patch is: Ready for Committer


#116Svenne Krap
svenne.lists@krap.dk
In reply to: Svenne Krap (#115)
Re: WIP Patch for GROUPING SETS phase 1

Oh, and I built it on top of f92fc4c95ddcc25978354a8248d3df22269201bc

On 20-04-2015 10:36, Svenne Krap wrote:



#117Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Svenne Krap (#115)
Re: WIP Patch for GROUPING SETS phase 1

"Svenne" == Svenne Krap <svenne@krap.dk> writes:

Svenne> I have the explains,

Can you post the explain analyze outputs?

If need be, you can anonymize the table and column names and any
identifiers by using the anonymization option of explain.depesz.com, but
please only do that if you actually need to.

--
Andrew (irc:RhodiumToad)


#118Andres Freund
andres@anarazel.de
In reply to: Andrew Gierth (#111)
Re: Final Patch for GROUPING SETS

Hi,

This is not a real review. I'm just scanning through the patch, without
reading the thread, to understand if I see something "worthy" of
controversy. While scanning I might have a couple observations or
questions.

On 2015-03-13 15:46:15 +0000, Andrew Gierth wrote:

+ *	  A list of grouping sets which is structurally equivalent to a ROLLUP
+ *	  clause (e.g. (a,b,c), (a,b), (a)) can be processed in a single pass over
+ *	  ordered data.  We do this by keeping a separate set of transition values
+ *	  for each grouping set being concurrently processed; for each input tuple
+ *	  we update them all, and on group boundaries we reset some initial subset
+ *	  of the states (the list of grouping sets is ordered from most specific to
+ *	  least specific).  One AGG_SORTED node thus handles any number of grouping
+ *	  sets as long as they share a sort order.

Found "initial subset" not very clear, even if I probably guessed the
right meaning.

+ *	  To handle multiple grouping sets that _don't_ share a sort order, we use
+ *	  a different strategy.  An AGG_CHAINED node receives rows in sorted order
+ *	  and returns them unchanged, but computes transition values for its own
+ *	  list of grouping sets.  At group boundaries, rather than returning the
+ *	  aggregated row (which is incompatible with the input rows), it writes it
+ *	  to a side-channel in the form of a tuplestore.  Thus, a number of
+ *	  AGG_CHAINED nodes are associated with a single AGG_SORTED node (the
+ *	  "chain head"), which creates the side channel and, when it has returned
+ *	  all of its own data, returns the tuples from the tuplestore to its own
+ *	  caller.

This paragraph deserves to be expanded imo.
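
As a miniature model of that paragraph's mechanism, here is a Python sketch of my own (names are mine, not the patch's; one grouping set per chained node, and a plain list standing in for the tuplestore). It shows rows passing through each chained node unchanged while finished groups accumulate on the side channel:

```python
def chained_agg(source, group_cols, side_channel):
    """Toy AGG_CHAINED node: yields every input row unchanged, while
    counting rows per group and appending one (group, count) pair to
    the shared side channel at each group boundary."""
    prev, count = None, 0
    for row in source:
        if prev is not None and any(prev[c] != row[c] for c in group_cols):
            side_channel.append(({c: prev[c] for c in group_cols}, count))
            count = 0
        count += 1
        prev = row
        yield row                       # pass-through, unchanged
    if prev is not None:                # flush the final group
        side_channel.append(({c: prev[c] for c in group_cols}, count))

def chain_head(rows, own_cols, chained_cols_list):
    """Toy chain head: stacks one sort plus one chained node per extra
    grouping set, aggregates its own set last, then returns its own
    results followed by everything written to the side channel."""
    store = []                          # stands in for the tuplestore
    stream = iter(rows)
    for cols in chained_cols_list:
        # each chained node sits on top of its own sort of the stream
        stream = chained_agg(sorted(stream, key=lambda r: [r[c] for c in cols]),
                             cols, store)
    own = []
    last = chained_agg(sorted(stream, key=lambda r: [r[c] for c in own_cols]),
                       own_cols, own)
    for _ in last:                      # pull every row through the chain
        pass
    return own + store
```

For GROUPING SETS (a, b) over three rows, the head computes the (a) groups itself and drains the (b) groups from the side channel afterwards.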

+ *	  In order to avoid excess memory consumption from a chain of alternating
+ *	  Sort and AGG_CHAINED nodes, we reset each child Sort node preemptively,
+ *	  allowing us to cap the memory usage for all the sorts in the chain at
+ *	  twice the usage for a single node.

What does resetting 'preemptively' mean?

+ *	  From the perspective of aggregate transition and final functions, the
+ *	  only issue regarding grouping sets is this: a single call site (flinfo)
+ *	  of an aggregate function may be used for updating several different
+ *	  transition values in turn. So the function must not cache in the flinfo
+ *	  anything which logically belongs as part of the transition value (most
+ *	  importantly, the memory context in which the transition value exists).
+ *	  The support API functions (AggCheckCallContext, AggRegisterCallback) are
+ *	  sensitive to the grouping set for which the aggregate function is
+ *	  currently being called.

Hm. I've seen a bunch of aggregates do this.

+ * TODO: AGG_HASHED doesn't support multiple grouping sets yet.

Are you intending to resolve this before an eventual commit? Possibly
after the 'structural' issues are resolved? Or do you think this can
safely be put off for another release?

@@ -534,11 +603,13 @@ static void
advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
{
int			aggno;
+	int         setno = 0;
+	int         numGroupingSets = Max(aggstate->numsets, 1);
+	int         numAggs = aggstate->numaggs;
-	for (aggno = 0; aggno < aggstate->numaggs; aggno++)
+	for (aggno = 0; aggno < numAggs; aggno++)
{
AggStatePerAgg peraggstate = &aggstate->peragg[aggno];
-		AggStatePerGroup pergroupstate = &pergroup[aggno];
ExprState  *filter = peraggstate->aggrefstate->aggfilter;
int			numTransInputs = peraggstate->numTransInputs;
int			i;
@@ -582,13 +653,16 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
continue;
}
-			/* OK, put the tuple into the tuplesort object */
-			if (peraggstate->numInputs == 1)
-				tuplesort_putdatum(peraggstate->sortstate,
-								   slot->tts_values[0],
-								   slot->tts_isnull[0]);
-			else
-				tuplesort_puttupleslot(peraggstate->sortstate, slot);
+			for (setno = 0; setno < numGroupingSets; setno++)
+			{
+				/* OK, put the tuple into the tuplesort object */
+				if (peraggstate->numInputs == 1)
+					tuplesort_putdatum(peraggstate->sortstates[setno],
+									   slot->tts_values[0],
+									   slot->tts_isnull[0]);
+				else
+					tuplesort_puttupleslot(peraggstate->sortstates[setno], slot);
+			}
}

Hm. So a normal GROUP BY is just a subcase of grouping sets. Seems to
make sense, but worthwhile to mention somewhere in the intro.

+	if (!node->agg_done)
+	{
+		/* Dispatch based on strategy */
+		switch (((Agg *) node->ss.ps.plan)->aggstrategy)
+		{
+			case AGG_HASHED:
+				if (!node->table_filled)
+					agg_fill_hash_table(node);
+				result = agg_retrieve_hash_table(node);
+				break;
+			case AGG_CHAINED:
+				result = agg_retrieve_chained(node);
+				break;
+			default:
+				result = agg_retrieve_direct(node);
+				break;
+		}
+
+		if (!TupIsNull(result))
+			return result;
+	}

Maybe it's just me, but I get twitchy if I see a default being used like
this. I'd much, much rather see the two remaining AGG_* types and get a
warning from the compiler if a new one is added.

+		/*-
+		 * If a subgroup for the current grouping set is present, project it.
+		 *
+		 * We have a new group if:
+		 *  - we're out of input but haven't projected all grouping sets
+		 *    (checked above)
+		 * OR
+		 *    - we already projected a row that wasn't from the last grouping
+		 *      set
+		 *    AND
+		 *    - the next grouping set has at least one grouping column (since
+		 *      empty grouping sets project only once input is exhausted)
+		 *    AND
+		 *    - the previous and pending rows differ on the grouping columns
+		 *      of the next grouping set
+		 */
+		if (aggstate->input_done
+			|| (node->aggstrategy == AGG_SORTED
+				&& aggstate->projected_set != -1
+				&& aggstate->projected_set < (numGroupingSets - 1)
+				&& nextSetSize > 0
+				&& !execTuplesMatch(econtext->ecxt_outertuple,
+									tmpcontext->ecxt_outertuple,
+									nextSetSize,
+									node->grpColIdx,
+									aggstate->eqfunctions,
+									tmpcontext->ecxt_per_tuple_memory)))

I'll bet this will look absolutely horrid after a pgindent run :/

+/*
+ * We want to produce the absolute minimum possible number of lists here to
+ * avoid excess sorts. Fortunately, there is an algorithm for this; the problem
+ * of finding the minimal partition of a poset into chains (which is what we
+ * need, taking the list of grouping sets as a poset ordered by set inclusion)
+ * can be mapped to the problem of finding the maximum cardinality matching on
+ * a bipartite graph, which is solvable in polynomial time with a worst case of
+ * no worse than O(n^2.5) and usually much better. Since our N is at most 4096,
+ * we don't need to consider fallbacks to heuristic or approximate methods.
+ * (Planning time for a 12-d cube is under half a second on my modest system
+ * even with optimization off and assertions on.)

I think using the long form of poset once would be a good thing.

+ * We use the Hopcroft-Karp algorithm for the graph matching; it seems to work
+ * well enough for our purposes.  This implementation is based on pseudocode
+ * found at:
+ *
+ * http://en.wikipedia.org/w/index.php?title=Hopcroft%E2%80%93Karp_algorithm&oldid=593898016
+ *
+ * This implementation uses the same indices for elements of U and V (the two
+ * halves of the graph) because in our case they are always the same size, and
+ * we always know whether an index represents a u or a v. Index 0 is reserved
+ * for the NIL node.
+ */
+
+struct hk_state
+{
+	int			graph_size;		/* size of half the graph plus NIL node */
+	int			matching;
+	short	  **adjacency;		/* adjacency[u] = [n, v1,v2,v3,...,vn] */
+	short	   *pair_uv;		/* pair_uv[u] -> v */
+	short	   *pair_vu;		/* pair_vu[v] -> u */
+	float	   *distance;		/* distance[u], float so we can have +inf */
+	short	   *queue;			/* queue storage for breadth search */
+};

I wonder if it'd not be better to put this in a separate file. Most
readers just won't care about this bit and the file is long enough.
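
To make the quoted chain-decomposition idea concrete outside the executor, here is a small Python sketch of my own (names and structure are mine, not the patch's). It uses the simpler DFS-based augmenting-path matching (Kuhn's algorithm) instead of Hopcroft-Karp; that yields the same maximum matching, and hence the same minimum chain count, just with a worse worst-case bound:

```python
def min_chain_partition(sets):
    """Partition a list of grouping sets (frozensets) into the minimum
    number of chains under set inclusion; each chain can then be handled
    by a single pass over suitably sorted input.  By Dilworth/Koenig,
    the minimum number of chains is n minus the maximum matching size."""
    n = len(sets)
    # Edge u -> v when u is a strict superset of v, i.e. u may
    # immediately precede v in a chain.
    adj = [[v for v in range(n) if sets[u] > sets[v]] for u in range(n)]
    pair_vu = [-1] * n              # pair_vu[v] = matched predecessor of v

    def augment(u, seen):
        """Kuhn's DFS: try to find an augmenting path starting at u."""
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if pair_vu[v] == -1 or augment(pair_vu[v], seen):
                    pair_vu[v] = u
                    return True
        return False

    for u in range(n):
        augment(u, set())

    succ = {u: v for v, u in enumerate(pair_vu) if u != -1}
    starts = [v for v in range(n) if pair_vu[v] == -1]  # no predecessor
    chains = []
    for u in starts:
        chain = [sets[u]]
        while u in succ:
            u = succ[u]
            chain.append(sets[u])
        chains.append(chain)
    return chains
```

For CUBE(a,b), i.e. the sets {a,b}, {a}, {b} and {}, this produces two chains such as [{a,b}, {a}, {}] and [{b}]: one sorted pass on (a,b) plus one on (b).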

-	if (!parse->hasAggs && !parse->groupClause && !root->hasHavingQual &&
+	if (!parse->hasAggs && !parse->groupClause && !parse->groupingSets && !root->hasHavingQual &&
!parse->hasWindowFuncs)
{

Looks like this is well above 80 columns.

%nonassoc	UNBOUNDED		/* ideally should have same precedence as IDENT */
-%nonassoc	IDENT NULL_P PARTITION RANGE ROWS PRECEDING FOLLOWING
+%nonassoc	IDENT NULL_P PARTITION RANGE ROWS PRECEDING FOLLOWING CUBE ROLLUP
%left		Op OPERATOR		/* multi-character ops and user-defined operators */
+/*
+ * These hacks rely on setting precedence of CUBE and ROLLUP below that of '(',
+ * so that they shift in these rules rather than reducing the conflicting
+ * unreserved_keyword rule.
+ */
+
+rollup_clause:
+			ROLLUP '(' expr_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_ROLLUP, $3, @1);
+				}
+		;
+
+cube_clause:
+			CUBE '(' expr_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_CUBE, $3, @1);
+				}
+		;
+
+grouping_sets_clause:
+			GROUPING SETS '(' group_by_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_SETS, $4, @1);
+				}
+		;
+

This is somewhat icky. I've not really thought about this very much, but
it's imo something to pay attention to.

So, having quickly scanned through the patch, do I understand correctly
that the contentious problems are:

* Arguably this converts the execution *tree* into a DAG. Tom seems to
be rather uncomfortable with that. I am wondering whether this really
is a big deal - essentially this only happens in a relatively
'isolated' part of the tree right? I.e. if those chained together
nodes were considered one node, there would not be any loops?
Additionally, the way parametrized scans work already essentially
"violates" the tree paradigm somewhat.
There still might be better representations, about which I want to
think, don't get me wrong. I'm "just" not seing this as a fundamental
problem.
* The whole grammar/keyword issue. To me this seems to be a problem of
swallowing one out of several similarly coloured poisonous
pills, which we can't really avoid; i.e. it isn't really this
patch's fault that the choice has to be made.

Are those the two bigger controversial areas? Or are there others in
your respective views?

Greetings,

Andres Freund


#119Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Andres Freund (#118)
Re: Final Patch for GROUPING SETS

"Andres" == Andres Freund <andres@anarazel.de> writes:

Andres> This is not a real review. I'm just scanning through the
Andres> patch, without reading the thread, to understand if I see
Andres> something "worthy" of controversy. While scanning I might have
Andres> a couple observations or questions.

+ *	  A list of grouping sets which is structurally equivalent to a ROLLUP
+ *	  clause (e.g. (a,b,c), (a,b), (a)) can be processed in a single pass over
+ *	  ordered data.  We do this by keeping a separate set of transition values
+ *	  for each grouping set being concurrently processed; for each input tuple
+ *	  we update them all, and on group boundaries we reset some initial subset
+ *	  of the states (the list of grouping sets is ordered from most specific to
+ *	  least specific).  One AGG_SORTED node thus handles any number of grouping
+ *	  sets as long as they share a sort order.

Andres> Found "initial subset" not very clear, even if I probably
Andres> guessed the right meaning.

How about:

* [...], and on group boundaries we reset those states
* (starting at the front of the list) whose grouping values have
* changed (the list of grouping sets is ordered from most specific to
* least specific). One AGG_SORTED node thus handles any number [...]
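
To illustrate, here is a toy Python model of that single-pass scheme (mine, not the patch's C code): one transition state per grouping set, here just a count, with the leading, most specific states emitted and reset whenever their grouping columns change:

```python
def rollup_counts(rows, grouping_sets):
    """Single pass over rows already sorted on the grouping columns,
    keeping one transition state (a count) per grouping set.
    grouping_sets is ordered most specific to least specific and is
    assumed to end with the empty set, e.g. [('a','b'), ('a',), ()]."""
    results = []
    counts = [0] * len(grouping_sets)
    prev = None
    for row in rows:
        if prev is not None:
            # Index of the first set whose columns are unchanged; all
            # more specific sets before it have hit a group boundary.
            stable = next(i for i, cols in enumerate(grouping_sets)
                          if all(prev[c] == row[c] for c in cols))
            for i in range(stable):
                results.append(({c: prev[c] for c in grouping_sets[i]},
                                counts[i]))
                counts[i] = 0           # reset state for the new group
        for i in range(len(counts)):
            counts[i] += 1              # advance every tracked state
        prev = row
    if prev is not None:                # emit the final group of every set
        for i, cols in enumerate(grouping_sets):
            results.append(({c: prev[c] for c in cols}, counts[i]))
    return results
```

With input rows (a,b) = (1,1), (1,1), (1,2), (2,2) and ROLLUP(a,b), this emits ({a:1,b:1}, 2), ({a:1,b:2}, 1), ({a:1}, 3), and so on down to the grand total ({}, 4).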

+ *	  To handle multiple grouping sets that _don't_ share a sort order, we use
+ *	  a different strategy.  An AGG_CHAINED node receives rows in sorted order
+ *	  and returns them unchanged, but computes transition values for its own
+ *	  list of grouping sets.  At group boundaries, rather than returning the
+ *	  aggregated row (which is incompatible with the input rows), it writes it
+ *	  to a side-channel in the form of a tuplestore.  Thus, a number of
+ *	  AGG_CHAINED nodes are associated with a single AGG_SORTED node (the
+ *	  "chain head"), which creates the side channel and, when it has returned
+ *	  all of its own data, returns the tuples from the tuplestore to its own
+ *	  caller.

Andres> This paragraph deserves to be expanded imo.

OK, but what in particular needs clarifying?

+ *	  In order to avoid excess memory consumption from a chain of alternating
+ *	  Sort and AGG_CHAINED nodes, we reset each child Sort node preemptively,
+ *	  allowing us to cap the memory usage for all the sorts in the chain at
+ *	  twice the usage for a single node.

Andres> What does reseting 'preemtively' mean?

Plan nodes are normally not reset (in the sense of calling ExecReScan)
just because they have finished, but only just before a subsequent new
scan is started. Doing the rescan call as soon as all the sorted output
has been read means we discard the data from each sort node as soon as
it has been transferred to the next one.

There is a more specific comment in agg_retrieve_chained where this
actually happens.

+ *	  From the perspective of aggregate transition and final functions, the
+ *	  only issue regarding grouping sets is this: a single call site (flinfo)
+ *	  of an aggregate function may be used for updating several different
+ *	  transition values in turn. So the function must not cache in the flinfo
+ *	  anything which logically belongs as part of the transition value (most
+ *	  importantly, the memory context in which the transition value exists).
+ *	  The support API functions (AggCheckCallContext, AggRegisterCallback) are
+ *	  sensitive to the grouping set for which the aggregate function is
+ *	  currently being called.

Andres> Hm. I've seen a bunch of aggreates do this.

Such as? This was discussed about a year ago in the context of WITHIN
GROUP:

/messages/by-id/87r424i24w.fsf@news-spur.riddles.org.uk

+ * TODO: AGG_HASHED doesn't support multiple grouping sets yet.

Andres> Are you intending to resolve this before an eventual commit?

Original plan was to tackle AGG_HASHED after a working implementation
was committed; we figured that we'd have two commitfests to get the
basics right, and then have a chance to get AGG_HASHED done for the
third one. Also, there was talk of other people working on hashagg
memory usage issues, and we didn't want to conflict with that.

Naturally the extended delays rather put paid to that plan. Going ahead
and writing code for AGG_HASHED anyway wasn't really an option, since
with the overall structural questions unresolved there was too much
chance of it being wasted effort.

Andres> Possibly after the 'structural' issues are resolved? Or do you
Andres> think this can safely be put off for another release?

I think the feature is useful even without AGG_HASHED, even though that
means it can sometimes be beaten on performance by using UNION ALL of
many separate GROUP BYs; but I'd defer to the opinions of others on that
point.

Andres> Maybe it's just me, but I get twitchy if I see a default being
Andres> used like this. I'd much, much rather see the two remaining
Andres> AGG_* types and get a warning from the compiler if a new one is
Andres> added.

Meh. It also needs a bogus initialization of "result" to avoid compiler
warnings if done that way.

Andres> I'll bet this will look absolutely horrid after a pgindent run :/

pgindent doesn't touch it, I just checked.

[making CUBE and ROLLUP work without being reserved]

Andres> This is somewhat icky. I've not really thought about this very
Andres> much, but it's imo something to pay attention to.

This one was discussed in December or so - all the arguments were
thrashed out then.

The bottom line is that reserving "cube" is really painful due to
contrib/cube, and of the possible workarounds, using precedence rules to
resolve it in the grammar is already being used for some other
constructs.

Andres> So, having quickly scanned through the patch, do I understand
Andres> correctly that the contentious problems are:

Andres> * Arguably this converts the execution *tree* into a DAG. Tom
Andres> seems to be rather uncomfortable with that. I am wondering
Andres> whether this really is a big deal - essentially this only
Andres> happens in a relatively 'isolated' part of the tree right?
Andres> I.e. if those chained together nodes were considered one node,
Andres> there would not be any loops? Additionally, the way
Andres> parameterized scans work already essentially "violates" the
Andres> tree paradigm somewhat.

The major downsides as I see them with the current approach are:

1. It makes plans (and hence explain output) nest very deeply if you
have complex grouping sets (especially cubes with high dimensionality).

This can make explain very slow in the most extreme cases (explain seems
to perform about O(N^3) in the nesting depth of the plan, I don't know
why). But it's important not to exaggerate this effect: if anyone
actually has a real-world example of a 12-d cube I'll eat the headgear
of their choice, and for an 8-d cube the explain overhead is only on the
order of ~40ms. (A 12-d cube generates more than 35 megs of explain
output, nested about 1850 levels deep, and takes about 45 seconds to
explain, though only about 200ms to plan.)

In more realistic cases, the nesting isn't too bad (4-d cube gives a
12-deep plan: 6 sorts and 6 agg nodes), but it's still somewhat less
readable than a union-based plan would be. But honestly I think that
explain output aesthetics should not be a major determining factor for
implementations.
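For what it's worth, those depth numbers are consistent with a symmetric-chain decomposition of the cube's subset lattice; here's a back-of-the-envelope sketch (my own arithmetic, not code from the patch):

```python
from math import comb

def cube_grouping_sets(d):
    """A d-dimensional CUBE produces one grouping set per subset of columns."""
    return 2 ** d

def min_sort_chains(d):
    # By Dilworth's theorem, the minimum number of inclusion chains
    # covering the 2^d subsets is the width of the subset lattice: the
    # central binomial coefficient C(d, d//2).  Each chain shares one
    # sort order, so it needs one Sort plus one Agg node.
    return comb(d, d // 2)

def plan_depth(d):
    # one Sort node plus one Agg node per chain
    return 2 * min_sort_chains(d)
```

C(4,2) = 6 chains gives the 12-deep plan quoted above, and C(12,6) = 924 chains times two nodes each is the ~1850-level nesting.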

2. A union-based approach would have a chance of including AGG_HASHED
support without any significant code changes, just by using one HashAgg
node per qualifying grouping set. However, this would be potentially
significantly slower than teaching HashAgg to do multiple grouping sets,
and memory usage would be an issue.

(The current approach is specifically intended to allow the use of an
AGG_HASHED node as the head of the chain, once it has been extended to
support multiple grouping sets.)

Andres> There still might be better representations, about which I want
Andres> to think, don't get me wrong. I'm "just" not seing this as a
Andres> fundamental problem.

The obvious alternative is this:

-> CTE x
-> entire input subplan here
-> Append
-> GroupAggregate
-> Sort
-> CTE Scan x
-> GroupAggregate
-> Sort
-> CTE Scan x
-> HashAggregate
-> CTE Scan x
[...]

which was basically what we expected to do originally. But all of the
existing code to deal with CTE / CTEScan is based on the assumption that
each CTE has a rangetable entry before planning starts, and it is
completely non-obvious how to arrange to generate such CTEs on the fly
while planning. Tom offered in December to look into that aspect for
us, and of course we've heard nothing about it since.
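To make the semantics of that Append-over-CTE-Scan shape concrete, here is an illustrative Python sketch (not PostgreSQL code); each grouping set gets its own pass over the shared input, just as each CTE Scan branch would:

```python
from collections import defaultdict

rows = [("x", 1, 10), ("x", 2, 20), ("y", 1, 30)]  # (a, b, v)
grouping_sets = [(0, 1), (0,), ()]                 # (a, b), (a), ()

def group_by(rows, cols):
    """One GROUP BY branch: sum(v) per distinct key over the shared
    input, like a single GroupAggregate-over-CTE-Scan arm of the Append."""
    acc = defaultdict(int)
    for r in rows:
        acc[tuple(r[c] for c in cols)] += r[2]
    return dict(acc)

# The Append re-reads the CTE once per grouping set:
results = {cols: group_by(rows, cols) for cols in grouping_sets}
```

Note that the input is scanned len(grouping_sets) times here, whereas the chaining approach reads it once.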

Andres> * The whole grammar/keyword issue. To me this seems to be a
Andres> problem of swallowing one out of several similarly coloured
Andres> poisonous pills.

Right. Which is why this issue was thrashed out months ago and the
current approach decided on. I consider this question closed.

Andres> Are those the two bigger controversial areas? Or are there
Andres> others in your respective views?

Another controversial item was the introduction of GroupedVar. The need
for this can be avoided by explicitly setting to NULL the relevant
columns of the representative group tuple when evaluating result rows,
but (a) I don't think that's an especially clean approach (though I'm
not pushing back very hard on it) and (b) the logic needed in its
absence is different between the current chaining implementation and a
possible union implementation, so I decided against making any changes
on wasted-effort grounds.
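The "explicitly setting to NULL" alternative can be pictured with a tiny Python sketch (hypothetical helper names, mine rather than the patch's):

```python
def result_row(group_tuple, grouped_cols, set_cols):
    """Emit a result row for one grouping set by NULLing (None-ing) the
    grouped columns that are not part of the current set: the
    alternative to a GroupedVar-style indirection."""
    return tuple(
        v if (col not in grouped_cols or col in set_cols) else None
        for col, v in enumerate(group_tuple)
    )

# representative tuple for group (a='x', b=1), plus an aggregate value:
rep = ("x", 1, 30)
```

result_row(rep, {0, 1}, {0}) yields ("x", None, 30): column b is nulled because it isn't part of the current grouping set, while the aggregate column passes through.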

--
Andrew (irc:RhodiumToad)

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#120Noah Misch
noah@leadboat.com
In reply to: Andrew Gierth (#119)
Re: Final Patch for GROUPING SETS

On Thu, Apr 30, 2015 at 05:35:26AM +0100, Andrew Gierth wrote:

"Andres" == Andres Freund <andres@anarazel.de> writes:

+ * TODO: AGG_HASHED doesn't support multiple grouping sets yet.

Andres> Are you intending to resolve this before an eventual commit?

Original plan was to tackle AGG_HASHED after a working implementation
was committed;

+1 for that plan.

Andres> Possibly after the 'structural' issues are resolved? Or do you
Andres> think this can safely be put off for another release?

I think the feature is useful even without AGG_HASHED, even though that
means it can sometimes be beaten on performance by using UNION ALL of
many separate GROUP BYs; but I'd defer to the opinions of others on that
point.

It will be a tough call, and PostgreSQL has gone each way on some recent
feature. I recommend considering both GroupAggregate and HashAggregate in all
design discussion but continuing to work toward a first commit implementing
GroupAggregate alone. With that in the tree, we'll be in a better position to
decide whether to release a feature paused at that stage in its development.
Critical facts are uncertain, so a discussion today would be unproductive.

Andres> So, having quickly scanned through the patch, do I understand
Andres> correctly that the contentious problems are:

Andres> * Arguably this converts the execution *tree* into a DAG. Tom
Andres> seems to be rather uncomfortable with that. I am wondering
Andres> whether this really is a big deal - essentially this only
Andres> happens in a relatively 'isolated' part of the tree right?
Andres> I.e. if those chained together nodes were considered one node,
Andres> there would not be any loops? Additionally, the way
Andres> parameterized scans work already essentially "violates" the
Andres> tree paradigm somewhat.

I agree with your assessment. That has been contentious.

The major downsides as I see them with the current approach are:

1. It makes plans (and hence explain output) nest very deeply if you
have complex grouping sets (especially cubes with high dimensionality).

This can make explain very slow in the most extreme cases

I'm not worried about that. If anything, the response is to modify explain to
more-quickly/compactly present affected plan trees.

2. A union-based approach would have a chance of including AGG_HASHED
support without any significant code changes,

-> CTE x
-> entire input subplan here
-> Append
-> GroupAggregate
-> Sort
-> CTE Scan x
-> GroupAggregate
-> Sort
-> CTE Scan x
-> HashAggregate
-> CTE Scan x
[...]

This uses 50-67% more I/O than the current strategy, which makes it a dead end
from my standpoint. Details:
/messages/by-id/20141221210005.GA1864976@tornado.leadboat.com

Andres> Are those the two bigger controversial areas? Or are there
Andres> others in your respective views?

Another controversial item was the introduction of GroupedVar.

I know of no additional controversies to add to this list.

Thanks,
nm


#121Andres Freund
andres@anarazel.de
In reply to: Andrew Gierth (#119)
Re: Final Patch for GROUPING SETS

On 2015-04-30 05:35:26 +0100, Andrew Gierth wrote:

"Andres" == Andres Freund <andres@anarazel.de> writes:

Andres> This is not a real review. I'm just scanning through the
Andres> patch, without reading the thread, to understand if I see
Andres> something "worthy" of controversy. While scanning I might have
Andres> a couple observations or questions.

+ *	  A list of grouping sets which is structurally equivalent to a ROLLUP
+ *	  clause (e.g. (a,b,c), (a,b), (a)) can be processed in a single pass over
+ *	  ordered data.  We do this by keeping a separate set of transition values
+ *	  for each grouping set being concurrently processed; for each input tuple
+ *	  we update them all, and on group boundaries we reset some initial subset
+ *	  of the states (the list of grouping sets is ordered from most specific to
+ *	  least specific).  One AGG_SORTED node thus handles any number of grouping
+ *	  sets as long as they share a sort order.

Andres> Found "initial subset" not very clear, even if I probably
Andres> guessed the right meaning.

How about:

* [...], and on group boundaries we reset those states
* (starting at the front of the list) whose grouping values have
* changed (the list of grouping sets is ordered from most specific to
* least specific). One AGG_SORTED node thus handles any number [...]

sounds good.
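As an illustration of that comment (not patch code; a running sum stands in for the per-aggregate transition state):

```python
def sorted_rollup(rows, sets):
    """One pass over rows sorted on the full key.  `sets` is a list of
    column-index tuples ordered most specific first, each set a subset
    of the previous one (a ROLLUP chain).  Keeps one running sum
    ("transition value") per grouping set; on a group boundary, emits
    and resets exactly the states whose grouping values changed."""
    out = []
    state = [None] * len(sets)   # (key, running_sum) per grouping set
    for a_val, b_val, v in rows:
        row = (a_val, b_val)
        for i, cols in enumerate(sets):
            key = tuple(row[c] for c in cols)
            if state[i] is not None and state[i][0] != key:
                out.append((cols, state[i][0], state[i][1]))
                state[i] = None
            if state[i] is None:
                state[i] = (key, 0)
            state[i] = (key, state[i][1] + v)
    for i, cols in enumerate(sets):          # flush the final groups
        if state[i] is not None:
            out.append((cols, state[i][0], state[i][1]))
    return out
```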

+ *	  To handle multiple grouping sets that _don't_ share a sort order, we use
+ *	  a different strategy.  An AGG_CHAINED node receives rows in sorted order
+ *	  and returns them unchanged, but computes transition values for its own
+ *	  list of grouping sets.  At group boundaries, rather than returning the
+ *	  aggregated row (which is incompatible with the input rows), it writes it
+ *	  to a side-channel in the form of a tuplestore.  Thus, a number of
+ *	  AGG_CHAINED nodes are associated with a single AGG_SORTED node (the
+ *	  "chain head"), which creates the side channel and, when it has returned
+ *	  all of its own data, returns the tuples from the tuplestore to its own
+ *	  caller.

Andres> This paragraph deserves to be expanded imo.

OK, but what in particular needs clarifying?

I'm not sure ;). I obviously was a bit tired...
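The key point in that paragraph is that a chain node is a pass-through for its caller but an aggregator on the side. A toy Python model (illustrative only; real chain nodes sit above their own Sort node, so here the input is assumed already grouped for key_fn):

```python
def chain_node(source, key_fn, side_channel):
    """AGG_CHAINED stand-in: a generator that passes every input row
    through unchanged, while summing the third field per key_fn group
    and appending finished groups to side_channel (the "tuplestore")."""
    cur = None  # (key, running_sum)
    for row in source:
        k = key_fn(row)
        if cur is not None and cur[0] != k:
            side_channel.append(cur)   # group boundary: stash result
            cur = None
        cur = (k, (cur[1] if cur else 0) + row[2])
        yield row                      # caller sees unaggregated rows
    if cur is not None:
        side_channel.append(cur)
```

Draining side_channel after the pass-through consumer finishes corresponds to the chain head returning the tuplestore's contents to its own caller.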

Andres> Are you intending to resolve this before an eventual commit?

...

Andres> Possibly after the 'structural' issues are resolved? Or do you
Andres> think this can safely be put of for another release?

I think the feature is useful even without AGG_HASHED, even though that
means it can sometimes be beaten on performance by using UNION ALL of
many separate GROUP BYs; but I'd defer to the opinions of others on that
point.

I agree.

Andres> * Arguably this converts the execution *tree* into a DAG. Tom
Andres> seems to be rather uncomfortable with that. I am wondering
Andres> whether this really is a big deal - essentially this only
Andres> happens in a relatively 'isolated' part of the tree right?
Andres> I.e. if those chained together nodes were considered one node,
Andres> there would not be any loops? Additionally, the way
Andres> parametrized scans works already essentially "violates" the
Andres> tree paradigma somewhat.

The major downsides as I see them with the current approach are:

1. It makes plans (and hence explain output) nest very deeply if you
have complex grouping sets (especially cubes with high dimensionality).

That doesn't concern me overly much. If we feel the need to fudge the
explain output we certainly can.

2. A union-based approach would have a chance of including AGG_HASHED
support without any significant code changes, just by using one HashAgg
node per qualifying grouping set. However, this would be potentially
significantly slower than teaching HashAgg to do multiple grouping sets,
and memory usage would be an issue.

Your "however" imo pretty much disqualifies that as an argument.

The obvious alternative is this:

-> CTE x
-> entire input subplan here
-> Append
-> GroupAggregate
-> Sort
-> CTE Scan x
-> GroupAggregate
-> Sort
-> CTE Scan x
-> HashAggregate
-> CTE Scan x
[...]

which was basically what we expected to do originally. But all of the
existing code to deal with CTE / CTEScan is based on the assumption that
each CTE has a rangetable entry before planning starts, and it is
completely non-obvious how to arrange to generate such CTEs on the fly
while planning. Tom offered in December to look into that aspect for
us, and of course we've heard nothing about it since.

I find Noah's argument about this kind of structure pretty
convincing. We'd either increase the number of reads, or balloon the
amount of memory if we'd manage to find a structure that'd allow several
of the aggregates to be computed at the same time.

Looking at this some more, I do think the current structure makes sense.
I do think we could flatten this into the toplevel aggregation node, but
the increase in complexity doesn't seem to have corresponding benefits
to me.

Andres> Are those the two bigger controversial areas? Or are there
Andres> others in your respective views?

Another controversial item was the introduction of GroupedVar. The need
for this can be avoided by explicitly setting to NULL the relevant
columns of the representative group tuple when evaluating result rows,
but (a) I don't think that's an especially clean approach (though I'm
not pushing back very hard on it) and (b) the logic needed in its
absence is different between the current chaining implementation and a
possible union implementation, so I decided against making any changes
on wasted-effort grounds.

Seems like a fairly minor point to me. I very tentatively lean towards
setting the columns in the group tuple to NULL.

I've rebased the patch to
http://git.postgresql.org/gitweb/?p=users/andresfreund/postgres.git;a=summary
branch int/grouping_sets . There were some minor conflicts.

What I dislike so far:
* Minor formatting things. Just going to fix and push the ones I
dislike.
* The Hopcroft-Karp stuff not being separate
* The increased complexity of grouping_planner. It'd imo be good if some
of that could be refactored into a separate function. Specifically the
else if (parse->hasAggs || (parse->groupingSets && parse->groupClause))
block.
* I think it'd not hurt to add rule deparse check for the function in
GROUPING SETS case. I didn't see one at least.
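For context on the Hopcroft-Karp item: as I understand it, the matching is used to cover the grouping sets with as few inclusion chains as possible, since each chain can share a single sort. A minimal sketch using plain augmenting paths instead of Hopcroft-Karp proper (same answer, worse asymptotics):

```python
def min_chain_cover(sets):
    """Minimum number of inclusion chains covering `sets` (frozensets),
    via minimum path cover = n - maximum bipartite matching (the role
    the Hopcroft-Karp code plays in the patch; this toy uses Kuhn-style
    augmenting paths instead)."""
    n = len(sets)
    # edge i -> j whenever set j can follow set i in a chain
    succ = [[j for j in range(n) if sets[j] < sets[i]] for i in range(n)]
    match_to = [None] * n  # right-side vertex -> matched left vertex

    def augment(i, seen):
        for j in succ[i]:
            if j in seen:
                continue
            seen.add(j)
            if match_to[j] is None or augment(match_to[j], seen):
                match_to[j] = i
                return True
        return False

    matched = sum(augment(i, set()) for i in range(n))
    return n - matched
```

A full 2-cube needs two chains and a 3-cube three, matching the C(d, d/2) width of the subset lattice.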

Do you have some nicer demo data set you worked with during development?

FWIW, expensive explains seem to be in stack traces like:

#25 0x00000000008a0ebe in get_variable (var=0x2326d90, levelsup=0, istoplevel=0 '\000', context=0x7ffcd4b386a0)
at /home/andres/src/postgresql/src/backend/utils/adt/ruleutils.c:5813
#26 0x00000000008a2ff6 in get_rule_expr (node=0x2326d90, context=0x7ffcd4b386a0, showimplicit=1 '\001')
at /home/andres/src/postgresql/src/backend/utils/adt/ruleutils.c:6933
#27 0x00000000008a0ebe in get_variable (var=0x2326338, levelsup=0, istoplevel=0 '\000', context=0x7ffcd4b386a0)
at /home/andres/src/postgresql/src/backend/utils/adt/ruleutils.c:5813
#28 0x00000000008a2ff6 in get_rule_expr (node=0x2326338, context=0x7ffcd4b386a0, showimplicit=1 '\001')
at /home/andres/src/postgresql/src/backend/utils/adt/ruleutils.c:6933
#29 0x00000000008a0ebe in get_variable (var=0x23258b0, levelsup=0, istoplevel=0 '\000', context=0x7ffcd4b386a0)
at /home/andres/src/postgresql/src/backend/utils/adt/ruleutils.c:5813
#30 0x00000000008a2ff6 in get_rule_expr (node=0x23258b0, context=0x7ffcd4b386a0, showimplicit=1 '\001')
at /home/andres/src/postgresql/src/backend/utils/adt/ruleutils.c:6933
#31 0x00000000008a0ebe in get_variable (var=0x2324e58, levelsup=0, istoplevel=0 '\000', context=0x7ffcd4b386a0)
at /home/andres/src/postgresql/src/backend/utils/adt/ruleutils.c:5813
#32 0x00000000008a2ff6 in get_rule_expr (node=0x2324e58, context=0x7ffcd4b386a0, showimplicit=1 '\001')
at /home/andres/src/postgresql/src/backend/utils/adt/ruleutils.c:6933
#33 0x00000000008a0ebe in get_variable (var=0x2324400, levelsup=0, istoplevel=0 '\000', context=0x7ffcd4b386a0)
at /home/andres/src/postgresql/src/backend/utils/adt/ruleutils.c:5813
#34 0x00000000008a2ff6 in get_rule_expr (node=0x2324400, context=0x7ffcd4b386a0, showimplicit=1 '\001')
at /home/andres/src/postgresql/src/backend/utils/adt/ruleutils.c:6933
#35 0x00000000008a0ebe in get_variable (var=0x2323978, levelsup=0, istoplevel=0 '\000', context=0x7ffcd4b386a0)
at /home/andres/src/postgresql/src/backend/utils/adt/ruleutils.c:5813
#36 0x00000000008a2ff6 in get_rule_expr (node=0x2323978, context=0x7ffcd4b386a0, showimplicit=1 '\001')
at /home/andres/src/postgresql/src/backend/utils/adt/ruleutils.c:6933
#37 0x00000000008a0ebe in get_variable (var=0x2322f20, levelsup=0, istoplevel=0 '\000', context=0x7ffcd4b386a0)

- several thousand frames deep. Something seems off here. It's all
below show_grouping_set_keys(), which in turn is below a couple
ExplainNode()s.

I think the problem is "just" that for each variable, in each grouping
set - a very large number in a large cube - we're recursing through the
whole ChainAggregate tree, as each Var just points to a var one level
lower.

It might be worthwhile to add a little hack that deparses the variables
against the "lowest" relevant node (i.e. the one below the last chain
agg). Since they'll all have the same targetlist that ought to be safe.

I'll look some more, but dinner is calling now.

Greetings,

Andres Freund


#122Andres Freund
andres@2ndquadrant.com
In reply to: Andres Freund (#121)
Re: Final Patch for GROUPING SETS

I think the problem is "just" that for each variable, in each grouping
set - a very large number in a large cube - we're recursing through the
whole ChainAggregate tree, as each Var just points to a var one level
lower.

For small values of "very large", that is. Had a little thinko there. It's still the fault of recursing down all these levels, doing nontrivial work each time.

--
Please excuse brevity and formatting - I am writing this on my mobile phone.

Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


#123Andres Freund
andres@anarazel.de
In reply to: Andres Freund (#121)
Re: Final Patch for GROUPING SETS

On 2015-05-12 05:36:19 +0200, Andres Freund wrote:

What I dislike so far:
* Minor formatting things. Just going to fix and push the ones I
dislike.
* The Hopcroft-Karp stuff not being separate
* The increased complexity of grouping_planner. It'd imo be good if some
of that could be refactored into a separate function. Specifically the
else if (parse->hasAggs || (parse->groupingSets && parse->groupClause))
block.
* I think it'd not hurt to add rule deparse check for the function in
GROUPING SETS case. I didn't see one at least.

* The code in nodeAgg.c isn't pretty in places. Stuff like if
(node->chain_depth > 0) estate->agg_chain_head = save_chain_head;...
Feels like a good bit of cleanup would be possible there.

I think the problem is "just" that for each variable, in each grouping
set - a very large number in a large cube - we're recursing through the
whole ChainAggregate tree, as each Var just points to a var one level
lower.

It might be worthwhile to add a little hack that deparses the variables
against the "lowest" relevant node (i.e. the one below the last chain
agg). Since they'll all have the same targetlist that ought to be safe.

I've prototype hacked this, and indeed, adding a shortcut from the
intermediate chain nodes to the 'leaf chain node' cuts the explain time
from 11 to 2 seconds on some arbitrary statement. The remaining time is
the equivalent problem in the sort nodes...

I'm not terribly bothered by this. We can relatively easily fix this up
if required.

Greetings,

Andres Freund


#124Andres Freund
andres@anarazel.de
In reply to: Andres Freund (#123)
Re: Final Patch for GROUPING SETS

On 2015-05-12 20:40:49 +0200, Andres Freund wrote:

On 2015-05-12 05:36:19 +0200, Andres Freund wrote:

What I dislike so far:
* Minor formatting things. Just going to fix and push the ones I
dislike.
* The Hopcroft-Karp stuff not being separate
* The increased complexity of grouping_planner. It'd imo be good if some
of that could be refactored into a separate function. Specifically the
else if (parse->hasAggs || (parse->groupingSets && parse->groupClause))
block.
* I think it'd not hurt to add rule deparse check for the function in
GROUPING SETS case. I didn't see one at least.

* The code in nodeAgg.c isn't pretty in places. Stuff like if
(node->chain_depth > 0) estate->agg_chain_head = save_chain_head;...
Feels like a good bit of cleanup would be possible there.

In the executor I'd further like:
* to split agg_retrieve_direct into a version for grouping sets and one
without. I think that'll be a pretty clear win for clarity.
* to spin out common code between agg_retrieve_direct (in both the
functions it's split into), agg_retrieve_hashed and
agg_retrieve_chained. It should e.g. be fairly simple to spin out the
tail-end processing of an input group (finalize_aggregate loop,
ExecQual) into a separate function.

Andrew, are you going to be working on any of these?

Greetings,

Andres Freund


#125Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Andres Freund (#124)
Re: Final Patch for GROUPING SETS

"Andres" == Andres Freund <andres@anarazel.de> writes:

Andres> Andrew, are you going to be working on any of these?

As discussed on IRC, current status is:

* The increased complexity of grouping_planner. It'd imo be good if some
of that could be refactored into a separate function. Specifically the
else if (parse->hasAggs || (parse->groupingSets && parse->groupClause))
block.

done and pushed at you

* The Hopcroft-Karp stuff not being separate

done and pushed

Andres> * to split agg_retrieve_direct into a version for grouping sets
Andres> and one without. I think that'll be a pretty clear win for
Andres> clarity.

I don't see how this helps given that the grouping sets version will be
exactly as complex as the current code.

Andres> * to spin out common code between agg_retrieve_direct (in both
Andres> the functions it's split into), agg_retrieve_hashed and
Andres> agg_retrieve_chained. It should e.g. be fairly simple to spin
Andres> out the tail-end processing of an input group
Andres> (finalize_aggregate loop, ExecQual) into a separate function.

This isn't _quite_ as simple as it sounds but I'll have a go.

* The code in nodeAgg.c isn't pretty in places. Stuff like if
(node->chain_depth > 0) estate->agg_chain_head = save_chain_head;...
Feels like a good bit of cleanup would be possible there.

I'll look.

--
Andrew (irc:RhodiumToad)


#126Andres Freund
andres@anarazel.de
In reply to: Andres Freund (#121)
Re: Final Patch for GROUPING SETS

On 2015-05-12 05:36:19 +0200, Andres Freund wrote:

Another controversial item was the introduction of GroupedVar. The need
for this can be avoided by explicitly setting to NULL the relevant
columns of the representative group tuple when evaluating result rows,
but (a) I don't think that's an especially clean approach (though I'm
not pushing back very hard on it) and (b) the logic needed in its
absence is different between the current chaining implementation and a
possible union implementation, so I decided against making any changes
on wasted-effort grounds.

Seems like a fairly minor point to me. I very tentatively lean towards
setting the columns in the group tuple to NULL.

I'm pretty sure by now that I dislike the introduction of GroupedVar,
and not just tentatively. While I can see why you found its
introduction to be nicer than fiddling with the result tuple, for me the
disadvantages seem to outweigh the advantage. For one it's rather weird
to have Var nodes be changed into GroupedVar in setrefs.c. The number
of places that need to be touched even when it's a 'planned stmt only'
type of node is still pretty large.

Andrew: I'll work on changing this in a couple hours unless you're
speaking up about doing it yourself.

Greetings,

Andres Freund


#127Andres Freund
andres@anarazel.de
In reply to: Andres Freund (#126)
Re: Final Patch for GROUPING SETS

On 2015-05-13 22:51:15 +0200, Andres Freund wrote:

I'm pretty sure by now that I dislike the introduction of GroupedVar,
and not just tentatively. While I can see why you found its
introduction to be nicer than fiddling with the result tuple, for me the
disadvantages seem to outweigh the advantage. For one it's rather weird
to have Var nodes be changed into GroupedVar in setrefs.c. The number
of places that need to be touched even when it's a 'planned stmt only'
type of node is still pretty large.

Andrew: I'll work on changing this in a couple hours unless you're
speaking up about doing it yourself.

I did a stab at removing it, and it imo definitely ends up looking
better.

The code for the GroupedVar replacement isn't perfect yet, but I think
it'd be possible to clean it up by Friday. Unfortunately, after
prolonged staring out of the window, I came to the conclusion that the
current tree structure isn't right.

I still believe that the general approach of chaining vs. a union or CTE
is correct due to the efficiency arguments upthread. My problem is
that, unless I very much misunderstand something, the current
implementation can end up requiring roughly #sets * #input of additional
space for the "sidechannel tuplestore" in some bad cases. That happens
if you group by a couple clauses that each lead to a high number of
groups.

That happens because the aggregated rows produced in the chain nodes
can't be returned up-tree: the next chain (or final group aggregate)
node will expect unaggregated tuples. The current solution for that is
to move the aggregated rows produced in chain nodes into a tuplestore
that's then drained once the top-level aggregate node has done its own
job.

While that's probably not too bad in practice, since in most use cases
aggregation will be relatively effective, it does seem to be further
evidence that the side-channel tuplestore isn't the perfect idea.

What I think we should/need to do instead is do the chaining locally
inside one aggregation node. That way the aggregated tuples can be
returned directly, without the side channel. While that will require
inlining part of the code from nodeSort.c, it doesn't seem too bad.

Besides the advantage of getting rid of that tuplestore, it'll also fix
the explain performance problems (as there's no deep tree to traverse
via ruleutils.c) and get rid of the preemptive ExecReScan() used to
control memory usage. I think it might also make combined
hashing/sorting easier.

A rough sketch of what I'm thinking of is:
ExecAgg()
{
...
while (!aggstate->consumed_input)
{
outerslot = ExecProcNode(outerPlanState(aggstate));

if (TupIsNull(outerslot))
{
consumed_input = true;
break;
}

if (aggstate->doing_hashing)
{
entry = lookup_hash_entry(aggstate, outerslot);

/* Advance the aggregates */
advance_aggregates(aggstate, entry->pergroup);
}

if (aggstate->presorted_input || AGG_PLAIN)
{
/* handle aggregation, return if done with group */
}

if (aggstate->doing_chaining)
{
tuplesort_puttupleslot(tuplesortstate, slot);
}
}

if (aggstate->doing_hashing && !done)
agg_retrieve_hashed();

/*
* Feed data from one sort to the next, to compute grouping sets that
* need differing sort orders.
*/
last_sort = tuplesortstate[0];
current_sort = numGroupingSets > 0 ? tuplesortstate[1] : NULL;

while (aggstate->doing_chaining && !done_sorting)
{
tuplesort_gettupleslot(last_sort, tmpslot);

/* exhausted all tuple from a particular sort order, move on */
if (TupIsNull(tmpslot))
{
finalize_aggregates(...);
tuplesort_end(last_sort); /* maybe save stats somewhere? */
last_sort = current_sort;
current_sort = tuplesortstate[...];
if (all_sorts_done)
done_sorting = true;

return aggregated;
}

if (current_sort != NULL)
tuplesort_puttupleslot(current_sort, slot);

/* check if we crossed a boundary */
if (!execTuplesMatch(...))
{
finalize_aggregates(...);
aggstate->grp_firstTuple = ...
return aggregated;
}

advance_aggregates();
tuplesort_puttupleslot(current_sort, slot);
}
}

I think this is quite doable and seems likely to actually end up with
easier to understand code. But unfortunately it seems to be big enough
of a change to make it unlikely to be done in sufficient quality until
the freeze. I'll nonetheless work a couple hours on it tomorrow.

Andrew, is that a structure you could live with, or not?

Others, what do you think?

Greetings,

Andres Freund


#128Noah Misch
noah@leadboat.com
In reply to: Andres Freund (#127)
Re: Final Patch for GROUPING SETS

On Thu, May 14, 2015 at 07:50:31AM +0200, Andres Freund wrote:

I still believe that the general approach of chaining vs. a union or CTE
is correct due to the efficiency arguments upthread. My problem is
that, unless I very much misunderstand something, the current
implementation can end up requiring roughly #sets * #input of additional
space for the "sidechannel tuplestore" in some bad cases. That happens
if you group by a couple clauses that each lead to a high number of
groups.

Correct.

Andrew, is that a structure you could live with, or not?

Others, what do you think?

Andrew and I discussed that very structure upthread:

/messages/by-id/20141231085845.GA2148306@tornado.leadboat.com
/messages/by-id/87d26zd9k8.fsf@news-spur.riddles.org.uk
/messages/by-id/20141231210553.GB2159277@tornado.leadboat.com

I still believe the words I wrote in my two messages cited.


#129Andres Freund
andres@anarazel.de
In reply to: Noah Misch (#128)
Re: Final Patch for GROUPING SETS

On 2015-05-14 02:32:04 -0400, Noah Misch wrote:

On Thu, May 14, 2015 at 07:50:31AM +0200, Andres Freund wrote:

Andrew, is that a structure you could live with, or not?

Others, what do you think?

Andrew and I discussed that very structure upthread:

/messages/by-id/87d26zd9k8.fsf@news-spur.riddles.org.uk

I don't really believe that that'd necessarily be true. I think if done
like I sketched, it'll likely end up being simpler than the currently
proposed code. I also don't see why this would make combining hashing
and sorting any more complex than now. If anything the contrary.

/messages/by-id/20141231085845.GA2148306@tornado.leadboat.com
/messages/by-id/20141231210553.GB2159277@tornado.leadboat.com

I still believe the words I wrote in my two messages cited.

I.e. that you think it's a sane approach, despite the criticism?

Greetings,

Andres Freund


#130Noah Misch
noah@leadboat.com
In reply to: Andres Freund (#129)
Re: Final Patch for GROUPING SETS

On Thu, May 14, 2015 at 08:38:07AM +0200, Andres Freund wrote:

On 2015-05-14 02:32:04 -0400, Noah Misch wrote:

On Thu, May 14, 2015 at 07:50:31AM +0200, Andres Freund wrote:

Andrew, is that a structure you could live with, or not?

Others, what do you think?

Andrew and I discussed that very structure upthread:

/messages/by-id/87d26zd9k8.fsf@news-spur.riddles.org.uk

I don't really believe that that'd necessarily be true. I think if done
like I sketched, it'll likely end up being simpler than the currently
proposed code. I also don't see why this would make combining hashing
and sorting any more complex than now. If anything the contrary.

/messages/by-id/20141231085845.GA2148306@tornado.leadboat.com
/messages/by-id/20141231210553.GB2159277@tornado.leadboat.com

I still believe the words I wrote in my two messages cited.

I.e. that you think it's a sane approach, despite the criticism?

Yes. I won't warrant that it proves better, but it looks promising. Covering
hash aggregation might entail a large preparatory refactoring of nodeHash.c,
but beyond development cost I can't malign that.


#131Andres Freund
andres@anarazel.de
In reply to: Noah Misch (#130)
Re: Final Patch for GROUPING SETS

On 2015-05-14 02:51:42 -0400, Noah Misch wrote:

Covering hash aggregation might entail a large preparatory refactoring
of nodeHash.c, but beyond development cost I can't malign that.

You mean execGrouping.c? Afaics nodeHash.c isn't involved, and it
doesn't look very interesting to make it so?

Isn't that just calling BuildTupleHashTable() for each
to-be-hash-aggregated set, and then make agg_fill_hash_table() target
multiple hashtables? This mostly seems to be adding a couple loops and
parameters.
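[Editor's note: a minimal sketch of the control flow Andres describes — one pass over the outer plan filling one hash table per to-be-hash-aggregated set. Python standing in for the C executor code; the function and data names here are hypothetical, not from the patch.]

```python
# Illustrative only: "a couple loops and parameters" — a single input pass
# that targets multiple hash tables, one per grouping set.
def fill_hash_tables(rows, grouping_sets):
    # one dict (standing in for a BuildTupleHashTable() table) per set
    tables = [{} for _ in grouping_sets]
    for row in rows:                      # single pass over the outer plan
        for table, cols in zip(tables, grouping_sets):
            key = tuple(row[c] for c in cols)
            table[key] = table.get(key, 0) + row["sales"]
    return tables

rows = [
    {"brand": "Foo", "size": "L", "sales": 10},
    {"brand": "Foo", "size": "M", "sales": 20},
    {"brand": "Bar", "size": "M", "sales": 15},
    {"brand": "Bar", "size": "L", "sales": 5},
]
tables = fill_hash_tables(rows, [("brand",), ("size",)])
```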

Greetings,

Andres Freund


#132Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Andres Freund (#127)
Re: Final Patch for GROUPING SETS

"Andres" == Andres Freund <andres@anarazel.de> writes:

Andres> My problem is that, unless I very much misunderstand something,
Andres> the current implementation can end up requiring roughly #sets *
Andres> #input of additional space for the "sidechannel tuplestore" in
Andres> some bad cases. That happens if you group by a couple clauses
Andres> that each lead to a high number of groups.

The actual upper bound for the tuplestore size is the size of the
_result_ of the grouping, less one or two rows. You get that in cases
like grouping sets (unique_col, rollup(constant_col)), which seems
sufficiently pathological not to be worth worrying about greatly.

In normal cases, the size of the tuplestore is the size of the result
minus the rows processed directly by the top node. So the only way the
size can be an issue is if the result set size itself is also an issue,
and in that case I don't really think that this is going to be a matter
of significant concern.

Andres> A rough sketch of what I'm thinking of is:

I'm not sure I'd do it quite like that. Rather, have a wrapper function
get_outer_tuple that calls ExecProcNode and, if appropriate, writes the
tuple to a tuplesort before returning it; use that in place of
ExecProcNode in agg_retrieve_direct and when building the hash table.

The problem with trying to turn agg_retrieve_direct inside-out (to make
it look more like agg_retrieve_chained) is that it potentially projects
multiple output groups (not just multiple-result projections) from a
single input tuple, so it has to have some control over whether a tuple
is read or not. (agg_retrieve_chained avoids this problem because it can
loop over the projections, since it's writing to the tuplestore rather
than returning to the caller.)
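[Editor's note: the shape of that get_outer_tuple wrapper can be sketched as follows — hypothetical Python standing in for the C code. Each tuple pulled from the outer node is teed into a side store before being returned, so the caller (agg_retrieve_direct) keeps control over when the next tuple is read.]

```python
# Illustrative sketch: pull one tuple per call, tee it into a side store,
# return it to a caller that may project several groups before pulling more.
class OuterScan:
    def __init__(self, outer_tuples, tee=True):
        self._it = iter(outer_tuples)   # stands in for ExecProcNode
        self.tee = tee
        self.side_store = []            # stands in for the tuplesort/tuplestore

    def get_outer_tuple(self):
        tup = next(self._it, None)
        if tup is not None and self.tee:
            self.side_store.append(tup)  # written before returning
        return tup

scan = OuterScan([("Foo", 10), ("Bar", 5)])
first = scan.get_outer_tuple()   # caller decides when to pull the next one
```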

Andres> I think this is quite doable and seems likely to actually end
Andres> up with easier to understand code. But unfortunately it seems
Andres> to be big enough of a change to make it unlikely to be done in
Andres> sufficient quality until the freeze. I'll nonetheless work a
Andres> couple hours on it tomorrow.

Andres> Andrew, is that a structure you could live with, or not?

Well, I still think the opaque-blobness isn't nice, but I retract some
of my previous concerns; I can see a way to do it that doesn't
significantly impinge on the difficulty of adding hash support.

It sounds like I have more time immediately available than you do. As
discussed on IRC, I'll take the first shot, and we'll see how far I can
get.

--
Andrew (irc:RhodiumToad)


#133Andres Freund
andres@anarazel.de
In reply to: Andrew Gierth (#132)
Re: Final Patch for GROUPING SETS

On 2015-05-14 09:16:10 +0100, Andrew Gierth wrote:

Andres> A rough sketch of what I'm thinking of is:

I'm not sure I'd do it quite like that.

It was meant as a sketch, so there's lots of things it's probably
missing ;)

Rather, have a wrapper function get_outer_tuple that calls
ExecProcNode and, if appropriate, writes the tuple to a tuplesort
before returning it; use that in place of ExecProcNode in
agg_retrieve_direct and when building the hash table.

Hm. I'd considered that, but thought it might end up being more complex
for hashing support. I'm not exactly sure why I thought that tho.


#134Noah Misch
noah@leadboat.com
In reply to: Andres Freund (#131)
Re: Final Patch for GROUPING SETS

On Thu, May 14, 2015 at 08:59:45AM +0200, Andres Freund wrote:

On 2015-05-14 02:51:42 -0400, Noah Misch wrote:

Covering hash aggregation might entail a large preparatory refactoring
of nodeHash.c, but beyond development cost I can't malign that.

You mean execGrouping.c? Afaics nodeHash.c isn't involved, and it
doesn't look very interesting to make it so?

That particular comment of mine was comprehensively wrong.


#135Andres Freund
andres@anarazel.de
In reply to: Andrew Gierth (#132)
1 attachment(s)
Re: Final Patch for GROUPING SETS

On 2015-05-14 09:16:10 +0100, Andrew Gierth wrote:

It sounds like I have more time immediately available than you do. As
discussed on IRC, I'll take the first shot, and we'll see how far I can
get.

Andrew (and I) have been working on this since. Here's the updated and
rebased patch.

It misses a decent commit message and another beautification
readthrough. I've spent the last hour going through the thing again and
all I hit was a disturbing number of newline "errors" and two minor
comment additions.

Greetings,

Andres Freund

Attachments:

0001-Support-GROUPING-SETS.patch (text/x-patch; charset=us-ascii)
From ccb3d3d78595634a236a3bbbdb87d85885a61cb4 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Sat, 16 May 2015 00:04:19 +0200
Subject: [PATCH] Support GROUPING SETS.

Needs a catversion bump because stored rules may change.
---
 contrib/pg_stat_statements/pg_stat_statements.c |   21 +
 doc/src/sgml/func.sgml                          |   70 +-
 doc/src/sgml/queries.sgml                       |  178 +++
 doc/src/sgml/ref/select.sgml                    |   33 +-
 src/backend/catalog/sql_features.txt            |    6 +-
 src/backend/commands/explain.c                  |  160 ++-
 src/backend/executor/execQual.c                 |   63 ++
 src/backend/executor/execUtils.c                |    5 +-
 src/backend/executor/nodeAgg.c                  | 1311 +++++++++++++++++------
 src/backend/lib/Makefile                        |    3 +-
 src/backend/lib/bipartite_match.c               |  161 +++
 src/backend/nodes/copyfuncs.c                   |   38 +
 src/backend/nodes/equalfuncs.c                  |   32 +
 src/backend/nodes/list.c                        |   26 +
 src/backend/nodes/makefuncs.c                   |   15 +
 src/backend/nodes/nodeFuncs.c                   |   51 +
 src/backend/nodes/outfuncs.c                    |   32 +
 src/backend/nodes/readfuncs.c                   |   37 +
 src/backend/optimizer/path/allpaths.c           |    3 +-
 src/backend/optimizer/path/indxpath.c           |    3 +-
 src/backend/optimizer/plan/analyzejoins.c       |   28 +-
 src/backend/optimizer/plan/createplan.c         |    7 +-
 src/backend/optimizer/plan/planagg.c            |    2 +-
 src/backend/optimizer/plan/planner.c            |  829 ++++++++++++--
 src/backend/optimizer/plan/setrefs.c            |   30 +-
 src/backend/optimizer/plan/subselect.c          |   57 +-
 src/backend/optimizer/prep/prepjointree.c       |    1 +
 src/backend/optimizer/prep/prepunion.c          |    7 +-
 src/backend/optimizer/util/clauses.c            |    1 +
 src/backend/optimizer/util/pathnode.c           |    3 +-
 src/backend/optimizer/util/tlist.c              |   22 +
 src/backend/optimizer/util/var.c                |   24 +
 src/backend/parser/analyze.c                    |    5 +-
 src/backend/parser/gram.y                       |  123 ++-
 src/backend/parser/parse_agg.c                  |  723 ++++++++++++-
 src/backend/parser/parse_clause.c               |  502 ++++++++-
 src/backend/parser/parse_expr.c                 |    5 +
 src/backend/parser/parse_target.c               |    4 +
 src/backend/rewrite/rewriteHandler.c            |    2 +-
 src/backend/rewrite/rewriteManip.c              |   23 +
 src/backend/utils/adt/ruleutils.c               |  217 +++-
 src/backend/utils/adt/selfuncs.c                |   13 +-
 src/include/catalog/catversion.h                |    2 +-
 src/include/commands/explain.h                  |    2 +
 src/include/lib/bipartite_match.h               |   44 +
 src/include/nodes/execnodes.h                   |   35 +-
 src/include/nodes/makefuncs.h                   |    2 +
 src/include/nodes/nodes.h                       |    3 +
 src/include/nodes/parsenodes.h                  |   69 ++
 src/include/nodes/pg_list.h                     |    3 +-
 src/include/nodes/plannodes.h                   |    2 +
 src/include/nodes/primnodes.h                   |   35 +
 src/include/nodes/relation.h                    |    3 +
 src/include/optimizer/planmain.h                |    1 +
 src/include/optimizer/tlist.h                   |    3 +
 src/include/parser/kwlist.h                     |    4 +
 src/include/parser/parse_agg.h                  |    5 +
 src/include/parser/parse_clause.h               |    1 +
 src/include/utils/selfuncs.h                    |    2 +-
 src/test/regress/expected/groupingsets.out      |  590 ++++++++++
 src/test/regress/parallel_schedule              |    2 +-
 src/test/regress/serial_schedule                |    1 +
 src/test/regress/sql/groupingsets.sql           |  165 +++
 63 files changed, 5243 insertions(+), 607 deletions(-)
 create mode 100644 src/backend/lib/bipartite_match.c
 create mode 100644 src/include/lib/bipartite_match.h
 create mode 100644 src/test/regress/expected/groupingsets.out
 create mode 100644 src/test/regress/sql/groupingsets.sql

diff --git a/contrib/pg_stat_statements/pg_stat_statements.c b/contrib/pg_stat_statements/pg_stat_statements.c
index 6abe3f0..c4d3ee5 100644
--- a/contrib/pg_stat_statements/pg_stat_statements.c
+++ b/contrib/pg_stat_statements/pg_stat_statements.c
@@ -2267,6 +2267,7 @@ JumbleQuery(pgssJumbleState *jstate, Query *query)
 	JumbleExpr(jstate, (Node *) query->onConflict);
 	JumbleExpr(jstate, (Node *) query->returningList);
 	JumbleExpr(jstate, (Node *) query->groupClause);
+	JumbleExpr(jstate, (Node *) query->groupingSets);
 	JumbleExpr(jstate, query->havingQual);
 	JumbleExpr(jstate, (Node *) query->windowClause);
 	JumbleExpr(jstate, (Node *) query->distinctClause);
@@ -2397,6 +2398,13 @@ JumbleExpr(pgssJumbleState *jstate, Node *node)
 				JumbleExpr(jstate, (Node *) expr->aggfilter);
 			}
 			break;
+		case T_GroupingFunc:
+			{
+				GroupingFunc *grpnode = (GroupingFunc *) node;
+
+				JumbleExpr(jstate, (Node *) grpnode->refs);
+			}
+			break;
 		case T_WindowFunc:
 			{
 				WindowFunc *expr = (WindowFunc *) node;
@@ -2698,6 +2706,12 @@ JumbleExpr(pgssJumbleState *jstate, Node *node)
 				JumbleExpr(jstate, (Node *) lfirst(temp));
 			}
 			break;
+		case T_IntList:
+			foreach(temp, (List *) node)
+			{
+				APP_JUMB(lfirst_int(temp));
+			}
+			break;
 		case T_SortGroupClause:
 			{
 				SortGroupClause *sgc = (SortGroupClause *) node;
@@ -2708,6 +2722,13 @@ JumbleExpr(pgssJumbleState *jstate, Node *node)
 				APP_JUMB(sgc->nulls_first);
 			}
 			break;
+		case T_GroupingSet:
+			{
+				GroupingSet *gsnode = (GroupingSet *) node;
+
+				JumbleExpr(jstate, (Node *) gsnode->content);
+			}
+			break;
 		case T_WindowClause:
 			{
 				WindowClause *wc = (WindowClause *) node;
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index b1e94d7..89a609f 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -12228,7 +12228,9 @@ NULL baz</literallayout>(3 rows)</entry>
    <xref linkend="functions-aggregate-statistics-table">.
    The built-in ordered-set aggregate functions
    are listed in <xref linkend="functions-orderedset-table"> and
-   <xref linkend="functions-hypothetical-table">.
+   <xref linkend="functions-hypothetical-table">.  Grouping operations,
+   which are closely related to aggregate functions, are listed in
+   <xref linkend="functions-grouping-table">.
    The special syntax considerations for aggregate
    functions are explained in <xref linkend="syntax-aggregates">.
    Consult <xref linkend="tutorial-agg"> for additional introductory
@@ -13326,6 +13328,72 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab;
    to the rule specified in the <literal>ORDER BY</> clause.
   </para>
 
+  <table id="functions-grouping-table">
+   <title>Grouping Operations</title>
+
+   <tgroup cols="3">
+    <thead>
+     <row>
+      <entry>Function</entry>
+      <entry>Return Type</entry>
+      <entry>Description</entry>
+     </row>
+    </thead>
+
+    <tbody>
+
+     <row>
+      <entry>
+       <indexterm>
+        <primary>GROUPING</primary>
+       </indexterm>
+       <function>GROUPING(<replaceable class="parameter">args...</replaceable>)</function>
+      </entry>
+      <entry>
+       <type>integer</type>
+      </entry>
+      <entry>
+       Integer bitmask indicating which arguments are not being included in the current
+       grouping set
+      </entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+
+   <para>
+    Grouping operations are used in conjunction with grouping sets (see
+    <xref linkend="queries-grouping-sets">) to distinguish result rows.  The
+    arguments to the <literal>GROUPING</> operation are not actually evaluated,
+       but they must exactly match the expressions given in the <literal>GROUP BY</>
+    clause of the associated query level.  Bits are assigned with the rightmost
+    argument being the least-significant bit; each bit is 0 if the corresponding
+    expression is included in the grouping criteria of the grouping set generating
+    the result row, and 1 if it is not.  For example:
+<screen>
+<prompt>=&gt;</> <userinput>SELECT * FROM items_sold;</>
+ make  | model | sales
+-------+-------+-------
+ Foo   | GT    |  10
+ Foo   | Tour  |  20
+ Bar   | City  |  15
+ Bar   | Sport |  5
+(4 rows)
+
+<prompt>=&gt;</> <userinput>SELECT make, model, GROUPING(make,model), sum(sales) FROM items_sold GROUP BY ROLLUP(make,model);</>
+ make  | model | grouping | sum
+-------+-------+----------+-----
+ Foo   | GT    |        0 | 10
+ Foo   | Tour  |        0 | 20
+ Bar   | City  |        0 | 15
+ Bar   | Sport |        0 | 5
+ Foo   |       |        1 | 30
+ Bar   |       |        1 | 20
+       |       |        3 | 50
+(7 rows)
+</screen>
+   </para>
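[Editor's note: the bitmask semantics documented above can be mimicked in a few lines — an illustrative Python sketch added for clarity, not the patch's C implementation in ExecEvalGroupingFuncExpr.]

```python
# Illustrative: GROUPING(args...) assigns the rightmost argument the least
# significant bit; a bit is set when that argument is NOT part of the
# grouping criteria of the current grouping set.
def grouping(args, current_set):
    result = 0
    for arg in args:                 # leftmost argument first
        result = (result << 1) | (0 if arg in current_set else 1)
    return result
```

With `GROUP BY ROLLUP(make, model)` this reproduces the 0 / 1 / 3 values in the example output above.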
+
  </sect1>
 
  <sect1 id="functions-window">
diff --git a/doc/src/sgml/queries.sgml b/doc/src/sgml/queries.sgml
index 7dbad46..56419c7 100644
--- a/doc/src/sgml/queries.sgml
+++ b/doc/src/sgml/queries.sgml
@@ -1183,6 +1183,184 @@ SELECT product_id, p.name, (sum(s.units) * (p.price - p.cost)) AS profit
    </para>
   </sect2>
 
+  <sect2 id="queries-grouping-sets">
+   <title><literal>GROUPING SETS</>, <literal>CUBE</>, and <literal>ROLLUP</></title>
+
+   <indexterm zone="queries-grouping-sets">
+    <primary>GROUPING SETS</primary>
+   </indexterm>
+   <indexterm zone="queries-grouping-sets">
+    <primary>CUBE</primary>
+   </indexterm>
+   <indexterm zone="queries-grouping-sets">
+    <primary>ROLLUP</primary>
+   </indexterm>
+   <indexterm zone="queries-grouping-sets">
+    <primary>grouping sets</primary>
+   </indexterm>
+
+   <para>
+    More complex grouping operations than those described above are possible
+    using the concept of <firstterm>grouping sets</>.  The data selected by
+    the <literal>FROM</> and <literal>WHERE</> clauses is grouped separately
+    by each specified grouping set, aggregates are computed for each group
+    just as for simple <literal>GROUP BY</> clauses, and then the results
+    are returned.
+    For example:
+<screen>
+<prompt>=&gt;</> <userinput>SELECT * FROM items_sold;</>
+ brand | size | sales
+-------+------+-------
+ Foo   | L    |  10
+ Foo   | M    |  20
+ Bar   | M    |  15
+ Bar   | L    |  5
+(4 rows)
+
+<prompt>=&gt;</> <userinput>SELECT brand, size, sum(sales) FROM items_sold GROUP BY GROUPING SETS ((brand), (size), ());</>
+ brand | size | sum
+-------+------+-----
+ Foo   |      |  30
+ Bar   |      |  20
+       | L    |  15
+       | M    |  35
+       |      |  50
+(5 rows)
+</screen>
+   </para>
+
+   <para>
+    Each sublist of <literal>GROUPING SETS</> may specify zero or more columns
+    or expressions and is interpreted the same way as though it were directly
+    in the <literal>GROUP BY</> clause.  An empty grouping set means that all
+    rows are aggregated down to a single group (which is output even if no
+    input rows were present), as described above for the case of aggregate
+    functions with no <literal>GROUP BY</> clause.
+   </para>
+
+   <para>
+    References to the grouping columns or expressions are replaced
+    by <literal>NULL</> values in result rows for grouping sets in which those
+    columns do not appear.  To distinguish which grouping a particular output
+    row resulted from, see <xref linkend="functions-grouping-table">.
+   </para>
+
+   <para>
+    A shorthand notation is provided for specifying two common types of grouping set.
+    A clause of the form
+<programlisting>
+ROLLUP ( <replaceable>e1</>, <replaceable>e2</>, <replaceable>e3</>, ... )
+</programlisting>
+    represents the given list of expressions and all prefixes of the list including
+    the empty list; thus it is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( <replaceable>e1</>, <replaceable>e2</>, <replaceable>e3</>, ... ),
+    ...
+    ( <replaceable>e1</>, <replaceable>e2</> ),
+    ( <replaceable>e1</> ),
+    ( )
+)
+</programlisting>
+    This is commonly used for analysis over hierarchical data; e.g. total
+    salary by department, division, and company-wide total.
+   </para>
+
+   <para>
+    A clause of the form
+<programlisting>
+CUBE ( <replaceable>e1</>, <replaceable>e2</>, ... )
+</programlisting>
+    represents the given list and all of its possible subsets (i.e. the power
+    set).  Thus
+<programlisting>
+CUBE ( a, b, c )
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( a, b, c ),
+    ( a, b    ),
+    ( a,    c ),
+    ( a       ),
+    (    b, c ),
+    (    b    ),
+    (       c ),
+    (         ),
+)
+</programlisting>
+   </para>
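[Editor's note: the two shorthands documented above — ROLLUP as all prefixes, CUBE as the power set — can be sketched as follows. Illustrative Python, not the parser's expansion code; set order may differ from the listings above.]

```python
# Illustrative expansion of the shorthands: ROLLUP(e1, e2, ...) yields every
# prefix of the list including the empty one; CUBE(e1, e2, ...) yields every
# subset (the power set).
from itertools import combinations

def rollup(cols):
    return [tuple(cols[:i]) for i in range(len(cols), -1, -1)]

def cube(cols):
    return [s for r in range(len(cols), -1, -1)
              for s in combinations(cols, r)]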
+
+   <para>
+    The individual elements of a <literal>CUBE</> or <literal>ROLLUP</>
+    clause may be either individual expressions, or sub-lists of elements in
+    parentheses.  In the latter case, the sub-lists are treated as single
+    units for the purposes of generating the individual grouping sets.
+    For example:
+<programlisting>
+CUBE ( (a,b), (c,d) )
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( a, b, c, d ),
+    ( a, b       ),
+    (       c, d ),
+    (            )
+)
+</programlisting>
+    and
+<programlisting>
+ROLLUP ( a, (b,c), d )
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUPING SETS (
+    ( a, b, c, d ),
+    ( a, b, c    ),
+    ( a          ),
+    (            )
+)
+</programlisting>
+   </para>
+
+   <para>
+    The <literal>CUBE</> and <literal>ROLLUP</> constructs can be used either
+    directly in the <literal>GROUP BY</> clause, or nested inside a
+    <literal>GROUPING SETS</> clause.  If one <literal>GROUPING SETS</> clause
+    is nested inside another, the effect is the same as if all the elements of
+    the inner clause had been written directly in the outer clause.
+   </para>
+
+   <para>
+    If multiple grouping items are specified in a single <literal>GROUP BY</>
+    clause, then the final list of grouping sets is the cross product of the
+    individual items.  For example:
+<programlisting>
+GROUP BY a, CUBE(b,c), GROUPING SETS ((d), (e))
+</programlisting>
+    is equivalent to
+<programlisting>
+GROUP BY GROUPING SETS (
+  (a,b,c,d), (a,b,c,e),
+  (a,b,d),   (a,b,e),
+  (a,c,d),   (a,c,e),
+  (a,d),     (a,e)
+)
+</programlisting>
+   </para>
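[Editor's note: the cross-product rule documented above can be sketched in a few lines — illustrative Python with hypothetical helper names, not the patch's parse_clause.c code.]

```python
# Illustrative: the final grouping-set list for
#   GROUP BY a, CUBE(b,c), GROUPING SETS ((d), (e))
# is the cross product of each item's own list of grouping sets.
from itertools import product

def cross_product(*items):
    # items: each a list of grouping sets (tuples of column names)
    return [sum(combo, ()) for combo in product(*items)]

sets = cross_product(
    [("a",)],                            # plain column a
    [("b", "c"), ("b",), ("c",), ()],    # CUBE(b,c)
    [("d",), ("e",)],                    # GROUPING SETS ((d), (e))
)
```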
+
+  <note>
+   <para>
+    The construct <literal>(a,b)</> is normally recognized in expressions as
+    a <link linkend="sql-syntax-row-constructors">row constructor</link>.
+    Within the <literal>GROUP BY</> clause, this does not apply at the top
+    levels of expressions, and <literal>(a,b)</> is parsed as a list of
+    expressions as described above.  If for some reason you <emphasis>need</>
+    a row constructor in a grouping expression, use <literal>ROW(a,b)</>.
+   </para>
+  </note>
+  </sect2>
+
   <sect2 id="queries-window">
    <title>Window Function Processing</title>
 
diff --git a/doc/src/sgml/ref/select.sgml b/doc/src/sgml/ref/select.sgml
index 2295f63..752d8a1 100644
--- a/doc/src/sgml/ref/select.sgml
+++ b/doc/src/sgml/ref/select.sgml
@@ -37,7 +37,7 @@ SELECT [ ALL | DISTINCT [ ON ( <replaceable class="parameter">expression</replac
     [ * | <replaceable class="parameter">expression</replaceable> [ [ AS ] <replaceable class="parameter">output_name</replaceable> ] [, ...] ]
     [ FROM <replaceable class="parameter">from_item</replaceable> [, ...] ]
     [ WHERE <replaceable class="parameter">condition</replaceable> ]
-    [ GROUP BY <replaceable class="parameter">expression</replaceable> [, ...] ]
+    [ GROUP BY <replaceable class="parameter">grouping_element</replaceable> [, ...] ]
     [ HAVING <replaceable class="parameter">condition</replaceable> [, ...] ]
     [ WINDOW <replaceable class="parameter">window_name</replaceable> AS ( <replaceable class="parameter">window_definition</replaceable> ) [, ...] ]
     [ { UNION | INTERSECT | EXCEPT } [ ALL | DISTINCT ] <replaceable class="parameter">select</replaceable> ]
@@ -60,6 +60,15 @@ SELECT [ ALL | DISTINCT [ ON ( <replaceable class="parameter">expression</replac
                 [ WITH ORDINALITY ] [ [ AS ] <replaceable class="parameter">alias</replaceable> [ ( <replaceable class="parameter">column_alias</replaceable> [, ...] ) ] ]
     <replaceable class="parameter">from_item</replaceable> [ NATURAL ] <replaceable class="parameter">join_type</replaceable> <replaceable class="parameter">from_item</replaceable> [ ON <replaceable class="parameter">join_condition</replaceable> | USING ( <replaceable class="parameter">join_column</replaceable> [, ...] ) ]
 
+<phrase>and <replaceable class="parameter">grouping_element</replaceable> can be one of:</phrase>
+
+    ( )
+    <replaceable class="parameter">expression</replaceable>
+    ( <replaceable class="parameter">expression</replaceable> [, ...] )
+    ROLLUP ( { <replaceable class="parameter">expression</replaceable> | ( <replaceable class="parameter">expression</replaceable> [, ...] ) } [, ...] )
+    CUBE ( { <replaceable class="parameter">expression</replaceable> | ( <replaceable class="parameter">expression</replaceable> [, ...] ) } [, ...] )
+    GROUPING SETS ( <replaceable class="parameter">grouping_element</replaceable> [, ...] )
+
 <phrase>and <replaceable class="parameter">with_query</replaceable> is:</phrase>
 
     <replaceable class="parameter">with_query_name</replaceable> [ ( <replaceable class="parameter">column_name</replaceable> [, ...] ) ] AS ( <replaceable class="parameter">select</replaceable> | <replaceable class="parameter">values</replaceable> | <replaceable class="parameter">insert</replaceable> | <replaceable class="parameter">update</replaceable> | <replaceable class="parameter">delete</replaceable> )
@@ -621,23 +630,35 @@ WHERE <replaceable class="parameter">condition</replaceable>
    <para>
     The optional <literal>GROUP BY</literal> clause has the general form
 <synopsis>
-GROUP BY <replaceable class="parameter">expression</replaceable> [, ...]
+GROUP BY <replaceable class="parameter">grouping_element</replaceable> [, ...]
 </synopsis>
    </para>
 
    <para>
     <literal>GROUP BY</literal> will condense into a single row all
     selected rows that share the same values for the grouped
-    expressions.  <replaceable
-    class="parameter">expression</replaceable> can be an input column
-    name, or the name or ordinal number of an output column
-    (<command>SELECT</command> list item), or an arbitrary
+    expressions.  An <replaceable
+    class="parameter">expression</replaceable> used inside a
+    <replaceable class="parameter">grouping_element</replaceable>
+    can be an input column name, or the name or ordinal number of an
+    output column (<command>SELECT</command> list item), or an arbitrary
     expression formed from input-column values.  In case of ambiguity,
     a <literal>GROUP BY</literal> name will be interpreted as an
     input-column name rather than an output column name.
    </para>
 
    <para>
+    If any of <literal>GROUPING SETS</>, <literal>ROLLUP</> or
+    <literal>CUBE</> are present as grouping elements, then the
+    <literal>GROUP BY</> clause as a whole defines some number of
+    independent <replaceable>grouping sets</>.  The effect of this is
+    equivalent to constructing a <literal>UNION ALL</> between
+    subqueries with the individual grouping sets as their
+    <literal>GROUP BY</> clauses.  For further details on the handling
+    of grouping sets see <xref linkend="queries-grouping-sets">.
+   </para>
+
+   <para>
     Aggregate functions, if any are used, are computed across all rows
     making up each group, producing a separate value for each group.
     (If there are aggregate functions but no <literal>GROUP BY</literal>
diff --git a/src/backend/catalog/sql_features.txt b/src/backend/catalog/sql_features.txt
index cc0f8c4..0207c0f 100644
--- a/src/backend/catalog/sql_features.txt
+++ b/src/backend/catalog/sql_features.txt
@@ -467,9 +467,9 @@ T331	Basic roles			YES
 T332	Extended roles			NO	mostly supported
 T341	Overloading of SQL-invoked functions and procedures			YES	
 T351	Bracketed SQL comments (/*...*/ comments)			YES	
-T431	Extended grouping capabilities			NO	
-T432	Nested and concatenated GROUPING SETS			NO	
-T433	Multiargument GROUPING function			NO	
+T431	Extended grouping capabilities			YES	
+T432	Nested and concatenated GROUPING SETS			YES	
+T433	Multiargument GROUPING function			YES	
 T434	GROUP BY DISTINCT			NO	
 T441	ABS and MOD functions			YES	
 T461	Symmetric BETWEEN predicate			YES	
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index eeb8f19..f221fbc 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -82,6 +82,12 @@ static void show_merge_append_keys(MergeAppendState *mstate, List *ancestors,
 					   ExplainState *es);
 static void show_agg_keys(AggState *astate, List *ancestors,
 			  ExplainState *es);
+static void show_grouping_sets(PlanState *planstate, Agg *agg,
+							   List *ancestors, ExplainState *es);
+static void show_grouping_set_keys(PlanState *planstate,
+								   Agg *aggnode, Sort *sortnode,
+								   List *context, bool useprefix,
+								   List *ancestors, ExplainState *es);
 static void show_group_keys(GroupState *gstate, List *ancestors,
 				ExplainState *es);
 static void show_sort_group_keys(PlanState *planstate, const char *qlabel,
@@ -1830,18 +1836,116 @@ show_agg_keys(AggState *astate, List *ancestors,
 {
 	Agg		   *plan = (Agg *) astate->ss.ps.plan;
 
-	if (plan->numCols > 0)
+	if (plan->numCols > 0 || plan->groupingSets)
 	{
 		/* The key columns refer to the tlist of the child plan */
 		ancestors = lcons(astate, ancestors);
-		show_sort_group_keys(outerPlanState(astate), "Group Key",
-							 plan->numCols, plan->grpColIdx,
-							 NULL, NULL, NULL,
-							 ancestors, es);
+
+		if (plan->groupingSets)
+			show_grouping_sets(outerPlanState(astate), plan, ancestors, es);
+		else
+			show_sort_group_keys(outerPlanState(astate), "Group Key",
+								 plan->numCols, plan->grpColIdx,
+								 NULL, NULL, NULL,
+								 ancestors, es);
+
 		ancestors = list_delete_first(ancestors);
 	}
 }
 
+static void
+show_grouping_sets(PlanState *planstate, Agg *agg,
+				   List *ancestors, ExplainState *es)
+{
+	List	   *context;
+	bool		useprefix;
+	ListCell   *lc;
+
+	/* Set up deparsing context */
+	context = set_deparse_context_planstate(es->deparse_cxt,
+											(Node *) planstate,
+											ancestors);
+	useprefix = (list_length(es->rtable) > 1 || es->verbose);
+
+	ExplainOpenGroup("Grouping Sets", "Grouping Sets", false, es);
+
+	show_grouping_set_keys(planstate, agg, NULL,
+						   context, useprefix, ancestors, es);
+
+	foreach(lc, agg->chain)
+	{
+		Agg *aggnode = lfirst(lc);
+		Sort *sortnode = (Sort *) aggnode->plan.lefttree;
+
+		show_grouping_set_keys(planstate, aggnode, sortnode,
+							   context, useprefix, ancestors, es);
+	}
+
+	ExplainCloseGroup("Grouping Sets", "Grouping Sets", false, es);
+}
+
+static void
+show_grouping_set_keys(PlanState *planstate,
+					   Agg *aggnode, Sort *sortnode,
+					   List *context, bool useprefix,
+					   List *ancestors, ExplainState *es)
+{
+	Plan	   *plan = planstate->plan;
+	char	   *exprstr;
+	ListCell   *lc;
+	List	   *gsets = aggnode->groupingSets;
+	AttrNumber *keycols = aggnode->grpColIdx;
+
+	ExplainOpenGroup("Grouping Set", NULL, true, es);
+
+	if (sortnode)
+	{
+		show_sort_group_keys(planstate, "Sort Key",
+							 sortnode->numCols, sortnode->sortColIdx,
+							 sortnode->sortOperators, sortnode->collations,
+							 sortnode->nullsFirst,
+							 ancestors, es);
+		if (es->format == EXPLAIN_FORMAT_TEXT)
+			es->indent++;
+	}
+
+	ExplainOpenGroup("Group Keys", "Group Keys", false, es);
+
+	foreach(lc, gsets)
+	{
+		List	   *result = NIL;
+		ListCell   *lc2;
+
+		foreach(lc2, (List *) lfirst(lc))
+		{
+			Index		i = lfirst_int(lc2);
+			AttrNumber	keyresno = keycols[i];
+			TargetEntry *target = get_tle_by_resno(plan->targetlist,
+												   keyresno);
+
+			if (!target)
+				elog(ERROR, "no tlist entry for key %d", keyresno);
+			/* Deparse the expression, showing any top-level cast */
+			exprstr = deparse_expression((Node *) target->expr, context,
+										 useprefix, true);
+
+			result = lappend(result, exprstr);
+		}
+
+		if (!result && es->format == EXPLAIN_FORMAT_TEXT)
+			ExplainPropertyText("Group Key", "()", es);
+		else
+			ExplainPropertyListNested("Group Key", result, es);
+	}
+
+	ExplainCloseGroup("Group Keys", "Group Keys", false, es);
+
+	if (sortnode && es->format == EXPLAIN_FORMAT_TEXT)
+		es->indent--;
+
+	ExplainCloseGroup("Grouping Set", NULL, true, es);
+}
+
 /*
  * Show the grouping keys for a Group node.
  */
@@ -2591,6 +2695,52 @@ ExplainPropertyList(const char *qlabel, List *data, ExplainState *es)
 }
 
 /*
+ * Explain a property that takes the form of a list of unlabeled items within
+ * another list.  "data" is a list of C strings.
+ */
+void
+ExplainPropertyListNested(const char *qlabel, List *data, ExplainState *es)
+{
+	ListCell   *lc;
+	bool		first = true;
+
+	switch (es->format)
+	{
+		case EXPLAIN_FORMAT_TEXT:
+		case EXPLAIN_FORMAT_XML:
+			ExplainPropertyList(qlabel, data, es);
+			return;
+
+		case EXPLAIN_FORMAT_JSON:
+			ExplainJSONLineEnding(es);
+			appendStringInfoSpaces(es->str, es->indent * 2);
+			appendStringInfoChar(es->str, '[');
+			foreach(lc, data)
+			{
+				if (!first)
+					appendStringInfoString(es->str, ", ");
+				escape_json(es->str, (const char *) lfirst(lc));
+				first = false;
+			}
+			appendStringInfoChar(es->str, ']');
+			break;
+
+		case EXPLAIN_FORMAT_YAML:
+			ExplainYAMLLineStarting(es);
+			appendStringInfoString(es->str, "- [");
+			foreach(lc, data)
+			{
+				if (!first)
+					appendStringInfoString(es->str, ", ");
+				escape_yaml(es->str, (const char *) lfirst(lc));
+				first = false;
+			}
+			appendStringInfoChar(es->str, ']');
+			break;
+	}
+}
+
+/*
  * Explain a simple property.
  *
  * If "numeric" is true, the value is a number (or other value that
diff --git a/src/backend/executor/execQual.c b/src/backend/executor/execQual.c
index e599411..d414e20 100644
--- a/src/backend/executor/execQual.c
+++ b/src/backend/executor/execQual.c
@@ -181,6 +181,9 @@ static Datum ExecEvalArrayCoerceExpr(ArrayCoerceExprState *astate,
 						bool *isNull, ExprDoneCond *isDone);
 static Datum ExecEvalCurrentOfExpr(ExprState *exprstate, ExprContext *econtext,
 					  bool *isNull, ExprDoneCond *isDone);
+static Datum ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
+						ExprContext *econtext,
+						bool *isNull, ExprDoneCond *isDone);
 
 
 /* ----------------------------------------------------------------
@@ -3016,6 +3019,44 @@ ExecEvalCaseTestExpr(ExprState *exprstate,
 	return econtext->caseValue_datum;
 }
 
+/*
+ * ExecEvalGroupingFuncExpr
+ *
+ * Return a bitmask with a bit for each (unevaluated) argument expression
+ * (rightmost arg is least significant bit).
+ *
+ * A bit is set if the corresponding expression is NOT part of the set of
+ * grouping expressions in the current grouping set.
+ */
+static Datum
+ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
+						 ExprContext *econtext,
+						 bool *isNull,
+						 ExprDoneCond *isDone)
+{
+	int result = 0;
+	int attnum = 0;
+	Bitmapset *grouped_cols = gstate->aggstate->grouped_cols;
+	ListCell *lc;
+
+	if (isDone)
+		*isDone = ExprSingleResult;
+
+	*isNull = false;
+
+	foreach(lc, (gstate->clauses))
+	{
+		attnum = lfirst_int(lc);
+
+		result = result << 1;
+
+		if (!bms_is_member(attnum, grouped_cols))
+			result = result | 1;
+	}
+
+	return (Datum) result;
+}
+
 /* ----------------------------------------------------------------
  *		ExecEvalArray - ARRAY[] expressions
  * ----------------------------------------------------------------
@@ -4482,6 +4523,28 @@ ExecInitExpr(Expr *node, PlanState *parent)
 				state = (ExprState *) astate;
 			}
 			break;
+		case T_GroupingFunc:
+			{
+				GroupingFunc *grp_node = (GroupingFunc *) node;
+				GroupingFuncExprState *grp_state = makeNode(GroupingFuncExprState);
+				Agg		   *agg = NULL;
+
+				if (!parent || !IsA(parent, AggState) || !IsA(parent->plan, Agg))
+					elog(ERROR, "parent of GROUPING is not Agg node");
+
+				grp_state->aggstate = (AggState *) parent;
+
+				agg = (Agg *) (parent->plan);
+
+				if (agg->groupingSets)
+					grp_state->clauses = grp_node->cols;
+				else
+					grp_state->clauses = NIL;
+
+				state = (ExprState *) grp_state;
+				state->evalfunc = (ExprStateEvalFunc) ExecEvalGroupingFuncExpr;
+			}
+			break;
 		case T_WindowFunc:
 			{
 				WindowFunc *wfunc = (WindowFunc *) node;
diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c
index 0da8e53..3963408 100644
--- a/src/backend/executor/execUtils.c
+++ b/src/backend/executor/execUtils.c
@@ -642,9 +642,10 @@ get_last_attnums(Node *node, ProjectionInfo *projInfo)
 	/*
 	 * Don't examine the arguments or filters of Aggrefs or WindowFuncs,
 	 * because those do not represent expressions to be evaluated within the
-	 * overall targetlist's econtext.
+	 * overall targetlist's econtext.  GroupingFunc arguments are never
+	 * evaluated at all.
 	 */
-	if (IsA(node, Aggref))
+	if (IsA(node, Aggref) || IsA(node, GroupingFunc))
 		return false;
 	if (IsA(node, WindowFunc))
 		return false;
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index fcb6117..3e4c57a 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -45,15 +45,19 @@
  *	  needed to allow resolution of a polymorphic aggregate's result type.
  *
  *	  We compute aggregate input expressions and run the transition functions
- *	  in a temporary econtext (aggstate->tmpcontext).  This is reset at
- *	  least once per input tuple, so when the transvalue datatype is
+ *	  in a temporary econtext (aggstate->tmpcontext).  This is reset at least
+ *	  once per input tuple, so when the transvalue datatype is
  *	  pass-by-reference, we have to be careful to copy it into a longer-lived
- *	  memory context, and free the prior value to avoid memory leakage.
- *	  We store transvalues in the memory context aggstate->aggcontext,
- *	  which is also used for the hashtable structures in AGG_HASHED mode.
- *	  The node's regular econtext (aggstate->ss.ps.ps_ExprContext)
- *	  is used to run finalize functions and compute the output tuple;
- *	  this context can be reset once per output tuple.
+ *	  memory context, and free the prior value to avoid memory leakage.  We
+ *	  store transvalues in another set of econtexts, aggstate->aggcontexts (one
+ *	  per grouping set, see below), which are also used for the hashtable
+ *	  structures in AGG_HASHED mode.  These econtexts are rescanned, not just
+ *	  reset, at group boundaries so that aggregate transition functions can
+ *	  register shutdown callbacks via AggRegisterCallback.
+ *
+ *	  The node's regular econtext (aggstate->ss.ps.ps_ExprContext) is used to
+ *	  run finalize functions and compute the output tuple; this context can be
+ *	  reset once per output tuple.
  *
  *	  The executor's AggState node is passed as the fmgr "context" value in
  *	  all transfunc and finalfunc calls.  It is not recommended that the
@@ -84,6 +88,36 @@
  *	  need some fallback logic to use this, since there's no Aggref node
  *	  for a window function.)
  *
+ *	  Grouping sets:
+ *
+ *	  A list of grouping sets which is structurally equivalent to a ROLLUP
+ *	  clause (e.g. (a,b,c), (a,b), (a)) can be processed in a single pass over
+ *	  ordered data.  We do this by keeping a separate set of transition values
+ *	  for each grouping set being concurrently processed; for each input tuple
+ *	  we update them all, and on group boundaries we reset those states
+ *	  (starting at the front of the list) whose grouping values have changed
+ *	  (the list of grouping sets is ordered from most specific to least
+ *	  specific).
+ *
+ *	  When more complex grouping sets are used, we break them down into
+ *	  "phases", each of which has a different sort order.  During each
+ *	  phase but the last, the input tuples are additionally stored in a
+ *	  tuplesort which is keyed to the next phase's sort order; during each
+ *	  phase but the first, the input tuples are drawn from the previously
+ *	  sorted data.  (The sorting of the data for the first phase is handled by
+ *	  the planner, as it might be satisfied by underlying nodes.)
+ *
+ *	  From the perspective of aggregate transition and final functions, the
+ *	  only issue regarding grouping sets is this: a single call site (flinfo)
+ *	  of an aggregate function may be used for updating several different
+ *	  transition values in turn. So the function must not cache in the flinfo
+ *	  anything which logically belongs as part of the transition value (most
+ *	  importantly, the memory context in which the transition value exists).
+ *	  The support API functions (AggCheckCallContext, AggRegisterCallback) are
+ *	  sensitive to the grouping set for which the aggregate function is
+ *	  currently being called.
+ *
+ *	  TODO: AGG_HASHED doesn't support multiple grouping sets yet.
  *
  * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
@@ -241,9 +275,11 @@ typedef struct AggStatePerAggData
 	 * then at completion of the input tuple group, we scan the sorted values,
 	 * eliminate duplicates if needed, and run the transition function on the
 	 * rest.
+	 *
+	 * We need a separate tuplesort for each grouping set.
 	 */
 
-	Tuplesortstate *sortstate;	/* sort object, if DISTINCT or ORDER BY */
+	Tuplesortstate **sortstates;	/* sort objects, if DISTINCT or ORDER BY */
 
 	/*
 	 * This field is a pre-initialized FunctionCallInfo struct used for
@@ -287,6 +323,27 @@ typedef struct AggStatePerGroupData
 } AggStatePerGroupData;
 
 /*
+ * AggStatePerPhaseData - per-grouping-set-phase state
+ *
+ * Grouping sets are divided into "phases", where a single phase can be
+ * processed in one pass over the input. If there is more than one phase, then
+ * at the end of input from the current phase, the state is reset and another
+ * pass is made over the data, which has been re-sorted in the meantime.
+ *
+ * Accordingly, each phase specifies a list of grouping sets and group clause
+ * information, plus each phase after the first also has a sort order.
+ */
+typedef struct AggStatePerPhaseData
+{
+	int			numsets;		/* number of grouping sets (or 0) */
+	int		   *gset_lengths;	/* lengths of grouping sets */
+	Bitmapset **grouped_cols;   /* column groupings for rollup */
+	FmgrInfo   *eqfunctions;	/* per-grouping-field equality fns */
+	Agg		   *aggnode;		/* Agg node for phase data */
+	Sort	   *sortnode;		/* Sort node for input ordering for phase */
+} AggStatePerPhaseData;
+
+/*
  * To implement hashed aggregation, we need a hashtable that stores a
  * representative tuple and an array of AggStatePerGroup structs for each
  * distinct set of GROUP BY column values.  We compute the hash key from
@@ -302,9 +359,12 @@ typedef struct AggHashEntryData
 }	AggHashEntryData;
 
 
+static void initialize_phase(AggState *aggstate, int newphase);
+static TupleTableSlot *fetch_input_tuple(AggState *aggstate);
 static void initialize_aggregates(AggState *aggstate,
 					  AggStatePerAgg peragg,
-					  AggStatePerGroup pergroup);
+					  AggStatePerGroup pergroup,
+					  int numReset);
 static void advance_transition_function(AggState *aggstate,
 							AggStatePerAgg peraggstate,
 							AggStatePerGroup pergroupstate);
@@ -319,6 +379,14 @@ static void finalize_aggregate(AggState *aggstate,
 				   AggStatePerAgg peraggstate,
 				   AggStatePerGroup pergroupstate,
 				   Datum *resultVal, bool *resultIsNull);
+static void prepare_projection_slot(AggState *aggstate,
+									TupleTableSlot *slot,
+									int currentSet);
+static void finalize_aggregates(AggState *aggstate,
+								AggStatePerAgg peragg,
+								AggStatePerGroup pergroup,
+								int currentSet);
+static TupleTableSlot *project_aggregates(AggState *aggstate);
 static Bitmapset *find_unaggregated_cols(AggState *aggstate);
 static bool find_unaggregated_cols_walker(Node *node, Bitmapset **colnos);
 static void build_hash_table(AggState *aggstate);
@@ -331,46 +399,135 @@ static Datum GetAggInitVal(Datum textInitVal, Oid transtype);
 
 
 /*
- * Initialize all aggregates for a new group of input values.
- *
- * When called, CurrentMemoryContext should be the per-query context.
+ * Switch to phase "newphase", which must either be 0 (to reset) or
+ * current_phase + 1. Juggle the tuplesorts accordingly.
  */
 static void
-initialize_aggregates(AggState *aggstate,
-					  AggStatePerAgg peragg,
-					  AggStatePerGroup pergroup)
+initialize_phase(AggState *aggstate, int newphase)
 {
-	int			aggno;
+	Assert(newphase == 0 || newphase == aggstate->current_phase + 1);
 
-	for (aggno = 0; aggno < aggstate->numaggs; aggno++)
+	/*
+	 * Whatever the previous state, we're now done with whatever input
+	 * tuplesort was in use.
+	 */
+	if (aggstate->sort_in)
 	{
-		AggStatePerAgg peraggstate = &peragg[aggno];
-		AggStatePerGroup pergroupstate = &pergroup[aggno];
+		tuplesort_end(aggstate->sort_in);
+		aggstate->sort_in = NULL;
+	}
 
+	if (newphase == 0)
+	{
 		/*
-		 * Start a fresh sort operation for each DISTINCT/ORDER BY aggregate.
+		 * Discard any existing output tuplesort.
 		 */
-		if (peraggstate->numSortCols > 0)
+		if (aggstate->sort_out)
 		{
-			/*
-			 * In case of rescan, maybe there could be an uncompleted sort
-			 * operation?  Clean it up if so.
-			 */
-			if (peraggstate->sortstate)
-				tuplesort_end(peraggstate->sortstate);
+			tuplesort_end(aggstate->sort_out);
+			aggstate->sort_out = NULL;
+		}
+	}
+	else
+	{
+		/*
+		 * The old output tuplesort becomes the new input one, and this is the
+		 * right time to actually sort it.
+		 */
+		aggstate->sort_in = aggstate->sort_out;
+		aggstate->sort_out = NULL;
+		Assert(aggstate->sort_in);
+		tuplesort_performsort(aggstate->sort_in);
+	}
 
-			/*
-			 * We use a plain Datum sorter when there's a single input column;
-			 * otherwise sort the full tuple.  (See comments for
-			 * process_ordered_aggregate_single.)
-			 */
-			peraggstate->sortstate =
-				(peraggstate->numInputs == 1) ?
+	/*
+	 * If this isn't the last phase, we need to sort appropriately for the next
+	 * phase in sequence.
+	 */
+	if (newphase < aggstate->numphases - 1)
+	{
+		Sort	   *sortnode = aggstate->phases[newphase+1].sortnode;
+		PlanState  *outerNode = outerPlanState(aggstate);
+		TupleDesc	tupDesc = ExecGetResultType(outerNode);
+
+		aggstate->sort_out = tuplesort_begin_heap(tupDesc,
+												  sortnode->numCols,
+												  sortnode->sortColIdx,
+												  sortnode->sortOperators,
+												  sortnode->collations,
+												  sortnode->nullsFirst,
+												  work_mem,
+												  false);
+	}
+
+	aggstate->current_phase = newphase;
+	aggstate->phase = &aggstate->phases[newphase];
+}
+
+/*
+ * Fetch a tuple from either the outer plan (for phase 0) or from the sorter
+ * populated by the previous phase.  Copy it to the sorter for the next phase
+ * if any.
+ */
+static TupleTableSlot *
+fetch_input_tuple(AggState *aggstate)
+{
+	TupleTableSlot *slot;
+
+	if (aggstate->sort_in)
+	{
+		if (!tuplesort_gettupleslot(aggstate->sort_in, true, aggstate->sort_slot))
+			return NULL;
+		slot = aggstate->sort_slot;
+	}
+	else
+		slot = ExecProcNode(outerPlanState(aggstate));
+
+	if (!TupIsNull(slot) && aggstate->sort_out)
+		tuplesort_puttupleslot(aggstate->sort_out, slot);
+
+	return slot;
+}
+
+/*
+ * (Re)Initialize an individual aggregate.
+ *
+ * This function handles only one grouping set (already set in
+ * aggstate->current_set).
+ *
+ * When called, CurrentMemoryContext should be the per-query context.
+ */
+static void
+initialize_aggregate(AggState *aggstate, AggStatePerAgg peraggstate,
+					 AggStatePerGroup pergroupstate)
+{
+	/*
+	 * Start a fresh sort operation for each DISTINCT/ORDER BY aggregate.
+	 */
+	if (peraggstate->numSortCols > 0)
+	{
+		/*
+		 * In case of rescan, maybe there could be an uncompleted sort
+		 * operation?  Clean it up if so.
+		 */
+		if (peraggstate->sortstates[aggstate->current_set])
+			tuplesort_end(peraggstate->sortstates[aggstate->current_set]);
+
+		/*
+		 * We use a plain Datum sorter when there's a single input column;
+		 * otherwise sort the full tuple.  (See comments for
+		 * process_ordered_aggregate_single.)
+		 */
+		if (peraggstate->numInputs == 1)
+			peraggstate->sortstates[aggstate->current_set] =
 				tuplesort_begin_datum(peraggstate->evaldesc->attrs[0]->atttypid,
 									  peraggstate->sortOperators[0],
 									  peraggstate->sortCollations[0],
 									  peraggstate->sortNullsFirst[0],
-									  work_mem, false) :
+									  work_mem, false);
+		else
+			peraggstate->sortstates[aggstate->current_set] =
 				tuplesort_begin_heap(peraggstate->evaldesc,
 									 peraggstate->numSortCols,
 									 peraggstate->sortColIdx,
@@ -378,41 +535,83 @@ initialize_aggregates(AggState *aggstate,
 									 peraggstate->sortCollations,
 									 peraggstate->sortNullsFirst,
 									 work_mem, false);
-		}
+	}
 
-		/*
-		 * (Re)set transValue to the initial value.
-		 *
-		 * Note that when the initial value is pass-by-ref, we must copy it
-		 * (into the aggcontext) since we will pfree the transValue later.
-		 */
-		if (peraggstate->initValueIsNull)
-			pergroupstate->transValue = peraggstate->initValue;
-		else
+	/*
+	 * (Re)set transValue to the initial value.
+	 *
+	 * Note that when the initial value is pass-by-ref, we must copy
+	 * it (into the aggcontext) since we will pfree the transValue
+	 * later.
+	 */
+	if (peraggstate->initValueIsNull)
+		pergroupstate->transValue = peraggstate->initValue;
+	else
+	{
+		MemoryContext oldContext;
+
+		oldContext = MemoryContextSwitchTo(
+			aggstate->aggcontexts[aggstate->current_set]->ecxt_per_tuple_memory);
+		pergroupstate->transValue = datumCopy(peraggstate->initValue,
+											  peraggstate->transtypeByVal,
+											  peraggstate->transtypeLen);
+		MemoryContextSwitchTo(oldContext);
+	}
+	pergroupstate->transValueIsNull = peraggstate->initValueIsNull;
+
+	/*
+	 * If the initial value for the transition state doesn't exist in
+	 * the pg_aggregate table then we will let the first non-NULL
+	 * value returned from the outer procNode become the initial
+	 * value. (This is useful for aggregates like max() and min().)
+	 * The noTransValue flag signals that we still need to do this.
+	 */
+	pergroupstate->noTransValue = peraggstate->initValueIsNull;
+}
+
+/*
+ * Initialize all aggregates for a new group of input values.
+ *
+ * If there are multiple grouping sets, we initialize only the first numReset
+ * of them (the grouping sets are ordered so that the most specific one, which
+ * is reset most often, is first). As a convenience, if numReset is < 1, we
+ * reinitialize all sets.
+ *
+ * When called, CurrentMemoryContext should be the per-query context.
+ */
+static void
+initialize_aggregates(AggState *aggstate,
+					  AggStatePerAgg peragg,
+					  AggStatePerGroup pergroup,
+					  int numReset)
+{
+	int			aggno;
+	int         numGroupingSets = Max(aggstate->phase->numsets, 1);
+	int         setno = 0;
+
+	if (numReset < 1)
+		numReset = numGroupingSets;
+
+	for (aggno = 0; aggno < aggstate->numaggs; aggno++)
+	{
+		AggStatePerAgg peraggstate = &peragg[aggno];
+
+		for (setno = 0; setno < numReset; setno++)
 		{
-			MemoryContext oldContext;
+			AggStatePerGroup pergroupstate;
 
-			oldContext = MemoryContextSwitchTo(aggstate->aggcontext);
-			pergroupstate->transValue = datumCopy(peraggstate->initValue,
-												  peraggstate->transtypeByVal,
-												  peraggstate->transtypeLen);
-			MemoryContextSwitchTo(oldContext);
-		}
-		pergroupstate->transValueIsNull = peraggstate->initValueIsNull;
+			pergroupstate = &pergroup[aggno + (setno * (aggstate->numaggs))];
 
-		/*
-		 * If the initial value for the transition state doesn't exist in the
-		 * pg_aggregate table then we will let the first non-NULL value
-		 * returned from the outer procNode become the initial value. (This is
-		 * useful for aggregates like max() and min().) The noTransValue flag
-		 * signals that we still need to do this.
-		 */
-		pergroupstate->noTransValue = peraggstate->initValueIsNull;
+			aggstate->current_set = setno;
+
+			initialize_aggregate(aggstate, peraggstate, pergroupstate);
+		}
 	}
 }
 
 /*
- * Given new input value(s), advance the transition function of an aggregate.
+ * Given new input value(s), advance the transition function of one aggregate
+ * within one grouping set only (already set in aggstate->current_set)
  *
  * The new values (and null flags) have been preloaded into argument positions
  * 1 and up in peraggstate->transfn_fcinfo, so that we needn't copy them again
@@ -455,7 +654,7 @@ advance_transition_function(AggState *aggstate,
 			 * We must copy the datum into aggcontext if it is pass-by-ref. We
 			 * do not need to pfree the old transValue, since it's NULL.
 			 */
-			oldContext = MemoryContextSwitchTo(aggstate->aggcontext);
+			oldContext = MemoryContextSwitchTo(aggstate->aggcontexts[aggstate->current_set]->ecxt_per_tuple_memory);
 			pergroupstate->transValue = datumCopy(fcinfo->arg[1],
 												  peraggstate->transtypeByVal,
 												  peraggstate->transtypeLen);
@@ -503,7 +702,7 @@ advance_transition_function(AggState *aggstate,
 	{
 		if (!fcinfo->isnull)
 		{
-			MemoryContextSwitchTo(aggstate->aggcontext);
+			MemoryContextSwitchTo(aggstate->aggcontexts[aggstate->current_set]->ecxt_per_tuple_memory);
 			newVal = datumCopy(newVal,
 							   peraggstate->transtypeByVal,
 							   peraggstate->transtypeLen);
@@ -530,11 +729,13 @@ static void
 advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 {
 	int			aggno;
+	int         setno = 0;
+	int         numGroupingSets = Max(aggstate->phase->numsets, 1);
+	int         numAggs = aggstate->numaggs;
 
-	for (aggno = 0; aggno < aggstate->numaggs; aggno++)
+	for (aggno = 0; aggno < numAggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &aggstate->peragg[aggno];
-		AggStatePerGroup pergroupstate = &pergroup[aggno];
 		ExprState  *filter = peraggstate->aggrefstate->aggfilter;
 		int			numTransInputs = peraggstate->numTransInputs;
 		int			i;
@@ -578,13 +779,16 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 					continue;
 			}
 
-			/* OK, put the tuple into the tuplesort object */
-			if (peraggstate->numInputs == 1)
-				tuplesort_putdatum(peraggstate->sortstate,
-								   slot->tts_values[0],
-								   slot->tts_isnull[0]);
-			else
-				tuplesort_puttupleslot(peraggstate->sortstate, slot);
+			for (setno = 0; setno < numGroupingSets; setno++)
+			{
+				/* OK, put the tuple into the tuplesort object */
+				if (peraggstate->numInputs == 1)
+					tuplesort_putdatum(peraggstate->sortstates[setno],
+									   slot->tts_values[0],
+									   slot->tts_isnull[0]);
+				else
+					tuplesort_puttupleslot(peraggstate->sortstates[setno], slot);
+			}
 		}
 		else
 		{
@@ -600,7 +804,14 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 				fcinfo->argnull[i + 1] = slot->tts_isnull[i];
 			}
 
-			advance_transition_function(aggstate, peraggstate, pergroupstate);
+			for (setno = 0; setno < numGroupingSets; setno++)
+			{
+				AggStatePerGroup pergroupstate = &pergroup[aggno + (setno * numAggs)];
+
+				aggstate->current_set = setno;
+
+				advance_transition_function(aggstate, peraggstate, pergroupstate);
+			}
 		}
 	}
 }
@@ -623,6 +834,9 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
  * is around 300% faster.  (The speedup for by-reference types is less
  * but still noticeable.)
  *
+ * This function handles only one grouping set (already set in
+ * aggstate->current_set).
+ *
  * When called, CurrentMemoryContext should be the per-query context.
  */
 static void
@@ -642,7 +856,7 @@ process_ordered_aggregate_single(AggState *aggstate,
 
 	Assert(peraggstate->numDistinctCols < 2);
 
-	tuplesort_performsort(peraggstate->sortstate);
+	tuplesort_performsort(peraggstate->sortstates[aggstate->current_set]);
 
 	/* Load the column into argument 1 (arg 0 will be transition value) */
 	newVal = fcinfo->arg + 1;
@@ -654,8 +868,8 @@ process_ordered_aggregate_single(AggState *aggstate,
 	 * pfree them when they are no longer needed.
 	 */
 
-	while (tuplesort_getdatum(peraggstate->sortstate, true,
-							  newVal, isNull))
+	while (tuplesort_getdatum(peraggstate->sortstates[aggstate->current_set],
+							  true, newVal, isNull))
 	{
 		/*
 		 * Clear and select the working context for evaluation of the equality
@@ -698,8 +912,8 @@ process_ordered_aggregate_single(AggState *aggstate,
 	if (!oldIsNull && !peraggstate->inputtypeByVal)
 		pfree(DatumGetPointer(oldVal));
 
-	tuplesort_end(peraggstate->sortstate);
-	peraggstate->sortstate = NULL;
+	tuplesort_end(peraggstate->sortstates[aggstate->current_set]);
+	peraggstate->sortstates[aggstate->current_set] = NULL;
 }
 
 /*
@@ -709,6 +923,9 @@ process_ordered_aggregate_single(AggState *aggstate,
  * sort, read out the values in sorted order, and run the transition
  * function on each value (applying DISTINCT if appropriate).
  *
+ * This function handles only one grouping set (already set in
+ * aggstate->current_set).
+ *
  * When called, CurrentMemoryContext should be the per-query context.
  */
 static void
@@ -725,13 +942,14 @@ process_ordered_aggregate_multi(AggState *aggstate,
 	bool		haveOldValue = false;
 	int			i;
 
-	tuplesort_performsort(peraggstate->sortstate);
+	tuplesort_performsort(peraggstate->sortstates[aggstate->current_set]);
 
 	ExecClearTuple(slot1);
 	if (slot2)
 		ExecClearTuple(slot2);
 
-	while (tuplesort_gettupleslot(peraggstate->sortstate, true, slot1))
+	while (tuplesort_gettupleslot(peraggstate->sortstates[aggstate->current_set],
+								  true, slot1))
 	{
 		/*
 		 * Extract the first numTransInputs columns as datums to pass to the
@@ -779,13 +997,16 @@ process_ordered_aggregate_multi(AggState *aggstate,
 	if (slot2)
 		ExecClearTuple(slot2);
 
-	tuplesort_end(peraggstate->sortstate);
-	peraggstate->sortstate = NULL;
+	tuplesort_end(peraggstate->sortstates[aggstate->current_set]);
+	peraggstate->sortstates[aggstate->current_set] = NULL;
 }
 
 /*
  * Compute the final value of one aggregate.
  *
+ * This function handles only one grouping set (already set in
+ * aggstate->current_set).
+ *
  * The finalfunction will be run, and the result delivered, in the
  * output-tuple context; caller's CurrentMemoryContext does not matter.
  */
@@ -832,7 +1053,7 @@ finalize_aggregate(AggState *aggstate,
 		/* set up aggstate->curperagg for AggGetAggref() */
 		aggstate->curperagg = peraggstate;
 
-		InitFunctionCallInfoData(fcinfo, &(peraggstate->finalfn),
+		InitFunctionCallInfoData(fcinfo, &peraggstate->finalfn,
 								 numFinalArgs,
 								 peraggstate->aggCollation,
 								 (void *) aggstate, NULL);
@@ -882,6 +1103,154 @@ finalize_aggregate(AggState *aggstate,
 	MemoryContextSwitchTo(oldContext);
 }
 
+
+/*
+ * Prepare to finalize and project based on the specified representative tuple
+ * slot and grouping set.
+ *
+ * In the specified tuple slot, force to null all attributes that should be
+ * read as null in the context of the current grouping set.  Also stash the
+ * current group bitmap where GroupingExpr can get at it.
+ *
+ * This relies on three conditions:
+ *
+ * 1) Nothing is ever going to try and extract the whole tuple from this slot,
+ * only reference it in evaluations, which will only access individual
+ * attributes.
+ *
+ * 2) No system columns are going to need to be nulled. (If a system column is
+ * referenced in a group clause, it is actually projected in the outer plan
+ * tlist.)
+ *
+ * 3) Within a given phase, we never need to recover the value of an attribute
+ * once it has been set to null.
+ *
+ * Poking into the slot this way is a bit ugly, but the consensus is that the
+ * alternative was worse.
+ */
+static void
+prepare_projection_slot(AggState *aggstate, TupleTableSlot *slot, int currentSet)
+{
+	if (aggstate->phase->grouped_cols)
+	{
+		Bitmapset *grouped_cols = aggstate->phase->grouped_cols[currentSet];
+
+		aggstate->grouped_cols = grouped_cols;
+
+		if (slot->tts_isempty)
+		{
+			/*
+			 * Force all values to be NULL if working on an empty input tuple
+			 * (i.e. an empty grouping set for which no input rows were
+			 * supplied).
+			 */
+			ExecStoreAllNullTuple(slot);
+		}
+		else if (aggstate->all_grouped_cols)
+		{
+			ListCell   *lc;
+
+			/* all_grouped_cols is arranged in desc order */
+			slot_getsomeattrs(slot, linitial_int(aggstate->all_grouped_cols));
+
+			foreach(lc, aggstate->all_grouped_cols)
+			{
+				int attnum = lfirst_int(lc);
+
+				if (!bms_is_member(attnum, grouped_cols))
+					slot->tts_isnull[attnum - 1] = true;
+			}
+		}
+	}
+}
+
+/*
+ * Compute the final value of all aggregates for one group.
+ *
+ * This function handles only one grouping set at a time.
+ *
+ * Results are stored in the output econtext aggvalues/aggnulls.
+ */
+static void
+finalize_aggregates(AggState *aggstate,
+					AggStatePerAgg peragg,
+					AggStatePerGroup pergroup,
+					int currentSet)
+{
+	ExprContext *econtext = aggstate->ss.ps.ps_ExprContext;
+	Datum	   *aggvalues = econtext->ecxt_aggvalues;
+	bool	   *aggnulls = econtext->ecxt_aggnulls;
+	int			aggno;
+
+	Assert(currentSet == 0 ||
+		   ((Agg *) aggstate->ss.ps.plan)->aggstrategy != AGG_HASHED);
+
+	aggstate->current_set = currentSet;
+
+	for (aggno = 0; aggno < aggstate->numaggs; aggno++)
+	{
+		AggStatePerAgg peraggstate = &peragg[aggno];
+		AggStatePerGroup pergroupstate;
+
+		pergroupstate = &pergroup[aggno + (currentSet * (aggstate->numaggs))];
+
+		if (peraggstate->numSortCols > 0)
+		{
+			Assert(((Agg *) aggstate->ss.ps.plan)->aggstrategy != AGG_HASHED);
+
+			if (peraggstate->numInputs == 1)
+				process_ordered_aggregate_single(aggstate,
+												 peraggstate,
+												 pergroupstate);
+			else
+				process_ordered_aggregate_multi(aggstate,
+												peraggstate,
+												pergroupstate);
+		}
+
+		finalize_aggregate(aggstate, peraggstate, pergroupstate,
+						   &aggvalues[aggno], &aggnulls[aggno]);
+	}
+}
+
+/*
+ * Project the result of a group (whose aggs have already been calculated by
+ * finalize_aggregates). Returns the result slot, or NULL if no row is
+ * projected (suppressed by qual or by an empty SRF).
+ */
+static TupleTableSlot *
+project_aggregates(AggState *aggstate)
+{
+	ExprContext *econtext = aggstate->ss.ps.ps_ExprContext;
+
+	/*
+	 * Check the qual (HAVING clause); if the group does not match, ignore
+	 * it.
+	 */
+	if (ExecQual(aggstate->ss.ps.qual, econtext, false))
+	{
+		/*
+		 * Form and return or store a projection tuple using the aggregate
+		 * results and the representative input tuple.
+		 */
+		ExprDoneCond isDone;
+		TupleTableSlot *result;
+
+		result = ExecProject(aggstate->ss.ps.ps_ProjInfo, &isDone);
+
+		if (isDone != ExprEndResult)
+		{
+			aggstate->ss.ps.ps_TupFromTlist =
+				(isDone == ExprMultipleResult);
+			return result;
+		}
+	}
+	else
+		InstrCountFiltered1(aggstate, 1);
+
+	return NULL;
+}
+
 /*
  * find_unaggregated_cols
  *	  Construct a bitmapset of the column numbers of un-aggregated Vars
@@ -916,8 +1285,11 @@ find_unaggregated_cols_walker(Node *node, Bitmapset **colnos)
 		*colnos = bms_add_member(*colnos, var->varattno);
 		return false;
 	}
-	if (IsA(node, Aggref))		/* do not descend into aggregate exprs */
+	if (IsA(node, Aggref) || IsA(node, GroupingFunc))
+	{
+		/* do not descend into aggregate exprs */
 		return false;
+	}
 	return expression_tree_walker(node, find_unaggregated_cols_walker,
 								  (void *) colnos);
 }
@@ -942,11 +1314,11 @@ build_hash_table(AggState *aggstate)
 
 	aggstate->hashtable = BuildTupleHashTable(node->numCols,
 											  node->grpColIdx,
-											  aggstate->eqfunctions,
+											  aggstate->phase->eqfunctions,
 											  aggstate->hashfunctions,
 											  node->numGroups,
 											  entrysize,
-											  aggstate->aggcontext,
+											  aggstate->aggcontexts[0]->ecxt_per_tuple_memory,
 											  tmpmem);
 }
 
@@ -1057,7 +1429,7 @@ lookup_hash_entry(AggState *aggstate, TupleTableSlot *inputslot)
 	if (isnew)
 	{
 		/* initialize aggregates for new tuple group */
-		initialize_aggregates(aggstate, aggstate->peragg, entry->pergroup);
+		initialize_aggregates(aggstate, aggstate->peragg, entry->pergroup, 0);
 	}
 
 	return entry;
@@ -1079,6 +1451,8 @@ lookup_hash_entry(AggState *aggstate, TupleTableSlot *inputslot)
 TupleTableSlot *
 ExecAgg(AggState *node)
 {
+	TupleTableSlot *result;
+
 	/*
 	 * Check to see if we're still projecting out tuples from a previous agg
 	 * tuple (because there is a function-returning-set in the projection
@@ -1086,7 +1460,6 @@ ExecAgg(AggState *node)
 	 */
 	if (node->ss.ps.ps_TupFromTlist)
 	{
-		TupleTableSlot *result;
 		ExprDoneCond isDone;
 
 		result = ExecProject(node->ss.ps.ps_ProjInfo, &isDone);
@@ -1097,22 +1470,30 @@ ExecAgg(AggState *node)
 	}
 
 	/*
-	 * Exit if nothing left to do.  (We must do the ps_TupFromTlist check
-	 * first, because in some cases agg_done gets set before we emit the final
-	 * aggregate tuple, and we have to finish running SRFs for it.)
+	 * We must do the ps_TupFromTlist check first, because in some cases
+	 * agg_done gets set before we emit the final aggregate tuple, and we
+	 * have to finish running SRFs for it.
 	 */
-	if (node->agg_done)
-		return NULL;
-
-	/* Dispatch based on strategy */
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+	if (!node->agg_done)
 	{
-		if (!node->table_filled)
-			agg_fill_hash_table(node);
-		return agg_retrieve_hash_table(node);
+		/* Dispatch based on strategy */
+		switch (node->phase->aggnode->aggstrategy)
+		{
+			case AGG_HASHED:
+				if (!node->table_filled)
+					agg_fill_hash_table(node);
+				result = agg_retrieve_hash_table(node);
+				break;
+			default:
+				result = agg_retrieve_direct(node);
+				break;
+		}
+
+		if (!TupIsNull(result))
+			return result;
 	}
-	else
-		return agg_retrieve_direct(node);
+
+	return NULL;
 }
 
 /*
@@ -1121,28 +1502,30 @@ ExecAgg(AggState *node)
 static TupleTableSlot *
 agg_retrieve_direct(AggState *aggstate)
 {
-	Agg		   *node = (Agg *) aggstate->ss.ps.plan;
-	PlanState  *outerPlan;
+	Agg		   *node = aggstate->phase->aggnode;
 	ExprContext *econtext;
 	ExprContext *tmpcontext;
-	Datum	   *aggvalues;
-	bool	   *aggnulls;
 	AggStatePerAgg peragg;
 	AggStatePerGroup pergroup;
 	TupleTableSlot *outerslot;
 	TupleTableSlot *firstSlot;
-	int			aggno;
+	TupleTableSlot *result;
+	bool		hasGroupingSets = aggstate->phase->numsets > 0;
+	int			numGroupingSets = Max(aggstate->phase->numsets, 1);
+	int			currentSet;
+	int			nextSetSize;
+	int			numReset;
+	int			i;
 
 	/*
 	 * get state info from node
+	 *
+	 * econtext is the per-output-tuple expression context
+	 * tmpcontext is the per-input-tuple expression context
 	 */
-	outerPlan = outerPlanState(aggstate);
-	/* econtext is the per-output-tuple expression context */
 	econtext = aggstate->ss.ps.ps_ExprContext;
-	aggvalues = econtext->ecxt_aggvalues;
-	aggnulls = econtext->ecxt_aggnulls;
-	/* tmpcontext is the per-input-tuple expression context */
 	tmpcontext = aggstate->tmpcontext;
+
 	peragg = aggstate->peragg;
 	pergroup = aggstate->pergroup;
 	firstSlot = aggstate->ss.ss_ScanTupleSlot;
@@ -1150,172 +1533,281 @@ agg_retrieve_direct(AggState *aggstate)
 	/*
 	 * We loop retrieving groups until we find one matching
 	 * aggstate->ss.ps.qual
+	 *
+	 * For grouping sets, we have the invariant that aggstate->projected_set is
+	 * either -1 (initial call) or the index (starting from 0) in gset_lengths
+	 * for the group we just completed (either by projecting a row or by
+	 * discarding it in the qual).
 	 */
 	while (!aggstate->agg_done)
 	{
 		/*
-		 * If we don't already have the first tuple of the new group, fetch it
-		 * from the outer plan.
+		 * Clear the per-output-tuple context for each group, as well as
+		 * aggcontext (which contains any pass-by-ref transvalues of the old
+		 * group).  Some aggregate functions store working state in child
+		 * contexts; those now get reset automatically without us needing to
+		 * do anything special.
+		 *
+		 * We use ReScanExprContext not just ResetExprContext because we want
+		 * any registered shutdown callbacks to be called.  That allows
+		 * aggregate functions to ensure they've cleaned up any non-memory
+		 * resources.
+		 */
+		ReScanExprContext(econtext);
+
+		/*
+		 * Determine how many grouping sets need to be reset at this boundary.
 		 */
-		if (aggstate->grp_firstTuple == NULL)
+		if (aggstate->projected_set >= 0 &&
+			aggstate->projected_set < numGroupingSets)
+			numReset = aggstate->projected_set + 1;
+		else
+			numReset = numGroupingSets;
+
+		/*
+		 * numReset can change on a phase boundary, but that's OK; we want to
+		 * reset the contexts used in _this_ phase, and later, after possibly
+		 * changing phase, initialize the right number of aggregates for the
+		 * _new_ phase.
+		 */
+
+		for (i = 0; i < numReset; i++)
+		{
+			ReScanExprContext(aggstate->aggcontexts[i]);
+		}
+
+		/*
+		 * Check if input is complete and there are no more groups to project
+		 * in this phase; move to next phase or mark as done.
+		 */
+		if (aggstate->input_done == true &&
+			aggstate->projected_set >= (numGroupingSets - 1))
 		{
-			outerslot = ExecProcNode(outerPlan);
-			if (!TupIsNull(outerslot))
+			if (aggstate->current_phase < aggstate->numphases - 1)
 			{
-				/*
-				 * Make a copy of the first input tuple; we will use this for
-				 * comparisons (in group mode) and for projection.
-				 */
-				aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
+				initialize_phase(aggstate, aggstate->current_phase + 1);
+				aggstate->input_done = false;
+				aggstate->projected_set = -1;
+				numGroupingSets = Max(aggstate->phase->numsets, 1);
+				node = aggstate->phase->aggnode;
+				numReset = numGroupingSets;
 			}
 			else
 			{
-				/* outer plan produced no tuples at all */
 				aggstate->agg_done = true;
-				/* If we are grouping, we should produce no tuples too */
-				if (node->aggstrategy != AGG_PLAIN)
-					return NULL;
+				break;
 			}
 		}
 
 		/*
-		 * Clear the per-output-tuple context for each group, as well as
-		 * aggcontext (which contains any pass-by-ref transvalues of the old
-		 * group).  We also clear any child contexts of the aggcontext; some
-		 * aggregate functions store working state in such contexts.
-		 *
-		 * We use ReScanExprContext not just ResetExprContext because we want
-		 * any registered shutdown callbacks to be called.  That allows
-		 * aggregate functions to ensure they've cleaned up any non-memory
-		 * resources.
+		 * Get the number of columns in the next grouping set after the last
+		 * projected one (if any).  This is the number of columns to compare
+		 * to see whether we have crossed the boundary of that set too.
 		 */
-		ReScanExprContext(econtext);
-
-		MemoryContextResetAndDeleteChildren(aggstate->aggcontext);
+		if (aggstate->projected_set >= 0 &&
+			aggstate->projected_set < (numGroupingSets - 1))
+			nextSetSize = aggstate->phase->gset_lengths[aggstate->projected_set + 1];
+		else
+			nextSetSize = 0;
 
-		/*
-		 * Initialize working state for a new input tuple group
+		/*-
+		 * If a subgroup for the current grouping set is present, project it.
+		 *
+		 * We have a new group if:
+		 *  - we're out of input but haven't projected all grouping sets
+		 *    (checked above)
+		 * OR
+		 *    - we already projected a row that wasn't from the last grouping
+		 *      set
+		 *    AND
+		 *    - the next grouping set has at least one grouping column (since
+		 *      empty grouping sets project only once input is exhausted)
+		 *    AND
+		 *    - the previous and pending rows differ on the grouping columns
+		 *      of the next grouping set
 		 */
-		initialize_aggregates(aggstate, peragg, pergroup);
+		if (aggstate->input_done ||
+			(node->aggstrategy == AGG_SORTED &&
+			 aggstate->projected_set != -1 &&
+			 aggstate->projected_set < (numGroupingSets - 1) &&
+			 nextSetSize > 0 &&
+			 !execTuplesMatch(econtext->ecxt_outertuple,
+							  tmpcontext->ecxt_outertuple,
+							  nextSetSize,
+							  node->grpColIdx,
+							  aggstate->phase->eqfunctions,
+							  tmpcontext->ecxt_per_tuple_memory)))
+		{
+			aggstate->projected_set += 1;
 
-		if (aggstate->grp_firstTuple != NULL)
+			Assert(aggstate->projected_set < numGroupingSets);
+			Assert(nextSetSize > 0 || aggstate->input_done);
+		}
+		else
 		{
 			/*
-			 * Store the copied first input tuple in the tuple table slot
-			 * reserved for it.  The tuple will be deleted when it is cleared
-			 * from the slot.
+			 * We no longer care what group we just projected; the next
+			 * projection will always be the first (or only) grouping set
+			 * (unless the input proves to be empty).
 			 */
-			ExecStoreTuple(aggstate->grp_firstTuple,
-						   firstSlot,
-						   InvalidBuffer,
-						   true);
-			aggstate->grp_firstTuple = NULL;	/* don't keep two pointers */
-
-			/* set up for first advance_aggregates call */
-			tmpcontext->ecxt_outertuple = firstSlot;
+			aggstate->projected_set = 0;
 
 			/*
-			 * Process each outer-plan tuple, and then fetch the next one,
-			 * until we exhaust the outer plan or cross a group boundary.
+			 * If we don't already have the first tuple of the new group,
+			 * fetch it from the outer plan.
 			 */
-			for (;;)
+			if (aggstate->grp_firstTuple == NULL)
 			{
-				advance_aggregates(aggstate, pergroup);
-
-				/* Reset per-input-tuple context after each tuple */
-				ResetExprContext(tmpcontext);
-
-				outerslot = ExecProcNode(outerPlan);
-				if (TupIsNull(outerslot))
+				outerslot = fetch_input_tuple(aggstate);
+				if (!TupIsNull(outerslot))
 				{
-					/* no more outer-plan tuples available */
-					aggstate->agg_done = true;
-					break;
+					/*
+					 * Make a copy of the first input tuple; we will use this
+					 * for comparisons (in group mode) and for projection.
+					 */
+					aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
 				}
-				/* set up for next advance_aggregates call */
-				tmpcontext->ecxt_outertuple = outerslot;
-
-				/*
-				 * If we are grouping, check whether we've crossed a group
-				 * boundary.
-				 */
-				if (node->aggstrategy == AGG_SORTED)
+				else
 				{
-					if (!execTuplesMatch(firstSlot,
-										 outerslot,
-										 node->numCols, node->grpColIdx,
-										 aggstate->eqfunctions,
-										 tmpcontext->ecxt_per_tuple_memory))
+					/* outer plan produced no tuples at all */
+					if (hasGroupingSets)
 					{
 						/*
-						 * Save the first input tuple of the next group.
+						 * If there was no input at all, we need to project
+						 * rows only if there are grouping sets of size 0.
+						 * Note that this implies that there can't be any
+						 * references to ungrouped Vars, which would otherwise
+						 * cause issues with the empty output slot.
+						 *
+						 * XXX: This is no longer true; we currently deal
+						 * with this in finalize_aggregates().
 						 */
-						aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
-						break;
+						aggstate->input_done = true;
+
+						while (aggstate->phase->gset_lengths[aggstate->projected_set] > 0)
+						{
+							aggstate->projected_set += 1;
+							if (aggstate->projected_set >= numGroupingSets)
+							{
+								/*
+								 * We can't set agg_done here because we might
+								 * have more phases to do, even though the
+								 * input is empty. So we need to restart the
+								 * whole outer loop.
+								 */
+								break;
+							}
+						}
+
+						if (aggstate->projected_set >= numGroupingSets)
+							continue;
+					}
+					else
+					{
+						aggstate->agg_done = true;
+					/* If we are grouping, we should produce no tuples either */
+						if (node->aggstrategy != AGG_PLAIN)
+							return NULL;
 					}
 				}
 			}
-		}
 
-		/*
-		 * Use the representative input tuple for any references to
-		 * non-aggregated input columns in aggregate direct args, the node
-		 * qual, and the tlist.  (If we are not grouping, and there are no
-		 * input rows at all, we will come here with an empty firstSlot ...
-		 * but if not grouping, there can't be any references to
-		 * non-aggregated input columns, so no problem.)
-		 */
-		econtext->ecxt_outertuple = firstSlot;
-
-		/*
-		 * Done scanning input tuple group. Finalize each aggregate
-		 * calculation, and stash results in the per-output-tuple context.
-		 */
-		for (aggno = 0; aggno < aggstate->numaggs; aggno++)
-		{
-			AggStatePerAgg peraggstate = &peragg[aggno];
-			AggStatePerGroup pergroupstate = &pergroup[aggno];
+			/*
+			 * Initialize working state for a new input tuple group.
+			 */
+			initialize_aggregates(aggstate, peragg, pergroup, numReset);
 
-			if (peraggstate->numSortCols > 0)
+			if (aggstate->grp_firstTuple != NULL)
 			{
-				if (peraggstate->numInputs == 1)
-					process_ordered_aggregate_single(aggstate,
-													 peraggstate,
-													 pergroupstate);
-				else
-					process_ordered_aggregate_multi(aggstate,
-													peraggstate,
-													pergroupstate);
-			}
+				/*
+				 * Store the copied first input tuple in the tuple table slot
+				 * reserved for it.  The tuple will be deleted when it is cleared
+				 * from the slot.
+				 */
+				ExecStoreTuple(aggstate->grp_firstTuple,
+							   firstSlot,
+							   InvalidBuffer,
+							   true);
+				aggstate->grp_firstTuple = NULL;	/* don't keep two pointers */
 
-			finalize_aggregate(aggstate, peraggstate, pergroupstate,
-							   &aggvalues[aggno], &aggnulls[aggno]);
-		}
+				/* set up for first advance_aggregates call */
+				tmpcontext->ecxt_outertuple = firstSlot;
 
-		/*
-		 * Check the qual (HAVING clause); if the group does not match, ignore
-		 * it and loop back to try to process another group.
-		 */
-		if (ExecQual(aggstate->ss.ps.qual, econtext, false))
-		{
-			/*
-			 * Form and return a projection tuple using the aggregate results
-			 * and the representative input tuple.
-			 */
-			TupleTableSlot *result;
-			ExprDoneCond isDone;
+				/*
+				 * Process each outer-plan tuple, and then fetch the next one,
+				 * until we exhaust the outer plan or cross a group boundary.
+				 */
+				for (;;)
+				{
+					advance_aggregates(aggstate, pergroup);
 
-			result = ExecProject(aggstate->ss.ps.ps_ProjInfo, &isDone);
+					/* Reset per-input-tuple context after each tuple */
+					ResetExprContext(tmpcontext);
 
-			if (isDone != ExprEndResult)
-			{
-				aggstate->ss.ps.ps_TupFromTlist =
-					(isDone == ExprMultipleResult);
-				return result;
+					outerslot = fetch_input_tuple(aggstate);
+					if (TupIsNull(outerslot))
+					{
+						/* no more outer-plan tuples available */
+						if (hasGroupingSets)
+						{
+							aggstate->input_done = true;
+							break;
+						}
+						else
+						{
+							aggstate->agg_done = true;
+							break;
+						}
+					}
+					/* set up for next advance_aggregates call */
+					tmpcontext->ecxt_outertuple = outerslot;
+
+					/*
+					 * If we are grouping, check whether we've crossed a group
+					 * boundary.
+					 */
+					if (node->aggstrategy == AGG_SORTED)
+					{
+						if (!execTuplesMatch(firstSlot,
+											 outerslot,
+											 node->numCols,
+											 node->grpColIdx,
+											 aggstate->phase->eqfunctions,
+											 tmpcontext->ecxt_per_tuple_memory))
+						{
+							aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot);
+							break;
+						}
+					}
+				}
 			}
+
+			/*
+			 * Use the representative input tuple for any references to
+			 * non-aggregated input columns in aggregate direct args, the node
+			 * qual, and the tlist.  (If we are not grouping, and there are no
+			 * input rows at all, we will come here with an empty firstSlot ...
+			 * but if not grouping, there can't be any references to
+			 * non-aggregated input columns, so no problem.)
+			 */
+			econtext->ecxt_outertuple = firstSlot;
 		}
-		else
-			InstrCountFiltered1(aggstate, 1);
+
+		Assert(aggstate->projected_set >= 0);
+
+		currentSet = aggstate->projected_set;
+
+		prepare_projection_slot(aggstate, econtext->ecxt_outertuple, currentSet);
+
+		finalize_aggregates(aggstate, peragg, pergroup, currentSet);
+
+		/*
+		 * If there's no row to project right now, we must continue rather
+		 * than returning NULL, since there might be more groups.
+		 */
+		result = project_aggregates(aggstate);
+		if (result)
+			return result;
 	}
 
 	/* No more groups */
@@ -1328,16 +1820,15 @@ agg_retrieve_direct(AggState *aggstate)
 static void
 agg_fill_hash_table(AggState *aggstate)
 {
-	PlanState  *outerPlan;
 	ExprContext *tmpcontext;
 	AggHashEntry entry;
 	TupleTableSlot *outerslot;
 
 	/*
 	 * get state info from node
+	 *
+	 * tmpcontext is the per-input-tuple expression context
 	 */
-	outerPlan = outerPlanState(aggstate);
-	/* tmpcontext is the per-input-tuple expression context */
 	tmpcontext = aggstate->tmpcontext;
 
 	/*
@@ -1346,7 +1837,7 @@ agg_fill_hash_table(AggState *aggstate)
 	 */
 	for (;;)
 	{
-		outerslot = ExecProcNode(outerPlan);
+		outerslot = fetch_input_tuple(aggstate);
 		if (TupIsNull(outerslot))
 			break;
 		/* set up for advance_aggregates call */
@@ -1374,21 +1865,17 @@ static TupleTableSlot *
 agg_retrieve_hash_table(AggState *aggstate)
 {
 	ExprContext *econtext;
-	Datum	   *aggvalues;
-	bool	   *aggnulls;
 	AggStatePerAgg peragg;
 	AggStatePerGroup pergroup;
 	AggHashEntry entry;
 	TupleTableSlot *firstSlot;
-	int			aggno;
+	TupleTableSlot *result;
 
 	/*
 	 * get state info from node
 	 */
 	/* econtext is the per-output-tuple expression context */
 	econtext = aggstate->ss.ps.ps_ExprContext;
-	aggvalues = econtext->ecxt_aggvalues;
-	aggnulls = econtext->ecxt_aggnulls;
 	peragg = aggstate->peragg;
 	firstSlot = aggstate->ss.ss_ScanTupleSlot;
 
@@ -1428,19 +1915,7 @@ agg_retrieve_hash_table(AggState *aggstate)
 
 		pergroup = entry->pergroup;
 
-		/*
-		 * Finalize each aggregate calculation, and stash results in the
-		 * per-output-tuple context.
-		 */
-		for (aggno = 0; aggno < aggstate->numaggs; aggno++)
-		{
-			AggStatePerAgg peraggstate = &peragg[aggno];
-			AggStatePerGroup pergroupstate = &pergroup[aggno];
-
-			Assert(peraggstate->numSortCols == 0);
-			finalize_aggregate(aggstate, peraggstate, pergroupstate,
-							   &aggvalues[aggno], &aggnulls[aggno]);
-		}
+		finalize_aggregates(aggstate, peragg, pergroup, 0);
 
 		/*
 		 * Use the representative input tuple for any references to
@@ -1448,30 +1923,9 @@ agg_retrieve_hash_table(AggState *aggstate)
 		 */
 		econtext->ecxt_outertuple = firstSlot;
 
-		/*
-		 * Check the qual (HAVING clause); if the group does not match, ignore
-		 * it and loop back to try to process another group.
-		 */
-		if (ExecQual(aggstate->ss.ps.qual, econtext, false))
-		{
-			/*
-			 * Form and return a projection tuple using the aggregate results
-			 * and the representative input tuple.
-			 */
-			TupleTableSlot *result;
-			ExprDoneCond isDone;
-
-			result = ExecProject(aggstate->ss.ps.ps_ProjInfo, &isDone);
-
-			if (isDone != ExprEndResult)
-			{
-				aggstate->ss.ps.ps_TupFromTlist =
-					(isDone == ExprMultipleResult);
-				return result;
-			}
-		}
-		else
-			InstrCountFiltered1(aggstate, 1);
+		result = project_aggregates(aggstate);
+		if (result)
+			return result;
 	}
 
 	/* No more groups */
@@ -1494,7 +1948,14 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	ExprContext *econtext;
 	int			numaggs,
 				aggno;
+	int			phase;
 	ListCell   *l;
+	Bitmapset  *all_grouped_cols = NULL;
+	int			numGroupingSets = 1;
+	int			numPhases;
+	int			currentsortno = 0;
+	int			i = 0;
+	int			j = 0;
 
 	/* check for unsupported flags */
 	Assert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK)));
@@ -1508,38 +1969,66 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 
 	aggstate->aggs = NIL;
 	aggstate->numaggs = 0;
-	aggstate->eqfunctions = NULL;
+	aggstate->maxsets = 0;
 	aggstate->hashfunctions = NULL;
+	aggstate->projected_set = -1;
+	aggstate->current_set = 0;
 	aggstate->peragg = NULL;
 	aggstate->curperagg = NULL;
 	aggstate->agg_done = false;
+	aggstate->input_done = false;
 	aggstate->pergroup = NULL;
 	aggstate->grp_firstTuple = NULL;
 	aggstate->hashtable = NULL;
+	aggstate->sort_in = NULL;
+	aggstate->sort_out = NULL;
 
 	/*
-	 * Create expression contexts.  We need two, one for per-input-tuple
-	 * processing and one for per-output-tuple processing.  We cheat a little
-	 * by using ExecAssignExprContext() to build both.
+	 * Calculate the maximum number of grouping sets in any phase; this
+	 * determines the size of some allocations.
 	 */
-	ExecAssignExprContext(estate, &aggstate->ss.ps);
-	aggstate->tmpcontext = aggstate->ss.ps.ps_ExprContext;
-	ExecAssignExprContext(estate, &aggstate->ss.ps);
+	if (node->groupingSets)
+	{
+		Assert(node->aggstrategy != AGG_HASHED);
+
+		numGroupingSets = list_length(node->groupingSets);
+
+		foreach(l, node->chain)
+		{
+			Agg	   *agg = lfirst(l);
+
+			numGroupingSets = Max(numGroupingSets, list_length(agg->groupingSets));
+		}
+	}
+
+	aggstate->maxsets = numGroupingSets;
+	aggstate->numphases = numPhases = 1 + list_length(node->chain);
+
+	aggstate->aggcontexts = (ExprContext **) palloc0(sizeof(ExprContext *) * numGroupingSets);
 
 	/*
-	 * We also need a long-lived memory context for holding hashtable data
-	 * structures and transition values.  NOTE: the details of what is stored
-	 * in aggcontext and what is stored in the regular per-query memory
-	 * context are driven by a simple decision: we want to reset the
-	 * aggcontext at group boundaries (if not hashing) and in ExecReScanAgg to
-	 * recover no-longer-wanted space.
+	 * Create expression contexts.  We need three or more, one for
+	 * per-input-tuple processing, one for per-output-tuple processing, and one
+	 * for each grouping set.  The per-tuple memory context of the
+	 * per-grouping-set ExprContexts (aggcontexts) replaces the standalone
+	 * memory context formerly used to hold transition values.  We cheat a
+	 * little by using ExecAssignExprContext() to build all of them.
+	 *
+	 * NOTE: the details of what is stored in aggcontexts and what is stored in
+	 * the regular per-query memory context are driven by a simple decision: we
+	 * want to reset the aggcontext at group boundaries (if not hashing) and in
+	 * ExecReScanAgg to recover no-longer-wanted space.
 	 */
-	aggstate->aggcontext =
-		AllocSetContextCreate(CurrentMemoryContext,
-							  "AggContext",
-							  ALLOCSET_DEFAULT_MINSIZE,
-							  ALLOCSET_DEFAULT_INITSIZE,
-							  ALLOCSET_DEFAULT_MAXSIZE);
+	ExecAssignExprContext(estate, &aggstate->ss.ps);
+	aggstate->tmpcontext = aggstate->ss.ps.ps_ExprContext;
+
+	for (i = 0; i < numGroupingSets; ++i)
+	{
+		ExecAssignExprContext(estate, &aggstate->ss.ps);
+		aggstate->aggcontexts[i] = aggstate->ss.ps.ps_ExprContext;
+	}
+
+	ExecAssignExprContext(estate, &aggstate->ss.ps);
 
 	/*
 	 * tuple table initialization
@@ -1547,6 +2036,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	ExecInitScanTupleSlot(estate, &aggstate->ss);
 	ExecInitResultTupleSlot(estate, &aggstate->ss.ps);
 	aggstate->hashslot = ExecInitExtraTupleSlot(estate);
+	aggstate->sort_slot = ExecInitExtraTupleSlot(estate);
 
 	/*
 	 * initialize child expressions
@@ -1565,7 +2055,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 					 (PlanState *) aggstate);
 
 	/*
-	 * initialize child nodes
+	 * Initialize child nodes.
 	 *
 	 * If we are doing a hashed aggregation then the child plan does not need
 	 * to handle REWIND efficiently; see ExecReScanAgg.
@@ -1579,6 +2069,9 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	 * initialize source tuple type.
 	 */
 	ExecAssignScanTypeFromOuterPlan(&aggstate->ss);
+	if (node->chain)
+		ExecSetSlotDescriptor(aggstate->sort_slot,
+							  aggstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor);
 
 	/*
 	 * Initialize result tuple type and projection info.
@@ -1606,24 +2099,105 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	}
 
 	/*
-	 * If we are grouping, precompute fmgr lookup data for inner loop. We need
-	 * both equality and hashing functions to do it by hashing, but only
-	 * equality if not hashing.
+	 * For each phase, prepare grouping set data and fmgr lookup data for
+	 * compare functions.  Accumulate all_grouped_cols in passing.
 	 */
-	if (node->numCols > 0)
+
+	aggstate->phases = palloc0(numPhases * sizeof(AggStatePerPhaseData));
+
+	for (phase = 0; phase < numPhases; ++phase)
 	{
-		if (node->aggstrategy == AGG_HASHED)
-			execTuplesHashPrepare(node->numCols,
-								  node->grpOperators,
-								  &aggstate->eqfunctions,
-								  &aggstate->hashfunctions);
+		AggStatePerPhase phasedata = &aggstate->phases[phase];
+		Agg		   *aggnode;
+		Sort	   *sortnode;
+		int			num_sets;
+
+		if (phase > 0)
+		{
+			aggnode = list_nth(node->chain, phase - 1);
+			sortnode = (Sort *) aggnode->plan.lefttree;
+			Assert(IsA(sortnode, Sort));
+		}
+		else
+		{
+			aggnode = node;
+			sortnode = NULL;
+		}
+
+		phasedata->numsets = num_sets = list_length(aggnode->groupingSets);
+
+		if (num_sets)
+		{
+			phasedata->gset_lengths = palloc(num_sets * sizeof(int));
+			phasedata->grouped_cols = palloc(num_sets * sizeof(Bitmapset *));
+
+			i = 0;
+			foreach(l, aggnode->groupingSets)
+			{
+				int current_length = list_length(lfirst(l));
+				Bitmapset *cols = NULL;
+
+				/* planner forces this to be correct */
+				for (j = 0; j < current_length; ++j)
+					cols = bms_add_member(cols, aggnode->grpColIdx[j]);
+
+				phasedata->grouped_cols[i] = cols;
+				phasedata->gset_lengths[i] = current_length;
+				++i;
+			}
+
+			all_grouped_cols = bms_add_members(all_grouped_cols,
+											   phasedata->grouped_cols[0]);
+		}
 		else
-			aggstate->eqfunctions =
-				execTuplesMatchPrepare(node->numCols,
-									   node->grpOperators);
+		{
+			Assert(phase == 0);
+
+			phasedata->gset_lengths = NULL;
+			phasedata->grouped_cols = NULL;
+		}
+
+		/*
+		 * If we are grouping, precompute fmgr lookup data for inner loop.
+		 */
+		if (aggnode->aggstrategy == AGG_SORTED)
+		{
+			Assert(aggnode->numCols > 0);
+
+			phasedata->eqfunctions =
+				execTuplesMatchPrepare(aggnode->numCols,
+									   aggnode->grpOperators);
+		}
+
+		phasedata->aggnode = aggnode;
+		phasedata->sortnode = sortnode;
 	}
 
 	/*
+	 * Convert all_grouped_cols to a descending-order list.
+	 */
+	i = -1;
+	while ((i = bms_next_member(all_grouped_cols, i)) >= 0)
+		aggstate->all_grouped_cols = lcons_int(i, aggstate->all_grouped_cols);
+
+	/*
+	 * Hashing can only appear in the initial phase.
+	 */
+
+	if (node->aggstrategy == AGG_HASHED)
+		execTuplesHashPrepare(node->numCols,
+							  node->grpOperators,
+							  &aggstate->phases[0].eqfunctions,
+							  &aggstate->hashfunctions);
+
+	/*
+	 * Initialize current phase-dependent values to initial phase
+	 */
+
+	aggstate->current_phase = 0;
+	initialize_phase(aggstate, 0);
+
+	/*
 	 * Set up aggregate-result storage in the output expr context, and also
 	 * allocate my private per-agg working storage
 	 */
@@ -1645,7 +2219,10 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	{
 		AggStatePerGroup pergroup;
 
-		pergroup = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData) * numaggs);
+		pergroup = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData)
+											  * numaggs
+											  * numGroupingSets);
+
 		aggstate->pergroup = pergroup;
 	}
 
@@ -1708,7 +2285,10 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 		/* Begin filling in the peraggstate data */
 		peraggstate->aggrefstate = aggrefstate;
 		peraggstate->aggref = aggref;
-		peraggstate->sortstate = NULL;
+		peraggstate->sortstates = (Tuplesortstate **) palloc0(sizeof(Tuplesortstate *) * numGroupingSets);
+
+		for (currentsortno = 0; currentsortno < numGroupingSets; currentsortno++)
+			peraggstate->sortstates[currentsortno] = NULL;
 
 		/* Fetch the pg_aggregate row */
 		aggTuple = SearchSysCache1(AGGFNOID,
@@ -2016,31 +2596,41 @@ ExecEndAgg(AggState *node)
 {
 	PlanState  *outerPlan;
 	int			aggno;
+	int			numGroupingSets = Max(node->maxsets, 1);
+	int			setno;
 
 	/* Make sure we have closed any open tuplesorts */
+
+	if (node->sort_in)
+		tuplesort_end(node->sort_in);
+	if (node->sort_out)
+		tuplesort_end(node->sort_out);
+
 	for (aggno = 0; aggno < node->numaggs; aggno++)
 	{
 		AggStatePerAgg peraggstate = &node->peragg[aggno];
 
-		if (peraggstate->sortstate)
-			tuplesort_end(peraggstate->sortstate);
+		for (setno = 0; setno < numGroupingSets; setno++)
+		{
+			if (peraggstate->sortstates[setno])
+				tuplesort_end(peraggstate->sortstates[setno]);
+		}
 	}
 
 	/* And ensure any agg shutdown callbacks have been called */
-	ReScanExprContext(node->ss.ps.ps_ExprContext);
+	for (setno = 0; setno < numGroupingSets; setno++)
+		ReScanExprContext(node->aggcontexts[setno]);
 
 	/*
-	 * Free both the expr contexts.
+	 * We don't actually free any ExprContexts here (see comment in
+	 * ExecFreeExprContext); just unlinking the output one from the plan
+	 * node suffices.
 	 */
 	ExecFreeExprContext(&node->ss.ps);
-	node->ss.ps.ps_ExprContext = node->tmpcontext;
-	ExecFreeExprContext(&node->ss.ps);
 
 	/* clean up tuple table */
 	ExecClearTuple(node->ss.ss_ScanTupleSlot);
 
-	MemoryContextDelete(node->aggcontext);
-
 	outerPlan = outerPlanState(node);
 	ExecEndNode(outerPlan);
 }
@@ -2050,13 +2640,16 @@ ExecReScanAgg(AggState *node)
 {
 	ExprContext *econtext = node->ss.ps.ps_ExprContext;
 	PlanState	*outerPlan = outerPlanState(node);
+	Agg		   *aggnode = (Agg *) node->ss.ps.plan;
 	int			aggno;
+	int         numGroupingSets = Max(node->maxsets, 1);
+	int         setno;
 
 	node->agg_done = false;
 
 	node->ss.ps.ps_TupFromTlist = false;
 
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+	if (aggnode->aggstrategy == AGG_HASHED)
 	{
 		/*
 		 * In the hashed case, if we haven't yet built the hash table then we
@@ -2082,14 +2675,34 @@ ExecReScanAgg(AggState *node)
 	/* Make sure we have closed any open tuplesorts */
 	for (aggno = 0; aggno < node->numaggs; aggno++)
 	{
-		AggStatePerAgg peraggstate = &node->peragg[aggno];
+		for (setno = 0; setno < numGroupingSets; setno++)
+		{
+			AggStatePerAgg peraggstate = &node->peragg[aggno];
 
-		if (peraggstate->sortstate)
-			tuplesort_end(peraggstate->sortstate);
-		peraggstate->sortstate = NULL;
+			if (peraggstate->sortstates[setno])
+			{
+				tuplesort_end(peraggstate->sortstates[setno]);
+				peraggstate->sortstates[setno] = NULL;
+			}
+		}
 	}
 
-	/* We don't need to ReScanExprContext here; ExecReScan already did it */
+	/*
+	 * We don't need to ReScanExprContext the output tuple context here;
+	 * ExecReScan already did it. But we do need to reset our per-grouping-set
+	 * contexts, which may have transvalues stored in them. (We use rescan
+	 * rather than just reset because transfns may have registered callbacks
+	 * that need to be run now.)
+	 *
+	 * Note that with AGG_HASHED, the hash table is allocated in a sub-context
+	 * of the aggcontext.  This used to be an issue, but now resetting a
+	 * context automatically deletes its sub-contexts too.
+	 */
+
+	for (setno = 0; setno < numGroupingSets; setno++)
+	{
+		ReScanExprContext(node->aggcontexts[setno]);
+	}
 
 	/* Release first tuple of group, if we have made a copy */
 	if (node->grp_firstTuple != NULL)
@@ -2097,21 +2710,13 @@ ExecReScanAgg(AggState *node)
 		heap_freetuple(node->grp_firstTuple);
 		node->grp_firstTuple = NULL;
 	}
+	ExecClearTuple(node->ss.ss_ScanTupleSlot);
 
 	/* Forget current agg values */
 	MemSet(econtext->ecxt_aggvalues, 0, sizeof(Datum) * node->numaggs);
 	MemSet(econtext->ecxt_aggnulls, 0, sizeof(bool) * node->numaggs);
 
-	/*
-	 * Release all temp storage. Note that with AGG_HASHED, the hash table is
-	 * allocated in a sub-context of the aggcontext. We're going to rebuild
-	 * the hash table from scratch, so we need to use
-	 * MemoryContextResetAndDeleteChildren() to avoid leaking the old hash
-	 * table's memory context header.
-	 */
-	MemoryContextResetAndDeleteChildren(node->aggcontext);
-
-	if (((Agg *) node->ss.ps.plan)->aggstrategy == AGG_HASHED)
+	if (aggnode->aggstrategy == AGG_HASHED)
 	{
 		/* Rebuild an empty hash table */
 		build_hash_table(node);
@@ -2123,13 +2728,15 @@ ExecReScanAgg(AggState *node)
 		 * Reset the per-group state (in particular, mark transvalues null)
 		 */
 		MemSet(node->pergroup, 0,
-			   sizeof(AggStatePerGroupData) * node->numaggs);
+			   sizeof(AggStatePerGroupData) * node->numaggs * numGroupingSets);
+
+		/* reset to phase 0 */
+		initialize_phase(node, 0);
+
+		node->input_done = false;
+		node->projected_set = -1;
 	}
 
-	/*
-	 * if chgParam of subnode is not null then plan will be re-scanned by
-	 * first ExecProcNode.
-	 */
 	if (outerPlan->chgParam == NULL)
 		ExecReScan(outerPlan);
 }
@@ -2151,8 +2758,11 @@ ExecReScanAgg(AggState *node)
  * values could conceivably appear in future.)
  *
  * If aggcontext isn't NULL, the function also stores at *aggcontext the
- * identity of the memory context that aggregate transition values are
- * being stored in.
+ * identity of the memory context that aggregate transition values are being
+ * stored in.  Note that the same aggregate call site (flinfo) may be called
+ * interleaved on different transition values in different contexts, so it's
+ * not kosher to cache aggcontext under fn_extra.  It is, however, kosher to
+ * cache it in the transvalue itself (for internal-type transvalues).
  */
 int
 AggCheckCallContext(FunctionCallInfo fcinfo, MemoryContext *aggcontext)
@@ -2160,7 +2770,11 @@ AggCheckCallContext(FunctionCallInfo fcinfo, MemoryContext *aggcontext)
 	if (fcinfo->context && IsA(fcinfo->context, AggState))
 	{
 		if (aggcontext)
-			*aggcontext = ((AggState *) fcinfo->context)->aggcontext;
+		{
+			AggState    *aggstate = ((AggState *) fcinfo->context);
+			ExprContext *cxt  = aggstate->aggcontexts[aggstate->current_set];
+			*aggcontext = cxt->ecxt_per_tuple_memory;
+		}
 		return AGG_CONTEXT_AGGREGATE;
 	}
 	if (fcinfo->context && IsA(fcinfo->context, WindowAggState))
@@ -2244,8 +2858,9 @@ AggRegisterCallback(FunctionCallInfo fcinfo,
 	if (fcinfo->context && IsA(fcinfo->context, AggState))
 	{
 		AggState   *aggstate = (AggState *) fcinfo->context;
+		ExprContext *cxt  = aggstate->aggcontexts[aggstate->current_set];
 
-		RegisterExprContextCallback(aggstate->ss.ps.ps_ExprContext, func, arg);
+		RegisterExprContextCallback(cxt, func, arg);
 
 		return;
 	}
diff --git a/src/backend/lib/Makefile b/src/backend/lib/Makefile
index fe4781a..2d2ba84 100644
--- a/src/backend/lib/Makefile
+++ b/src/backend/lib/Makefile
@@ -12,6 +12,7 @@ subdir = src/backend/lib
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = ilist.o binaryheap.o hyperloglog.o pairingheap.o rbtree.o stringinfo.o
+OBJS = binaryheap.o bipartite_match.o hyperloglog.o ilist.o pairingheap.o \
+       rbtree.o stringinfo.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/lib/bipartite_match.c b/src/backend/lib/bipartite_match.c
new file mode 100644
index 0000000..57d6d54
--- /dev/null
+++ b/src/backend/lib/bipartite_match.c
@@ -0,0 +1,161 @@
+/*-------------------------------------------------------------------------
+ *
+ * bipartite_match.c
+ *	  Hopcroft-Karp maximum cardinality algorithm for bipartite graphs
+ *
+ * This implementation is based on pseudocode found at:
+ *
+ * http://en.wikipedia.org/w/index.php?title=Hopcroft%E2%80%93Karp_algorithm&oldid=593898016
+ *
+ * Copyright (c) 2015, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/lib/bipartite_match.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <math.h>
+#include <limits.h>
+
+#include "lib/bipartite_match.h"
+#include "miscadmin.h"
+#include "utils/palloc.h"
+
+static bool hk_breadth_search(BipartiteMatchState *state);
+static bool hk_depth_search(BipartiteMatchState *state, int u, int depth);
+
+/*
+ * Given the size of U and V, where each is indexed 1..size, and an adjacency
+ * list, perform the matching and return the resulting state.
+ */
+BipartiteMatchState *
+BipartiteMatch(int u_size, int v_size, short **adjacency)
+{
+	BipartiteMatchState *state = palloc(sizeof(BipartiteMatchState));
+
+	Assert(u_size < SHRT_MAX);
+	Assert(v_size < SHRT_MAX);
+
+	state->u_size = u_size;
+	state->v_size = v_size;
+	state->matching = 0;
+	state->adjacency = adjacency;
+	state->pair_uv = palloc0((u_size + 1) * sizeof(short));
+	state->pair_vu = palloc0((v_size + 1) * sizeof(short));
+	state->distance = palloc((u_size + 1) * sizeof(float));
+	state->queue = palloc((u_size + 2) * sizeof(short));
+
+	while (hk_breadth_search(state))
+	{
+		int		u;
+
+		for (u = 1; u <= u_size; ++u)
+			if (state->pair_uv[u] == 0)
+				if (hk_depth_search(state, u, 1))
+					state->matching++;
+
+		CHECK_FOR_INTERRUPTS();		/* just in case */
+	}
+
+	return state;
+}
+
+/*
+ * Free a state returned by BipartiteMatch, except for the original adjacency
+ * list, which is owned by the caller. This only frees memory, so it's optional.
+ */
+void
+BipartiteMatchFree(BipartiteMatchState *state)
+{
+	/* adjacency matrix is treated as owned by the caller */
+	pfree(state->pair_uv);
+	pfree(state->pair_vu);
+	pfree(state->distance);
+	pfree(state->queue);
+	pfree(state);
+}
+
+static bool
+hk_breadth_search(BipartiteMatchState *state)
+{
+	int			usize = state->u_size;
+	short	   *queue = state->queue;
+	float	   *distance = state->distance;
+	int			qhead = 0;		/* we never enqueue any node more than once */
+	int			qtail = 0;		/* so don't have to worry about wrapping */
+	int			u;
+
+	distance[0] = INFINITY;
+
+	for (u = 1; u <= usize; ++u)
+	{
+		if (state->pair_uv[u] == 0)
+		{
+			distance[u] = 0;
+			queue[qhead++] = u;
+		}
+		else
+			distance[u] = INFINITY;
+	}
+
+	while (qtail < qhead)
+	{
+		u = queue[qtail++];
+
+		if (distance[u] < distance[0])
+		{
+			short  *u_adj = state->adjacency[u];
+			int		i = u_adj ? u_adj[0] : 0;
+
+			for (; i > 0; --i)
+			{
+				int	u_next = state->pair_vu[u_adj[i]];
+
+				if (isinf(distance[u_next]))
+				{
+					distance[u_next] = 1 + distance[u];
+					queue[qhead++] = u_next;
+					Assert(qhead <= usize+2);
+				}
+			}
+		}
+	}
+
+	return !isinf(distance[0]);
+}
+
+static bool
+hk_depth_search(BipartiteMatchState *state, int u, int depth)
+{
+	float	   *distance = state->distance;
+	short	   *pair_uv = state->pair_uv;
+	short	   *pair_vu = state->pair_vu;
+	short	   *u_adj = state->adjacency[u];
+	int			i = u_adj ? u_adj[0] : 0;
+
+	if (u == 0)
+		return true;
+
+	if ((depth % 8) == 0)
+		check_stack_depth();
+
+	for (; i > 0; --i)
+	{
+		int		v = u_adj[i];
+
+		if (distance[pair_vu[v]] == distance[u] + 1)
+		{
+			if (hk_depth_search(state, pair_vu[v], depth+1))
+			{
+				pair_vu[v] = u;
+				pair_uv[u] = v;
+				return true;
+			}
+		}
+	}
+
+	distance[u] = INFINITY;
+	return false;
+}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 25839ee..327ca67 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -823,6 +823,8 @@ _copyAgg(const Agg *from)
 		COPY_POINTER_FIELD(grpOperators, from->numCols * sizeof(Oid));
 	}
 	COPY_SCALAR_FIELD(numGroups);
+	COPY_NODE_FIELD(groupingSets);
+	COPY_NODE_FIELD(chain);
 
 	return newnode;
 }
@@ -1193,6 +1195,23 @@ _copyAggref(const Aggref *from)
 }
 
 /*
+ * _copyGroupingFunc
+ */
+static GroupingFunc *
+_copyGroupingFunc(const GroupingFunc *from)
+{
+	GroupingFunc	   *newnode = makeNode(GroupingFunc);
+
+	COPY_NODE_FIELD(args);
+	COPY_NODE_FIELD(refs);
+	COPY_NODE_FIELD(cols);
+	COPY_SCALAR_FIELD(agglevelsup);
+	COPY_LOCATION_FIELD(location);
+
+	return newnode;
+}
+
+/*
  * _copyWindowFunc
  */
 static WindowFunc *
@@ -2135,6 +2154,18 @@ _copySortGroupClause(const SortGroupClause *from)
 	return newnode;
 }
 
+static GroupingSet *
+_copyGroupingSet(const GroupingSet *from)
+{
+	GroupingSet		   *newnode = makeNode(GroupingSet);
+
+	COPY_SCALAR_FIELD(kind);
+	COPY_NODE_FIELD(content);
+	COPY_LOCATION_FIELD(location);
+
+	return newnode;
+}
+
 static WindowClause *
 _copyWindowClause(const WindowClause *from)
 {
@@ -2625,6 +2656,7 @@ _copyQuery(const Query *from)
 	COPY_NODE_FIELD(onConflict);
 	COPY_NODE_FIELD(returningList);
 	COPY_NODE_FIELD(groupClause);
+	COPY_NODE_FIELD(groupingSets);
 	COPY_NODE_FIELD(havingQual);
 	COPY_NODE_FIELD(windowClause);
 	COPY_NODE_FIELD(distinctClause);
@@ -4255,6 +4287,9 @@ copyObject(const void *from)
 		case T_Aggref:
 			retval = _copyAggref(from);
 			break;
+		case T_GroupingFunc:
+			retval = _copyGroupingFunc(from);
+			break;
 		case T_WindowFunc:
 			retval = _copyWindowFunc(from);
 			break;
@@ -4824,6 +4859,9 @@ copyObject(const void *from)
 		case T_SortGroupClause:
 			retval = _copySortGroupClause(from);
 			break;
+		case T_GroupingSet:
+			retval = _copyGroupingSet(from);
+			break;
 		case T_WindowClause:
 			retval = _copyWindowClause(from);
 			break;
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index c4b3615..8cc4566 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -208,6 +208,21 @@ _equalAggref(const Aggref *a, const Aggref *b)
 }
 
 static bool
+_equalGroupingFunc(const GroupingFunc *a, const GroupingFunc *b)
+{
+	COMPARE_NODE_FIELD(args);
+
+	/*
+	 * We must not compare the refs or cols fields.
+	 */
+
+	COMPARE_SCALAR_FIELD(agglevelsup);
+	COMPARE_LOCATION_FIELD(location);
+
+	return true;
+}
+
+static bool
 _equalWindowFunc(const WindowFunc *a, const WindowFunc *b)
 {
 	COMPARE_SCALAR_FIELD(winfnoid);
@@ -896,6 +911,7 @@ _equalQuery(const Query *a, const Query *b)
 	COMPARE_NODE_FIELD(onConflict);
 	COMPARE_NODE_FIELD(returningList);
 	COMPARE_NODE_FIELD(groupClause);
+	COMPARE_NODE_FIELD(groupingSets);
 	COMPARE_NODE_FIELD(havingQual);
 	COMPARE_NODE_FIELD(windowClause);
 	COMPARE_NODE_FIELD(distinctClause);
@@ -2426,6 +2442,16 @@ _equalSortGroupClause(const SortGroupClause *a, const SortGroupClause *b)
 }
 
 static bool
+_equalGroupingSet(const GroupingSet *a, const GroupingSet *b)
+{
+	COMPARE_SCALAR_FIELD(kind);
+	COMPARE_NODE_FIELD(content);
+	COMPARE_LOCATION_FIELD(location);
+
+	return true;
+}
+
+static bool
 _equalWindowClause(const WindowClause *a, const WindowClause *b)
 {
 	COMPARE_STRING_FIELD(name);
@@ -2662,6 +2688,9 @@ equal(const void *a, const void *b)
 		case T_Aggref:
 			retval = _equalAggref(a, b);
 			break;
+		case T_GroupingFunc:
+			retval = _equalGroupingFunc(a, b);
+			break;
 		case T_WindowFunc:
 			retval = _equalWindowFunc(a, b);
 			break;
@@ -3218,6 +3247,9 @@ equal(const void *a, const void *b)
 		case T_SortGroupClause:
 			retval = _equalSortGroupClause(a, b);
 			break;
+		case T_GroupingSet:
+			retval = _equalGroupingSet(a, b);
+			break;
 		case T_WindowClause:
 			retval = _equalWindowClause(a, b);
 			break;
diff --git a/src/backend/nodes/list.c b/src/backend/nodes/list.c
index 94cab47..a6737514 100644
--- a/src/backend/nodes/list.c
+++ b/src/backend/nodes/list.c
@@ -823,6 +823,32 @@ list_intersection(const List *list1, const List *list2)
 }
 
 /*
+ * As list_intersection but operates on lists of integers.
+ */
+List *
+list_intersection_int(const List *list1, const List *list2)
+{
+	List	   *result;
+	const ListCell *cell;
+
+	if (list1 == NIL || list2 == NIL)
+		return NIL;
+
+	Assert(IsIntegerList(list1));
+	Assert(IsIntegerList(list2));
+
+	result = NIL;
+	foreach(cell, list1)
+	{
+		if (list_member_int(list2, lfirst_int(cell)))
+			result = lappend_int(result, lfirst_int(cell));
+	}
+
+	check_list_invariants(result);
+	return result;
+}
+
+/*
  * Return a list that contains all the cells in list1 that are not in
  * list2. The returned list is freshly allocated via palloc(), but the
  * cells themselves point to the same objects as the cells of the
diff --git a/src/backend/nodes/makefuncs.c b/src/backend/nodes/makefuncs.c
index 6fdf44d..a9b58eb 100644
--- a/src/backend/nodes/makefuncs.c
+++ b/src/backend/nodes/makefuncs.c
@@ -554,3 +554,18 @@ makeFuncCall(List *name, List *args, int location)
 	n->location = location;
 	return n;
 }
+
+/*
+ * makeGroupingSet -
+ *	  create a GroupingSet node
+ */
+GroupingSet *
+makeGroupingSet(GroupingSetKind kind, List *content, int location)
+{
+	GroupingSet	   *n = makeNode(GroupingSet);
+
+	n->kind = kind;
+	n->content = content;
+	n->location = location;
+	return n;
+}
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index eac0215..01332a2 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -54,6 +54,9 @@ exprType(const Node *expr)
 		case T_Aggref:
 			type = ((const Aggref *) expr)->aggtype;
 			break;
+		case T_GroupingFunc:
+			type = INT4OID;
+			break;
 		case T_WindowFunc:
 			type = ((const WindowFunc *) expr)->wintype;
 			break;
@@ -750,6 +753,9 @@ exprCollation(const Node *expr)
 		case T_Aggref:
 			coll = ((const Aggref *) expr)->aggcollid;
 			break;
+		case T_GroupingFunc:
+			coll = InvalidOid;
+			break;
 		case T_WindowFunc:
 			coll = ((const WindowFunc *) expr)->wincollid;
 			break;
@@ -986,6 +992,9 @@ exprSetCollation(Node *expr, Oid collation)
 		case T_Aggref:
 			((Aggref *) expr)->aggcollid = collation;
 			break;
+		case T_GroupingFunc:
+			Assert(!OidIsValid(collation));
+			break;
 		case T_WindowFunc:
 			((WindowFunc *) expr)->wincollid = collation;
 			break;
@@ -1202,6 +1211,9 @@ exprLocation(const Node *expr)
 			/* function name should always be the first thing */
 			loc = ((const Aggref *) expr)->location;
 			break;
+		case T_GroupingFunc:
+			loc = ((const GroupingFunc *) expr)->location;
+			break;
 		case T_WindowFunc:
 			/* function name should always be the first thing */
 			loc = ((const WindowFunc *) expr)->location;
@@ -1491,6 +1503,9 @@ exprLocation(const Node *expr)
 			/* XMLSERIALIZE keyword should always be the first thing */
 			loc = ((const XmlSerialize *) expr)->location;
 			break;
+		case T_GroupingSet:
+			loc = ((const GroupingSet *) expr)->location;
+			break;
 		case T_WithClause:
 			loc = ((const WithClause *) expr)->location;
 			break;
@@ -1685,6 +1700,15 @@ expression_tree_walker(Node *node,
 					return true;
 			}
 			break;
+		case T_GroupingFunc:
+			{
+				GroupingFunc *grouping = (GroupingFunc *) node;
+
+				if (expression_tree_walker((Node *) grouping->args,
+										   walker, context))
+					return true;
+			}
+			break;
 		case T_WindowFunc:
 			{
 				WindowFunc *expr = (WindowFunc *) node;
@@ -2235,6 +2259,29 @@ expression_tree_mutator(Node *node,
 				return (Node *) newnode;
 			}
 			break;
+		case T_GroupingFunc:
+			{
+				GroupingFunc   *grouping = (GroupingFunc *) node;
+				GroupingFunc   *newnode;
+
+				FLATCOPY(newnode, grouping, GroupingFunc);
+				MUTATE(newnode->args, grouping->args, List *);
+
+				/*
+				 * We assume here that mutating the arguments does not change
+				 * the semantics, i.e. that the arguments are not mutated in a
+				 * way that makes them semantically different from their
+				 * previously matching expressions in the GROUP BY clause.
+				 *
+				 * If a mutator somehow wanted to do this, it would have to
+				 * handle the refs and cols lists itself as appropriate.
+				 */
+				newnode->refs = list_copy(grouping->refs);
+				newnode->cols = list_copy(grouping->cols);
+
+				return (Node *) newnode;
+			}
+			break;
 		case T_WindowFunc:
 			{
 				WindowFunc *wfunc = (WindowFunc *) node;
@@ -2946,6 +2993,8 @@ raw_expression_tree_walker(Node *node,
 			break;
 		case T_RangeVar:
 			return walker(((RangeVar *) node)->alias, context);
+		case T_GroupingFunc:
+			return walker(((GroupingFunc *) node)->args, context);
 		case T_SubLink:
 			{
 				SubLink    *sublink = (SubLink *) node;
@@ -3271,6 +3320,8 @@ raw_expression_tree_walker(Node *node,
 				/* for now, constraints are ignored */
 			}
 			break;
+		case T_GroupingSet:
+			return walker(((GroupingSet *) node)->content, context);
 		case T_LockingClause:
 			return walker(((LockingClause *) node)->lockedRels, context);
 		case T_XmlSerialize:
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index fe868b8..5033674 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -671,6 +671,9 @@ _outAgg(StringInfo str, const Agg *node)
 		appendStringInfo(str, " %u", node->grpOperators[i]);
 
 	WRITE_LONG_FIELD(numGroups);
+
+	WRITE_NODE_FIELD(groupingSets);
+	WRITE_NODE_FIELD(chain);
 }
 
 static void
@@ -996,6 +999,18 @@ _outAggref(StringInfo str, const Aggref *node)
 }
 
 static void
+_outGroupingFunc(StringInfo str, const GroupingFunc *node)
+{
+	WRITE_NODE_TYPE("GROUPINGFUNC");
+
+	WRITE_NODE_FIELD(args);
+	WRITE_NODE_FIELD(refs);
+	WRITE_NODE_FIELD(cols);
+	WRITE_INT_FIELD(agglevelsup);
+	WRITE_LOCATION_FIELD(location);
+}
+
+static void
 _outWindowFunc(StringInfo str, const WindowFunc *node)
 {
 	WRITE_NODE_TYPE("WINDOWFUNC");
@@ -2356,6 +2371,7 @@ _outQuery(StringInfo str, const Query *node)
 	WRITE_NODE_FIELD(onConflict);
 	WRITE_NODE_FIELD(returningList);
 	WRITE_NODE_FIELD(groupClause);
+	WRITE_NODE_FIELD(groupingSets);
 	WRITE_NODE_FIELD(havingQual);
 	WRITE_NODE_FIELD(windowClause);
 	WRITE_NODE_FIELD(distinctClause);
@@ -2391,6 +2407,16 @@ _outSortGroupClause(StringInfo str, const SortGroupClause *node)
 }
 
 static void
+_outGroupingSet(StringInfo str, const GroupingSet *node)
+{
+	WRITE_NODE_TYPE("GROUPINGSET");
+
+	WRITE_ENUM_FIELD(kind, GroupingSetKind);
+	WRITE_NODE_FIELD(content);
+	WRITE_LOCATION_FIELD(location);
+}
+
+static void
 _outWindowClause(StringInfo str, const WindowClause *node)
 {
 	WRITE_NODE_TYPE("WINDOWCLAUSE");
@@ -3045,6 +3071,9 @@ _outNode(StringInfo str, const void *obj)
 			case T_Aggref:
 				_outAggref(str, obj);
 				break;
+			case T_GroupingFunc:
+				_outGroupingFunc(str, obj);
+				break;
 			case T_WindowFunc:
 				_outWindowFunc(str, obj);
 				break;
@@ -3307,6 +3336,9 @@ _outNode(StringInfo str, const void *obj)
 			case T_SortGroupClause:
 				_outSortGroupClause(str, obj);
 				break;
+			case T_GroupingSet:
+				_outGroupingSet(str, obj);
+				break;
 			case T_WindowClause:
 				_outWindowClause(str, obj);
 				break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 8136306..9d58414 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -217,6 +217,7 @@ _readQuery(void)
 	READ_NODE_FIELD(onConflict);
 	READ_NODE_FIELD(returningList);
 	READ_NODE_FIELD(groupClause);
+	READ_NODE_FIELD(groupingSets);
 	READ_NODE_FIELD(havingQual);
 	READ_NODE_FIELD(windowClause);
 	READ_NODE_FIELD(distinctClause);
@@ -293,6 +294,21 @@ _readSortGroupClause(void)
 }
 
 /*
+ * _readGroupingSet
+ */
+static GroupingSet *
+_readGroupingSet(void)
+{
+	READ_LOCALS(GroupingSet);
+
+	READ_ENUM_FIELD(kind, GroupingSetKind);
+	READ_NODE_FIELD(content);
+	READ_LOCATION_FIELD(location);
+
+	READ_DONE();
+}
+
+/*
  * _readWindowClause
  */
 static WindowClause *
@@ -512,6 +528,23 @@ _readAggref(void)
 }
 
 /*
+ * _readGroupingFunc
+ */
+static GroupingFunc *
+_readGroupingFunc(void)
+{
+	READ_LOCALS(GroupingFunc);
+
+	READ_NODE_FIELD(args);
+	READ_NODE_FIELD(refs);
+	READ_NODE_FIELD(cols);
+	READ_INT_FIELD(agglevelsup);
+	READ_LOCATION_FIELD(location);
+
+	READ_DONE();
+}
+
+/*
  * _readWindowFunc
  */
 static WindowFunc *
@@ -1345,6 +1378,8 @@ parseNodeString(void)
 		return_value = _readWithCheckOption();
 	else if (MATCH("SORTGROUPCLAUSE", 15))
 		return_value = _readSortGroupClause();
+	else if (MATCH("GROUPINGSET", 11))
+		return_value = _readGroupingSet();
 	else if (MATCH("WINDOWCLAUSE", 12))
 		return_value = _readWindowClause();
 	else if (MATCH("ROWMARKCLAUSE", 13))
@@ -1367,6 +1402,8 @@ parseNodeString(void)
 		return_value = _readParam();
 	else if (MATCH("AGGREF", 6))
 		return_value = _readAggref();
+	else if (MATCH("GROUPINGFUNC", 12))
+		return_value = _readGroupingFunc();
 	else if (MATCH("WINDOWFUNC", 10))
 		return_value = _readWindowFunc();
 	else if (MATCH("ARRAYREF", 8))
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index 9caca94..0ee0a70 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -1241,6 +1241,7 @@ set_subquery_pathlist(PlannerInfo *root, RelOptInfo *rel,
 	 */
 	if (parse->hasAggs ||
 		parse->groupClause ||
+		parse->groupingSets ||
 		parse->havingQual ||
 		parse->distinctClause ||
 		parse->sortClause ||
@@ -2101,7 +2102,7 @@ subquery_push_qual(Query *subquery, RangeTblEntry *rte, Index rti, Node *qual)
 		 * subquery uses grouping or aggregation, put it in HAVING (since the
 		 * qual really refers to the group-result rows).
 		 */
-		if (subquery->hasAggs || subquery->groupClause || subquery->havingQual)
+		if (subquery->hasAggs || subquery->groupClause || subquery->groupingSets || subquery->havingQual)
 			subquery->havingQual = make_and_qual(subquery->havingQual, qual);
 		else
 			subquery->jointree->quals =
diff --git a/src/backend/optimizer/path/indxpath.c b/src/backend/optimizer/path/indxpath.c
index fdd6bab..ea5e6f7 100644
--- a/src/backend/optimizer/path/indxpath.c
+++ b/src/backend/optimizer/path/indxpath.c
@@ -1954,7 +1954,8 @@ adjust_rowcount_for_semijoins(PlannerInfo *root,
 			nraw = approximate_joinrel_size(root, sjinfo->syn_righthand);
 			nunique = estimate_num_groups(root,
 										  sjinfo->semi_rhs_exprs,
-										  nraw);
+										  nraw,
+										  NULL);
 			if (rowcount > nunique)
 				rowcount = nunique;
 		}
diff --git a/src/backend/optimizer/plan/analyzejoins.c b/src/backend/optimizer/plan/analyzejoins.c
index 11d3933..a6c1753 100644
--- a/src/backend/optimizer/plan/analyzejoins.c
+++ b/src/backend/optimizer/plan/analyzejoins.c
@@ -581,6 +581,7 @@ query_supports_distinctness(Query *query)
 {
 	if (query->distinctClause != NIL ||
 		query->groupClause != NIL ||
+		query->groupingSets != NIL ||
 		query->hasAggs ||
 		query->havingQual ||
 		query->setOperations)
@@ -649,10 +650,10 @@ query_is_distinct_for(Query *query, List *colnos, List *opids)
 	}
 
 	/*
-	 * Similarly, GROUP BY guarantees uniqueness if all the grouped columns
-	 * appear in colnos and operator semantics match.
+	 * Similarly, GROUP BY without GROUPING SETS guarantees uniqueness if all
+	 * the grouped columns appear in colnos and operator semantics match.
 	 */
-	if (query->groupClause)
+	if (query->groupClause && !query->groupingSets)
 	{
 		foreach(l, query->groupClause)
 		{
@@ -668,6 +669,27 @@ query_is_distinct_for(Query *query, List *colnos, List *opids)
 		if (l == NULL)			/* had matches for all? */
 			return true;
 	}
+	else if (query->groupingSets)
+	{
+		/*
+		 * If we have grouping sets with expressions, we probably
+		 * don't have uniqueness and analysis would be hard. Punt.
+		 */
+		if (query->groupClause)
+			return false;
+
+		/*
+		 * If we have no groupClause (therefore no grouping expressions),
+		 * we might have one or many empty grouping sets. If there's just
+		 * one, then we're returning only one row and are certainly unique.
+		 * But otherwise, we know we're certainly not unique.
+		 */
+		if (list_length(query->groupingSets) == 1 &&
+			((GroupingSet *) linitial(query->groupingSets))->kind == GROUPING_SET_EMPTY)
+			return true;
+		else
+			return false;
+	}
 	else
 	{
 		/*
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index 783e34b..27ba1e1 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -1030,6 +1030,7 @@ create_unique_plan(PlannerInfo *root, UniquePath *best_path)
 								 numGroupCols,
 								 groupColIdx,
 								 groupOperators,
+								 NIL,
 								 numGroups,
 								 subplan);
 	}
@@ -4423,6 +4424,7 @@ Agg *
 make_agg(PlannerInfo *root, List *tlist, List *qual,
 		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
 		 int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators,
+		 List *groupingSets,
 		 long numGroups,
 		 Plan *lefttree)
 {
@@ -4452,10 +4454,12 @@ make_agg(PlannerInfo *root, List *tlist, List *qual,
 	 * group otherwise.
 	 */
 	if (aggstrategy == AGG_PLAIN)
-		plan->plan_rows = 1;
+		plan->plan_rows = groupingSets ? list_length(groupingSets) : 1;
 	else
 		plan->plan_rows = numGroups;
 
+	node->groupingSets = groupingSets;
+
 	/*
 	 * We also need to account for the cost of evaluation of the qual (ie, the
 	 * HAVING clause) and the tlist.  Note that cost_qual_eval doesn't charge
@@ -4476,6 +4480,7 @@ make_agg(PlannerInfo *root, List *tlist, List *qual,
 
 	plan->qual = qual;
 	plan->targetlist = tlist;
+
 	plan->lefttree = lefttree;
 	plan->righttree = NULL;
 
diff --git a/src/backend/optimizer/plan/planagg.c b/src/backend/optimizer/plan/planagg.c
index af772a2..f0e9c05 100644
--- a/src/backend/optimizer/plan/planagg.c
+++ b/src/backend/optimizer/plan/planagg.c
@@ -96,7 +96,7 @@ preprocess_minmax_aggregates(PlannerInfo *root, List *tlist)
 	 * performs assorted processing related to these features between calling
 	 * preprocess_minmax_aggregates and optimize_minmax_aggregates.)
 	 */
-	if (parse->groupClause || parse->hasWindowFuncs)
+	if (parse->groupClause || list_length(parse->groupingSets) > 1 || parse->hasWindowFuncs)
 		return;
 
 	/*
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 8de57c8..3a0e65d 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -16,13 +16,16 @@
 #include "postgres.h"
 
 #include <limits.h>
+#include <math.h>
 
 #include "access/htup_details.h"
 #include "executor/executor.h"
 #include "executor/nodeAgg.h"
 #include "foreign/fdwapi.h"
 #include "miscadmin.h"
+#include "lib/bipartite_match.h"
 #include "nodes/makefuncs.h"
+#include "nodes/nodeFuncs.h"
 #ifdef OPTIMIZER_DEBUG
 #include "nodes/print.h"
 #endif
@@ -38,6 +41,7 @@
 #include "optimizer/tlist.h"
 #include "parser/analyze.h"
 #include "parser/parsetree.h"
+#include "parser/parse_agg.h"
 #include "rewrite/rewriteManip.h"
 #include "utils/rel.h"
 #include "utils/selfuncs.h"
@@ -66,6 +70,7 @@ typedef struct
 {
 	List	   *tlist;			/* preprocessed query targetlist */
 	List	   *activeWindows;	/* active windows, if any */
+	List	   *groupClause;	/* overrides parse->groupClause */
 } standard_qp_extra;
 
 /* Local functions */
@@ -78,7 +83,9 @@ static double preprocess_limit(PlannerInfo *root,
 				 double tuple_fraction,
 				 int64 *offset_est, int64 *count_est);
 static bool limit_needed(Query *parse);
-static void preprocess_groupclause(PlannerInfo *root);
+static List *preprocess_groupclause(PlannerInfo *root, List *force);
+static List *extract_rollup_sets(List *groupingSets);
+static List *reorder_grouping_sets(List *groupingSets, List *sortclause);
 static void standard_qp_callback(PlannerInfo *root, void *extra);
 static bool choose_hashed_grouping(PlannerInfo *root,
 					   double tuple_fraction, double limit_tuples,
@@ -114,7 +121,16 @@ static void get_column_info_for_window(PlannerInfo *root, WindowClause *wc,
 						   int *ordNumCols,
 						   AttrNumber **ordColIdx,
 						   Oid **ordOperators);
-
+static Plan *build_grouping_chain(PlannerInfo *root,
+						  Query	   *parse,
+						  List	   *tlist,
+						  bool		need_sort_for_grouping,
+						  List	   *rollup_groupclauses,
+						  List	   *rollup_lists,
+						  AttrNumber *groupColIdx,
+						  AggClauseCosts *agg_costs,
+						  long		numGroups,
+						  Plan	   *result_plan);
 
 /*****************************************************************************
  *
@@ -320,6 +336,7 @@ subquery_planner(PlannerGlobal *glob, Query *parse,
 	root->append_rel_list = NIL;
 	root->rowMarks = NIL;
 	root->hasInheritedTarget = false;
+	root->grouping_map = NULL;
 
 	root->hasRecursion = hasRecursion;
 	if (hasRecursion)
@@ -546,7 +563,8 @@ subquery_planner(PlannerGlobal *glob, Query *parse,
 
 		if (contain_agg_clause(havingclause) ||
 			contain_volatile_functions(havingclause) ||
-			contain_subplans(havingclause))
+			contain_subplans(havingclause) ||
+			parse->groupingSets)
 		{
 			/* keep it in HAVING */
 			newHaving = lappend(newHaving, havingclause);
@@ -1235,11 +1253,6 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		List	   *sub_tlist;
 		AttrNumber *groupColIdx = NULL;
 		bool		need_tlist_eval = true;
-		standard_qp_extra qp_extra;
-		RelOptInfo *final_rel;
-		Path	   *cheapest_path;
-		Path	   *sorted_path;
-		Path	   *best_path;
 		long		numGroups = 0;
 		AggClauseCosts agg_costs;
 		int			numGroupCols;
@@ -1249,15 +1262,89 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		WindowFuncLists *wflists = NULL;
 		List	   *activeWindows = NIL;
 		OnConflictExpr *onconfl;
+		int			maxref = 0;
+		int		   *tleref_to_colnum_map;
+		List	   *rollup_lists = NIL;
+		List	   *rollup_groupclauses = NIL;
+		standard_qp_extra qp_extra;
+		RelOptInfo *final_rel;
+		Path	   *cheapest_path;
+		Path	   *sorted_path;
+		Path	   *best_path;
 
 		MemSet(&agg_costs, 0, sizeof(AggClauseCosts));
 
 		/* A recursive query should always have setOperations */
 		Assert(!root->hasRecursion);
 
-		/* Preprocess GROUP BY clause, if any */
+		/* Preprocess grouping sets, if any */
+		if (parse->groupingSets)
+			parse->groupingSets = expand_grouping_sets(parse->groupingSets, -1);
+
 		if (parse->groupClause)
-			preprocess_groupclause(root);
+		{
+			ListCell   *lc;
+
+			foreach(lc, parse->groupClause)
+			{
+				SortGroupClause *gc = lfirst(lc);
+				if (gc->tleSortGroupRef > maxref)
+					maxref = gc->tleSortGroupRef;
+			}
+		}
+
+		tleref_to_colnum_map = palloc((maxref + 1) * sizeof(int));
+
+		if (parse->groupingSets)
+		{
+			ListCell   *lc;
+			ListCell   *lc2;
+			ListCell   *lc_set;
+			List	   *sets = extract_rollup_sets(parse->groupingSets);
+
+			foreach(lc_set, sets)
+			{
+				List   *current_sets = reorder_grouping_sets(lfirst(lc_set),
+													(list_length(sets) == 1
+													 ? parse->sortClause
+													 : NIL));
+				List   *groupclause = preprocess_groupclause(root, linitial(current_sets));
+				int		ref = 0;
+
+				/*
+				 * Now that we've pinned down an order for the groupClause for
+				 * this list of grouping sets, we need to remap the entries in
+				 * the grouping sets from sortgrouprefs to plain indices
+				 * (0-based) into the groupClause for this collection of
+				 * grouping sets.
+				 */
+
+				foreach(lc, groupclause)
+				{
+					SortGroupClause *gc = lfirst(lc);
+					tleref_to_colnum_map[gc->tleSortGroupRef] = ref++;
+				}
+
+				foreach(lc, current_sets)
+				{
+					foreach(lc2, (List *) lfirst(lc))
+					{
+						lfirst_int(lc2) = tleref_to_colnum_map[lfirst_int(lc2)];
+					}
+				}
+
+				rollup_lists = lcons(current_sets, rollup_lists);
+				rollup_groupclauses = lcons(groupclause, rollup_groupclauses);
+			}
+		}
+		else
+		{
+			/* Preprocess GROUP BY clause, if any */
+			if (parse->groupClause)
+				parse->groupClause = preprocess_groupclause(root, NIL);
+			rollup_groupclauses = list_make1(parse->groupClause);
+		}
+
 		numGroupCols = list_length(parse->groupClause);
 
 		/* Preprocess targetlist */
@@ -1337,6 +1424,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		 * grouping/aggregation operations.
 		 */
 		if (parse->groupClause ||
+			parse->groupingSets ||
 			parse->distinctClause ||
 			parse->hasAggs ||
 			parse->hasWindowFuncs ||
@@ -1348,6 +1436,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		/* Set up data needed by standard_qp_callback */
 		qp_extra.tlist = tlist;
 		qp_extra.activeWindows = activeWindows;
+		qp_extra.groupClause = llast(rollup_groupclauses);
 
 		/*
 		 * Generate the best unsorted and presorted paths for this Query (but
@@ -1380,9 +1469,39 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		{
 			List	   *groupExprs;
 
-			groupExprs = get_sortgrouplist_exprs(parse->groupClause,
-												 parse->targetList);
-			dNumGroups = estimate_num_groups(root, groupExprs, path_rows);
+			if (parse->groupingSets)
+			{
+				ListCell   *lc,
+						   *lc2;
+
+				dNumGroups = 0;
+
+				forboth(lc, rollup_groupclauses, lc2, rollup_lists)
+				{
+					ListCell   *lc3;
+
+					groupExprs = get_sortgrouplist_exprs(lfirst(lc),
+														 parse->targetList);
+
+					foreach(lc3, lfirst(lc2))
+					{
+						List   *gset = lfirst(lc3);
+
+						dNumGroups += estimate_num_groups(root,
+														  groupExprs,
+														  path_rows,
+														  &gset);
+					}
+				}
+			}
+			else
+			{
+				groupExprs = get_sortgrouplist_exprs(parse->groupClause,
+													 parse->targetList);
+
+				dNumGroups = estimate_num_groups(root, groupExprs, path_rows,
+												 NULL);
+			}
 
 			/*
 			 * In GROUP BY mode, an absolute LIMIT is relative to the number
@@ -1394,6 +1513,13 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 				tuple_fraction /= dNumGroups;
 
 			/*
+			 * If there's more than one grouping set, we'll have to sort the
+			 * entire input.
+			 */
+			if (list_length(rollup_lists) > 1)
+				tuple_fraction = 0.0;
+
+			/*
 			 * If both GROUP BY and ORDER BY are specified, we will need two
 			 * levels of sort --- and, therefore, certainly need to read all
 			 * the tuples --- unless ORDER BY is a subset of GROUP BY.
@@ -1408,14 +1534,17 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 									   root->group_pathkeys))
 				tuple_fraction = 0.0;
 		}
-		else if (parse->hasAggs || root->hasHavingQual)
+		else if (parse->hasAggs || root->hasHavingQual || parse->groupingSets)
 		{
 			/*
 			 * Ungrouped aggregate will certainly want to read all the tuples,
-			 * and it will deliver a single result row (so leave dNumGroups
-			 * set to 1).
+			 * and it will deliver one result row per grouping set (or a
+			 * single row if no grouping sets were given, in which case
+			 * dNumGroups is left at 1).
 			 */
 			tuple_fraction = 0.0;
+			if (parse->groupingSets)
+				dNumGroups = list_length(parse->groupingSets);
 		}
 		else if (parse->distinctClause)
 		{
@@ -1430,7 +1559,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 
 			distinctExprs = get_sortgrouplist_exprs(parse->distinctClause,
 													parse->targetList);
-			dNumGroups = estimate_num_groups(root, distinctExprs, path_rows);
+			dNumGroups = estimate_num_groups(root, distinctExprs, path_rows, NULL);
 
 			/*
 			 * Adjust tuple_fraction the same way as for GROUP BY, too.
@@ -1513,13 +1642,24 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		{
 			/*
 			 * If grouping, decide whether to use sorted or hashed grouping.
+			 * If grouping sets are present, we can currently do only sorted
+			 * grouping.
 			 */
-			use_hashed_grouping =
-				choose_hashed_grouping(root,
-									   tuple_fraction, limit_tuples,
-									   path_rows, path_width,
-									   cheapest_path, sorted_path,
-									   dNumGroups, &agg_costs);
+
+			if (parse->groupingSets)
+			{
+				use_hashed_grouping = false;
+			}
+			else
+			{
+				use_hashed_grouping =
+					choose_hashed_grouping(root,
+										   tuple_fraction, limit_tuples,
+										   path_rows, path_width,
+										   cheapest_path, sorted_path,
+										   dNumGroups, &agg_costs);
+			}
+
 			/* Also convert # groups to long int --- but 'ware overflow! */
 			numGroups = (long) Min(dNumGroups, (double) LONG_MAX);
 		}
@@ -1585,7 +1725,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 
 			/* Detect if we'll need an explicit sort for grouping */
 			if (parse->groupClause && !use_hashed_grouping &&
-			  !pathkeys_contained_in(root->group_pathkeys, current_pathkeys))
+				!pathkeys_contained_in(root->group_pathkeys, current_pathkeys))
 			{
 				need_sort_for_grouping = true;
 
@@ -1645,6 +1785,27 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 			}
 
 			/*
+			 * groupColIdx is now cast in stone, so record a mapping from
+			 * tleSortGroupRef to column index. setrefs.c needs this to
+			 * finalize GROUPING() operations.
+			 */
+
+			if (parse->groupingSets)
+			{
+				AttrNumber *grouping_map = palloc0(sizeof(AttrNumber) * (maxref + 1));
+				ListCell   *lc;
+				int			i = 0;
+
+				foreach(lc, parse->groupClause)
+				{
+					SortGroupClause *gc = lfirst(lc);
+					grouping_map[gc->tleSortGroupRef] = groupColIdx[i++];
+				}
+
+				root->grouping_map = grouping_map;
+			}
+
+			/*
 			 * Insert AGG or GROUP node if needed, plus an explicit sort step
 			 * if necessary.
 			 *
@@ -1660,52 +1821,43 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 												&agg_costs,
 												numGroupCols,
 												groupColIdx,
-									extract_grouping_ops(parse->groupClause),
+												extract_grouping_ops(parse->groupClause),
+												NIL,
 												numGroups,
 												result_plan);
 				/* Hashed aggregation produces randomly-ordered results */
 				current_pathkeys = NIL;
 			}
-			else if (parse->hasAggs)
+			else if (parse->hasAggs || (parse->groupingSets && parse->groupClause))
 			{
-				/* Plain aggregate plan --- sort if needed */
-				AggStrategy aggstrategy;
-
-				if (parse->groupClause)
-				{
-					if (need_sort_for_grouping)
-					{
-						result_plan = (Plan *)
-							make_sort_from_groupcols(root,
-													 parse->groupClause,
-													 groupColIdx,
-													 result_plan);
-						current_pathkeys = root->group_pathkeys;
-					}
-					aggstrategy = AGG_SORTED;
-
-					/*
-					 * The AGG node will not change the sort ordering of its
-					 * groups, so current_pathkeys describes the result too.
-					 */
-				}
+				/*
+				 * Output is in sorted order by group_pathkeys if, and only if,
+				 * there is a single rollup operation on a non-empty list of
+				 * grouping expressions.
+				 */
+				if (list_length(rollup_groupclauses) == 1
+					&& list_length(linitial(rollup_groupclauses)) > 0)
+					current_pathkeys = root->group_pathkeys;
 				else
-				{
-					aggstrategy = AGG_PLAIN;
-					/* Result will be only one row anyway; no sort order */
 					current_pathkeys = NIL;
-				}
 
-				result_plan = (Plan *) make_agg(root,
-												tlist,
-												(List *) parse->havingQual,
-												aggstrategy,
-												&agg_costs,
-												numGroupCols,
-												groupColIdx,
-									extract_grouping_ops(parse->groupClause),
-												numGroups,
-												result_plan);
+				result_plan = build_grouping_chain(root,
+												   parse,
+												   tlist,
+												   need_sort_for_grouping,
+												   rollup_groupclauses,
+												   rollup_lists,
+												   groupColIdx,
+												   &agg_costs,
+												   numGroups,
+												   result_plan);
+
+				/*
+				 * These lists are destroyed by build_grouping_chain, so make
+				 * sure we don't try to touch them again.
+				 */
+				rollup_groupclauses = NIL;
+				rollup_lists = NIL;
 			}
 			else if (parse->groupClause)
 			{
@@ -1736,24 +1888,45 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 												  result_plan);
 				/* The Group node won't change sort ordering */
 			}
-			else if (root->hasHavingQual)
+			else if (root->hasHavingQual || parse->groupingSets)
 			{
+				int		nrows = list_length(parse->groupingSets);
+
 				/*
-				 * No aggregates, and no GROUP BY, but we have a HAVING qual.
+				 * No aggregates, and no GROUP BY, but we have a HAVING qual or
+				 * grouping sets (which by elimination of cases above must
+				 * consist solely of empty grouping sets, since otherwise
+				 * groupClause will be non-empty).
+				 *
 				 * This is a degenerate case in which we are supposed to emit
-				 * either 0 or 1 row depending on whether HAVING succeeds.
-				 * Furthermore, there cannot be any variables in either HAVING
-				 * or the targetlist, so we actually do not need the FROM
-				 * table at all!  We can just throw away the plan-so-far and
-				 * generate a Result node.  This is a sufficiently unusual
-				 * corner case that it's not worth contorting the structure of
-				 * this routine to avoid having to generate the plan in the
-				 * first place.
+				 * either 0 or 1 row for each grouping set depending on whether
+				 * HAVING succeeds.  Furthermore, there cannot be any variables
+				 * in either HAVING or the targetlist, so we actually do not
+				 * need the FROM table at all!  We can just throw away the
+				 * plan-so-far and generate a Result node.  This is a
+				 * sufficiently unusual corner case that it's not worth
+				 * contorting the structure of this routine to avoid having to
+				 * generate the plan in the first place.
 				 */
 				result_plan = (Plan *) make_result(root,
 												   tlist,
 												   parse->havingQual,
 												   NULL);
+
+				/*
+				 * Doesn't seem worthwhile writing code to cons up a
+				 * generate_series or a values scan to emit multiple rows.
+				 * Instead just clone the result in an Append.
+				 */
+				if (nrows > 1)
+				{
+					List   *plans = list_make1(result_plan);
+
+					while (--nrows > 0)
+						plans = lappend(plans, copyObject(result_plan));
+
+					result_plan = (Plan *) make_append(plans, tlist);
+				}
 			}
 		}						/* end of non-minmax-aggregate case */
 
@@ -1919,7 +2092,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		 * result was already mostly unique).  If not, use the number of
 		 * distinct-groups calculated previously.
 		 */
-		if (parse->groupClause || root->hasHavingQual || parse->hasAggs)
+		if (parse->groupClause || parse->groupingSets || root->hasHavingQual || parse->hasAggs)
 			dNumDistinctRows = result_plan->plan_rows;
 		else
 			dNumDistinctRows = dNumGroups;
@@ -1960,6 +2133,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 								 extract_grouping_cols(parse->distinctClause,
 													result_plan->targetlist),
 								 extract_grouping_ops(parse->distinctClause),
+											NIL,
 											numDistinctRows,
 											result_plan);
 			/* Hashed aggregation produces randomly-ordered results */
@@ -2069,6 +2243,198 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 	return result_plan;
 }
 
+
+/*
+ * Given a groupclause for a collection of grouping sets, produce the
+ * corresponding groupColIdx.
+ *
+ * root->grouping_map maps the tleSortGroupRef to the actual column position in
+ * the input tuple. So we get the ref from the entries in the groupclause and
+ * look them up there.
+ */
+static AttrNumber *
+remap_groupColIdx(PlannerInfo *root, List *groupClause)
+{
+	AttrNumber *grouping_map = root->grouping_map;
+	AttrNumber *new_grpColIdx;
+	ListCell   *lc;
+	int			i;
+
+	Assert(grouping_map);
+
+	new_grpColIdx = palloc0(sizeof(AttrNumber) * list_length(groupClause));
+
+	i = 0;
+	foreach(lc, groupClause)
+	{
+		SortGroupClause *clause = lfirst(lc);
+		new_grpColIdx[i++] = grouping_map[clause->tleSortGroupRef];
+	}
+
+	return new_grpColIdx;
+}
+
+/*
+ * Build Agg and Sort nodes to implement sorted grouping with one or more
+ * grouping sets. (A plain GROUP BY or just the presence of aggregates counts
+ * for this purpose as a single grouping set; the calling code is responsible
+ * for providing a non-empty rollup_groupclauses list for such cases, though
+ * rollup_lists may be null.)
+ *
+ * The last entry in rollup_groupclauses (which is the one the input is sorted
+ * on, if at all) is the one used for the returned Agg node. Any additional
+ * rollups are attached, with corresponding sort info, to subsidiary Agg and
+ * Sort nodes attached to the side of the real Agg node; these nodes don't
+ * participate in the plan directly, but they are both a convenient way to
+ * represent the required data and a convenient way to account for the costs
+ * of execution.
+ *
+ * rollup_groupclauses and rollup_lists are destroyed by this function.
+ */
+static Plan *
+build_grouping_chain(PlannerInfo *root,
+					 Query	   *parse,
+					 List	   *tlist,
+					 bool		need_sort_for_grouping,
+					 List	   *rollup_groupclauses,
+					 List	   *rollup_lists,
+					 AttrNumber *groupColIdx,
+					 AggClauseCosts *agg_costs,
+					 long		numGroups,
+					 Plan	   *result_plan)
+{
+	AttrNumber *top_grpColIdx = groupColIdx;
+	List	   *chain = NIL;
+
+	/*
+	 * Prepare the grpColIdx for the real Agg node first, because we may need
+	 * it for sorting.
+	 */
+	if (list_length(rollup_groupclauses) > 1)
+	{
+		Assert(rollup_lists && llast(rollup_lists));
+
+		top_grpColIdx =
+			remap_groupColIdx(root, llast(rollup_groupclauses));
+	}
+
+	/*
+	 * If we need a Sort operation on the input, generate that.
+	 */
+	if (need_sort_for_grouping)
+	{
+		result_plan = (Plan *)
+			make_sort_from_groupcols(root,
+									 llast(rollup_groupclauses),
+									 top_grpColIdx,
+									 result_plan);
+	}
+
+	/*
+	 * Generate the side nodes that describe the other sort and group
+	 * operations besides the top one.
+	 */
+	while (list_length(rollup_groupclauses) > 1)
+	{
+		List	   *groupClause = linitial(rollup_groupclauses);
+		List	   *gsets = linitial(rollup_lists);
+		AttrNumber *new_grpColIdx;
+		Plan	   *sort_plan;
+		Plan	   *agg_plan;
+
+		Assert(groupClause);
+		Assert(gsets);
+
+		new_grpColIdx = remap_groupColIdx(root, groupClause);
+
+		sort_plan = (Plan *)
+			make_sort_from_groupcols(root,
+									 groupClause,
+									 new_grpColIdx,
+									 result_plan);
+
+		/*
+		 * sort_plan includes the cost of result_plan over again, which is not
+		 * what we want (since it's not actually running that plan). So correct
+		 * the cost figures.
+		 */
+
+		sort_plan->startup_cost -= result_plan->total_cost;
+		sort_plan->total_cost -= result_plan->total_cost;
+
+		agg_plan = (Plan *) make_agg(root,
+									 tlist,
+									 (List *) parse->havingQual,
+									 AGG_SORTED,
+									 agg_costs,
+									 list_length(linitial(gsets)),
+									 new_grpColIdx,
+									 extract_grouping_ops(groupClause),
+									 gsets,
+									 numGroups,
+									 sort_plan);
+
+		sort_plan->lefttree = NULL;
+
+		chain = lappend(chain, agg_plan);
+
+		if (rollup_lists)
+			rollup_lists = list_delete_first(rollup_lists);
+
+		rollup_groupclauses = list_delete_first(rollup_groupclauses);
+	}
+
+	/*
+	 * Now make the final Agg node
+	 */
+	{
+		List	   *groupClause = linitial(rollup_groupclauses);
+		List	   *gsets = rollup_lists ? linitial(rollup_lists) : NIL;
+		int			numGroupCols;
+		ListCell   *lc;
+
+		if (gsets)
+			numGroupCols = list_length(linitial(gsets));
+		else
+			numGroupCols = list_length(parse->groupClause);
+
+		result_plan = (Plan *) make_agg(root,
+										tlist,
+										(List *) parse->havingQual,
+										(numGroupCols > 0) ? AGG_SORTED : AGG_PLAIN,
+										agg_costs,
+										numGroupCols,
+										top_grpColIdx,
+										extract_grouping_ops(groupClause),
+										gsets,
+										numGroups,
+										result_plan);
+
+		((Agg *) result_plan)->chain = chain;
+
+		/*
+		 * Add the additional costs. But only the total costs count, since the
+		 * additional sorts aren't run on startup.
+		 */
+		foreach(lc, chain)
+		{
+			Plan   *subplan = lfirst(lc);
+
+			result_plan->total_cost += subplan->total_cost;
+
+			/*
+			 * Nuke stuff we don't need, to avoid bloating debug output.
+			 */
+
+			subplan->targetlist = NIL;
+			subplan->qual = NIL;
+			subplan->lefttree->targetlist = NIL;
+		}
+	}
+
+	return result_plan;
+}
+
 /*
  * add_tlist_costs_to_plan
  *
@@ -2629,19 +2995,38 @@ limit_needed(Query *parse)
  *
  * Note: we need no comparable processing of the distinctClause because
  * the parser already enforced that that matches ORDER BY.
+ *
+ * For grouping sets, the order of items is instead forced to agree with that
+ * of the grouping set (and items not in the grouping set are skipped). The
+ * work of sorting the order of grouping set elements to match the ORDER BY if
+ * possible is done elsewhere.
  */
-static void
-preprocess_groupclause(PlannerInfo *root)
+static List *
+preprocess_groupclause(PlannerInfo *root, List *force)
 {
 	Query	   *parse = root->parse;
-	List	   *new_groupclause;
+	List	   *new_groupclause = NIL;
 	bool		partial_match;
 	ListCell   *sl;
 	ListCell   *gl;
 
+	/* For grouping sets, we need to force the ordering */
+	if (force)
+	{
+		foreach(sl, force)
+		{
+			Index ref = lfirst_int(sl);
+			SortGroupClause *cl = get_sortgroupref_clause(ref, parse->groupClause);
+
+			new_groupclause = lappend(new_groupclause, cl);
+		}
+
+		return new_groupclause;
+	}
+
 	/* If no ORDER BY, nothing useful to do here */
 	if (parse->sortClause == NIL)
-		return;
+		return parse->groupClause;
 
 	/*
 	 * Scan the ORDER BY clause and construct a list of matching GROUP BY
@@ -2649,7 +3034,6 @@ preprocess_groupclause(PlannerInfo *root)
 	 *
 	 * This code assumes that the sortClause contains no duplicate items.
 	 */
-	new_groupclause = NIL;
 	foreach(sl, parse->sortClause)
 	{
 		SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
@@ -2673,7 +3057,7 @@ preprocess_groupclause(PlannerInfo *root)
 
 	/* If no match at all, no point in reordering GROUP BY */
 	if (new_groupclause == NIL)
-		return;
+		return parse->groupClause;
 
 	/*
 	 * Add any remaining GROUP BY items to the new list, but only if we were
@@ -2690,15 +3074,290 @@ preprocess_groupclause(PlannerInfo *root)
 		if (list_member_ptr(new_groupclause, gc))
 			continue;			/* it matched an ORDER BY item */
 		if (partial_match)
-			return;				/* give up, no common sort possible */
+			return parse->groupClause;	/* give up, no common sort possible */
 		if (!OidIsValid(gc->sortop))
-			return;				/* give up, GROUP BY can't be sorted */
+			return parse->groupClause;	/* give up, GROUP BY can't be sorted */
 		new_groupclause = lappend(new_groupclause, gc);
 	}
 
 	/* Success --- install the rearranged GROUP BY list */
 	Assert(list_length(parse->groupClause) == list_length(new_groupclause));
-	parse->groupClause = new_groupclause;
+	return new_groupclause;
+}
+
+/*
+ * Extract lists of grouping sets that can be implemented using a single
+ * rollup-type aggregate pass each. Returns a list of lists of grouping sets.
+ *
+ * Input must be sorted with smallest sets first. Result has each sublist
+ * sorted with smallest sets first.
+ *
+ * We want to produce the absolute minimum possible number of lists here to
+ * avoid excess sorts. Fortunately, there is an algorithm for this; the problem
+ * of finding the minimal partition of a partially-ordered set into chains
+ * (which is what we need, taking the list of grouping sets as a poset ordered
+ * by set inclusion) can be mapped to the problem of finding the maximum
+ * cardinality matching on a bipartite graph, which is solvable in polynomial
+ * time with a worst case no worse than O(n^2.5) and usually much better.
+ * Since our N is at most 4096, we don't need to consider fallbacks to
+ * heuristic or approximate methods.  (Planning time for a 12-d cube is under
+ * half a second on my modest system even with optimization off and assertions
+ * on.)
+ */
+static List *
+extract_rollup_sets(List *groupingSets)
+{
+	int			num_sets_raw = list_length(groupingSets);
+	int			num_empty = 0;
+	int			num_sets = 0;		/* distinct sets */
+	int			num_chains = 0;
+	List	   *result = NIL;
+	List	  **results;
+	List	  **orig_sets;
+	Bitmapset **set_masks;
+	int		   *chains;
+	short	  **adjacency;
+	short	   *adjacency_buf;
+	BipartiteMatchState *state;
+	int			i;
+	int			j;
+	int			j_size;
+	ListCell   *lc1 = list_head(groupingSets);
+	ListCell   *lc;
+
+	/*
+	 * Start by stripping out empty sets.  The algorithm doesn't require this,
+	 * but the planner currently needs all empty sets to be returned in the
+	 * first list, so we strip them here and add them back after.
+	 */
+	while (lc1 && lfirst(lc1) == NIL)
+	{
+		++num_empty;
+		lc1 = lnext(lc1);
+	}
+
+	/* bail out now if it turns out that all we had were empty sets. */
+	if (!lc1)
+		return list_make1(groupingSets);
+
+	/*
+	 * We don't strictly need to remove duplicate sets here, but if we
+	 * don't, they tend to become scattered through the result, which is
+	 * a bit confusing (and irritating if we ever decide to optimize them
+	 * out). So we remove them here and add them back after.
+	 *
+	 * For each non-duplicate set, we fill in the following:
+	 *
+	 * orig_sets[i] = list of the original set lists
+	 * set_masks[i] = bitmapset for testing inclusion
+	 * adjacency[i] = array [n, v1, v2, ... vn] of adjacency indices
+	 *
+	 * chains[i] will be the result group this set is assigned to.
+	 *
+	 * We index all of these from 1 rather than 0 because it is convenient
+	 * to leave 0 free for the NIL node in the graph algorithm.
+	 */
+	orig_sets = palloc0((num_sets_raw + 1) * sizeof(List*));
+	set_masks = palloc0((num_sets_raw + 1) * sizeof(Bitmapset *));
+	adjacency = palloc0((num_sets_raw + 1) * sizeof(short *));
+	adjacency_buf = palloc((num_sets_raw + 1) * sizeof(short));
+
+	j_size = 0;
+	j = 0;
+	i = 1;
+
+	for_each_cell(lc, lc1)
+	{
+		List	   *candidate = lfirst(lc);
+		Bitmapset  *candidate_set = NULL;
+		ListCell   *lc2;
+		int			dup_of = 0;
+
+		foreach(lc2, candidate)
+		{
+			candidate_set = bms_add_member(candidate_set, lfirst_int(lc2));
+		}
+
+		/* we can only be a dup if we're the same length as a previous set */
+		if (j_size == list_length(candidate))
+		{
+			int		k;
+			for (k = j; k < i; ++k)
+			{
+				if (bms_equal(set_masks[k], candidate_set))
+				{
+					dup_of = k;
+					break;
+				}
+			}
+		}
+		else if (j_size < list_length(candidate))
+		{
+			j_size = list_length(candidate);
+			j = i;
+		}
+
+		if (dup_of > 0)
+		{
+			orig_sets[dup_of] = lappend(orig_sets[dup_of], candidate);
+			bms_free(candidate_set);
+		}
+		else
+		{
+			int		k;
+			int		n_adj = 0;
+
+			orig_sets[i] = list_make1(candidate);
+			set_masks[i] = candidate_set;
+
+			/* fill in adjacency list; no need to compare equal-size sets */
+
+			for (k = j - 1; k > 0; --k)
+			{
+				if (bms_is_subset(set_masks[k], candidate_set))
+					adjacency_buf[++n_adj] = k;
+			}
+
+			if (n_adj > 0)
+			{
+				adjacency_buf[0] = n_adj;
+				adjacency[i] = palloc((n_adj + 1) * sizeof(short));
+				memcpy(adjacency[i], adjacency_buf, (n_adj + 1) * sizeof(short));
+			}
+			else
+				adjacency[i] = NULL;
+
+			++i;
+		}
+	}
+
+	num_sets = i - 1;
+
+	/*
+	 * Apply the graph matching algorithm to do the work.
+	 */
+	state = BipartiteMatch(num_sets, num_sets, adjacency);
+
+	/*
+	 * Now, the state->pair* fields have the info we need to assign sets to
+	 * chains. Two sets (u,v) belong to the same chain if pair_uv[u] = v or
+	 * pair_vu[v] = u (both will be true, but we check both so that we can do
+	 * it in one pass).
+	 */
+	chains = palloc0((num_sets + 1) * sizeof(int));
+
+	for (i = 1; i <= num_sets; ++i)
+	{
+		int u = state->pair_vu[i];
+		int v = state->pair_uv[i];
+
+		if (u > 0 && u < i)
+			chains[i] = chains[u];
+		else if (v > 0 && v < i)
+			chains[i] = chains[v];
+		else
+			chains[i] = ++num_chains;
+	}
+
+	/* build result lists. */
+	results = palloc0((num_chains + 1) * sizeof(List*));
+
+	for (i = 1; i <= num_sets; ++i)
+	{
+		int c = chains[i];
+
+		Assert(c > 0);
+
+		results[c] = list_concat(results[c], orig_sets[i]);
+	}
+
+	/* push any empty sets back on the first list. */
+	while (num_empty-- > 0)
+		results[1] = lcons(NIL, results[1]);
+
+	/* make result list */
+	for (i = 1; i <= num_chains; ++i)
+		result = lappend(result, results[i]);
+
+	/*
+	 * Free all the things.
+	 *
+	 * (This is over-fussy for small sets but for large sets we could have
+	 * tied up a nontrivial amount of memory.)
+	 */
+	BipartiteMatchFree(state);
+	pfree(results);
+	pfree(chains);
+	for (i = 1; i <= num_sets; ++i)
+		if (adjacency[i])
+			pfree(adjacency[i]);
+	pfree(adjacency);
+	pfree(adjacency_buf);
+	pfree(orig_sets);
+	for (i = 1; i <= num_sets; ++i)
+		bms_free(set_masks[i]);
+	pfree(set_masks);
+
+	return result;
+}
+
+/*
+ * Reorder the elements of a list of grouping sets such that they have correct
+ * prefix relationships.
+ *
+ * The input must be ordered with smallest sets first; the result is returned
+ * with largest sets first.
+ *
+ * If we're passed in a sortclause, we follow its order of columns to the
+ * extent possible, to minimize the chance that we add unnecessary sorts.
+ * (We're trying here to ensure that GROUPING SETS ((a,b,c),(c)) ORDER BY c,b,a
+ * gets implemented in one pass.)
+ */
+static List *
+reorder_grouping_sets(List *groupingsets, List *sortclause)
+{
+	ListCell   *lc;
+	ListCell   *lc2;
+	List	   *previous = NIL;
+	List	   *result = NIL;
+
+	foreach(lc, groupingsets)
+	{
+		List   *candidate = lfirst(lc);
+		List   *new_elems = list_difference_int(candidate, previous);
+
+		if (list_length(new_elems) > 0)
+		{
+			while (list_length(sortclause) > list_length(previous))
+			{
+				SortGroupClause *sc = list_nth(sortclause, list_length(previous));
+				int ref = sc->tleSortGroupRef;
+				if (list_member_int(new_elems, ref))
+				{
+					previous = lappend_int(previous, ref);
+					new_elems = list_delete_int(new_elems, ref);
+				}
+				else
+				{
+					/* diverged from the sortclause; give up on it */
+					sortclause = NIL;
+					break;
+				}
+			}
+
+			foreach(lc2, new_elems)
+			{
+				previous = lappend_int(previous, lfirst_int(lc2));
+			}
+		}
+
+		result = lcons(list_copy(previous), result);
+		list_free(new_elems);
+	}
+
+	list_free(previous);
+
+	return result;
 }
 
 /*
@@ -2717,11 +3376,11 @@ standard_qp_callback(PlannerInfo *root, void *extra)
 	 * sortClause is certainly sort-able, but GROUP BY and DISTINCT might not
 	 * be, in which case we just leave their pathkeys empty.
 	 */
-	if (parse->groupClause &&
-		grouping_is_sortable(parse->groupClause))
+	if (qp_extra->groupClause &&
+		grouping_is_sortable(qp_extra->groupClause))
 		root->group_pathkeys =
 			make_pathkeys_for_sortclauses(root,
-										  parse->groupClause,
+										  qp_extra->groupClause,
 										  tlist);
 	else
 		root->group_pathkeys = NIL;
@@ -3146,7 +3805,7 @@ make_subplanTargetList(PlannerInfo *root,
 	 * If we're not grouping or aggregating, there's nothing to do here;
 	 * query_planner should receive the unmodified target list.
 	 */
-	if (!parse->hasAggs && !parse->groupClause && !root->hasHavingQual &&
+	if (!parse->hasAggs && !parse->groupClause && !parse->groupingSets && !root->hasHavingQual &&
 		!parse->hasWindowFuncs)
 	{
 		*need_tlist_eval = true;
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index 517409d..ba3ff57 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -140,7 +140,6 @@ static bool fix_opfuncids_walker(Node *node, void *context);
 static bool extract_query_dependencies_walker(Node *node,
 								  PlannerInfo *context);
 
-
 /*****************************************************************************
  *
  *		SUBPLAN REFERENCES
@@ -645,6 +644,8 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
 			}
 			break;
 		case T_Agg:
+			set_upper_references(root, plan, rtoffset);
+			break;
 		case T_Group:
 			set_upper_references(root, plan, rtoffset);
 			break;
@@ -1218,6 +1219,7 @@ copyVar(Var *var)
  * We must look up operator opcode info for OpExpr and related nodes,
  * add OIDs from regclass Const nodes into root->glob->relationOids, and
  * add catalog TIDs for user-defined functions into root->glob->invalItems.
+ * We also fill in column index lists for GROUPING() expressions.
  *
  * We assume it's okay to update opcode info in-place.  So this could possibly
  * scribble on the planner's input data structures, but it's OK.
@@ -1281,6 +1283,31 @@ fix_expr_common(PlannerInfo *root, Node *node)
 				lappend_oid(root->glob->relationOids,
 							DatumGetObjectId(con->constvalue));
 	}
+	else if (IsA(node, GroupingFunc))
+	{
+		GroupingFunc *g = (GroupingFunc *) node;
+		AttrNumber *grouping_map = root->grouping_map;
+
+		/* If there are no grouping sets, we don't need this. */
+
+		Assert(grouping_map || g->cols == NIL);
+
+		if (grouping_map)
+		{
+			ListCell   *lc;
+			List	   *cols = NIL;
+
+			foreach(lc, g->refs)
+			{
+				cols = lappend_int(cols, grouping_map[lfirst_int(lc)]);
+			}
+
+			Assert(!g->cols || equal(cols, g->cols));
+
+			if (!g->cols)
+				g->cols = cols;
+		}
+	}
 }
 
 /*
@@ -2175,6 +2202,7 @@ set_returning_clause_references(PlannerInfo *root,
 	return rlist;
 }
 
+
 /*****************************************************************************
  *					OPERATOR REGPROC LOOKUP
  *****************************************************************************/
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index afccee5..00da91a 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -336,6 +336,48 @@ replace_outer_agg(PlannerInfo *root, Aggref *agg)
 }
 
 /*
+ * Generate a Param node to replace the given GroupingFunc expression, which
+ * is expected to have agglevelsup > 0 (i.e., it is not local).
+ */
+static Param *
+replace_outer_grouping(PlannerInfo *root, GroupingFunc *grp)
+{
+	Param	   *retval;
+	PlannerParamItem *pitem;
+	Index		levelsup;
+
+	Assert(grp->agglevelsup > 0 && grp->agglevelsup < root->query_level);
+
+	/* Find the query level the GroupingFunc belongs to */
+	for (levelsup = grp->agglevelsup; levelsup > 0; levelsup--)
+		root = root->parent_root;
+
+	/*
+	 * It does not seem worthwhile to try to match duplicate outer aggs. Just
+	 * make a new slot every time.
+	 */
+	grp = (GroupingFunc *) copyObject(grp);
+	IncrementVarSublevelsUp((Node *) grp, -((int) grp->agglevelsup), 0);
+	Assert(grp->agglevelsup == 0);
+
+	pitem = makeNode(PlannerParamItem);
+	pitem->item = (Node *) grp;
+	pitem->paramId = root->glob->nParamExec++;
+
+	root->plan_params = lappend(root->plan_params, pitem);
+
+	retval = makeNode(Param);
+	retval->paramkind = PARAM_EXEC;
+	retval->paramid = pitem->paramId;
+	retval->paramtype = exprType((Node *) grp);
+	retval->paramtypmod = -1;
+	retval->paramcollid = InvalidOid;
+	retval->location = grp->location;
+
+	return retval;
+}
+
+/*
  * Generate a new Param node that will not conflict with any other.
  *
  * This is used to create Params representing subplan outputs.
@@ -1494,14 +1536,16 @@ simplify_EXISTS_query(PlannerInfo *root, Query *query)
 {
 	/*
 	 * We don't try to simplify at all if the query uses set operations,
-	 * aggregates, modifying CTEs, HAVING, OFFSET, or FOR UPDATE/SHARE; none
-	 * of these seem likely in normal usage and their possible effects are
-	 * complex.  (Note: we could ignore an "OFFSET 0" clause, but that
-	 * traditionally is used as an optimization fence, so we don't.)
+	 * aggregates, grouping sets, modifying CTEs, HAVING, OFFSET, or FOR
+	 * UPDATE/SHARE; none of these seem likely in normal usage and their
+	 * possible effects are complex.  (Note: we could ignore an "OFFSET 0"
+	 * clause, but that traditionally is used as an optimization fence, so we
+	 * don't.)
 	 */
 	if (query->commandType != CMD_SELECT ||
 		query->setOperations ||
 		query->hasAggs ||
+		query->groupingSets ||
 		query->hasWindowFuncs ||
 		query->hasModifyingCTE ||
 		query->havingQual ||
@@ -1851,6 +1895,11 @@ replace_correlation_vars_mutator(Node *node, PlannerInfo *root)
 		if (((Aggref *) node)->agglevelsup > 0)
 			return (Node *) replace_outer_agg(root, (Aggref *) node);
 	}
+	if (IsA(node, GroupingFunc))
+	{
+		if (((GroupingFunc *) node)->agglevelsup > 0)
+			return (Node *) replace_outer_grouping(root, (GroupingFunc *) node);
+	}
 	return expression_tree_mutator(node,
 								   replace_correlation_vars_mutator,
 								   (void *) root);
diff --git a/src/backend/optimizer/prep/prepjointree.c b/src/backend/optimizer/prep/prepjointree.c
index 4f0dc80..92b0562 100644
--- a/src/backend/optimizer/prep/prepjointree.c
+++ b/src/backend/optimizer/prep/prepjointree.c
@@ -1412,6 +1412,7 @@ is_simple_subquery(Query *subquery, RangeTblEntry *rte,
 	if (subquery->hasAggs ||
 		subquery->hasWindowFuncs ||
 		subquery->groupClause ||
+		subquery->groupingSets ||
 		subquery->havingQual ||
 		subquery->sortClause ||
 		subquery->distinctClause ||
diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c
index 5859748..8884fb1 100644
--- a/src/backend/optimizer/prep/prepunion.c
+++ b/src/backend/optimizer/prep/prepunion.c
@@ -268,13 +268,15 @@ recurse_set_operations(Node *setOp, PlannerInfo *root,
 		 */
 		if (pNumGroups)
 		{
-			if (subquery->groupClause || subquery->distinctClause ||
+			if (subquery->groupClause || subquery->groupingSets ||
+				subquery->distinctClause ||
 				subroot->hasHavingQual || subquery->hasAggs)
 				*pNumGroups = subplan->plan_rows;
 			else
 				*pNumGroups = estimate_num_groups(subroot,
 								get_tlist_exprs(subquery->targetList, false),
-												  subplan->plan_rows);
+												  subplan->plan_rows,
+												  NULL);
 		}
 
 		/*
@@ -771,6 +773,7 @@ make_union_unique(SetOperationStmt *op, Plan *plan,
 								 extract_grouping_cols(groupList,
 													   plan->targetlist),
 								 extract_grouping_ops(groupList),
+								 NIL,
 								 numGroups,
 								 plan);
 		/* Hashed aggregation produces randomly-ordered results */
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 480114d..86585c5 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -4353,6 +4353,7 @@ inline_function(Oid funcid, Oid result_type, Oid result_collid,
 		querytree->jointree->fromlist ||
 		querytree->jointree->quals ||
 		querytree->groupClause ||
+		querytree->groupingSets ||
 		querytree->havingQual ||
 		querytree->windowClause ||
 		querytree->distinctClause ||
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index faca30b..9190f84 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1194,7 +1194,8 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
 	/* Estimate number of output rows */
 	pathnode->path.rows = estimate_num_groups(root,
 											  sjinfo->semi_rhs_exprs,
-											  rel->rows);
+											  rel->rows,
+											  NULL);
 	numCols = list_length(sjinfo->semi_rhs_exprs);
 
 	if (sjinfo->semi_can_btree)
diff --git a/src/backend/optimizer/util/tlist.c b/src/backend/optimizer/util/tlist.c
index a1a504b..f702b8c 100644
--- a/src/backend/optimizer/util/tlist.c
+++ b/src/backend/optimizer/util/tlist.c
@@ -395,6 +395,28 @@ get_sortgrouplist_exprs(List *sgClauses, List *targetList)
  *****************************************************************************/
 
 /*
+ * get_sortgroupref_clause
+ *		Find the SortGroupClause matching the given SortGroupRef index,
+ *		and return it.
+ */
+SortGroupClause *
+get_sortgroupref_clause(Index sortref, List *clauses)
+{
+	ListCell   *l;
+
+	foreach(l, clauses)
+	{
+		SortGroupClause *cl = (SortGroupClause *) lfirst(l);
+
+		if (cl->tleSortGroupRef == sortref)
+			return cl;
+	}
+
+	elog(ERROR, "ORDER/GROUP BY expression not found in list");
+	return NULL;				/* keep compiler quiet */
+}
+
+/*
  * extract_grouping_ops - make an array of the equality operator OIDs
  *		for a SortGroupClause list
  */
diff --git a/src/backend/optimizer/util/var.c b/src/backend/optimizer/util/var.c
index 8f86432..0f25539 100644
--- a/src/backend/optimizer/util/var.c
+++ b/src/backend/optimizer/util/var.c
@@ -564,6 +564,30 @@ pull_var_clause_walker(Node *node, pull_var_clause_context *context)
 				break;
 		}
 	}
+	else if (IsA(node, GroupingFunc))
+	{
+		if (((GroupingFunc *) node)->agglevelsup != 0)
+			elog(ERROR, "Upper-level GROUPING found where not expected");
+		switch (context->aggbehavior)
+		{
+			case PVC_REJECT_AGGREGATES:
+				elog(ERROR, "GROUPING found where not expected");
+				break;
+			case PVC_INCLUDE_AGGREGATES:
+				context->varlist = lappend(context->varlist, node);
+				/* we do NOT descend into the contained expression */
+				return false;
+			case PVC_RECURSE_AGGREGATES:
+				/*
+				 * we do NOT descend into the contained expression,
+				 * even if the caller asked for it, because we never
+				 * actually evaluate it - the result is driven entirely
+				 * off the associated GROUP BY clause, so we never need
+				 * to extract the actual Vars here.
+				 */
+				return false;
+		}
+	}
 	else if (IsA(node, PlaceHolderVar))
 	{
 		if (((PlaceHolderVar *) node)->phlevelsup != 0)
diff --git a/src/backend/parser/analyze.c b/src/backend/parser/analyze.c
index 3eb4fea..82c9abf 100644
--- a/src/backend/parser/analyze.c
+++ b/src/backend/parser/analyze.c
@@ -1060,6 +1060,7 @@ transformSelectStmt(ParseState *pstate, SelectStmt *stmt)
 
 	qry->groupClause = transformGroupClause(pstate,
 											stmt->groupClause,
+											&qry->groupingSets,
 											&qry->targetList,
 											qry->sortClause,
 											EXPR_KIND_GROUP_BY,
@@ -1106,7 +1107,7 @@ transformSelectStmt(ParseState *pstate, SelectStmt *stmt)
 	qry->hasSubLinks = pstate->p_hasSubLinks;
 	qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
 	qry->hasAggs = pstate->p_hasAggs;
-	if (pstate->p_hasAggs || qry->groupClause || qry->havingQual)
+	if (pstate->p_hasAggs || qry->groupClause || qry->groupingSets || qry->havingQual)
 		parseCheckAggregates(pstate, qry);
 
 	foreach(l, stmt->lockingClause)
@@ -1566,7 +1567,7 @@ transformSetOperationStmt(ParseState *pstate, SelectStmt *stmt)
 	qry->hasSubLinks = pstate->p_hasSubLinks;
 	qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
 	qry->hasAggs = pstate->p_hasAggs;
-	if (pstate->p_hasAggs || qry->groupClause || qry->havingQual)
+	if (pstate->p_hasAggs || qry->groupClause || qry->groupingSets || qry->havingQual)
 		parseCheckAggregates(pstate, qry);
 
 	foreach(l, lockingClause)
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 2dce878..841f0d7 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -371,6 +371,10 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				relation_expr_list dostmt_opt_list
 				transform_element_list transform_type_list
 
+%type <list>	group_by_list
+%type <node>	group_by_item empty_grouping_set rollup_clause cube_clause
+%type <node>	grouping_sets_clause
+
 %type <list>	opt_fdw_options fdw_options
 %type <defelt>	fdw_option
 
@@ -438,7 +442,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <list>	ExclusionConstraintList ExclusionConstraintElem
 %type <list>	func_arg_list
 %type <node>	func_arg_expr
-%type <list>	row type_list array_expr_list
+%type <list>	row explicit_row implicit_row type_list array_expr_list
 %type <node>	case_expr case_arg when_clause case_default
 %type <list>	when_clause_list
 %type <ival>	sub_type
@@ -567,7 +571,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	CLUSTER COALESCE COLLATE COLLATION COLUMN COMMENT COMMENTS COMMIT
 	COMMITTED CONCURRENTLY CONFIGURATION CONFLICT CONNECTION CONSTRAINT
 	CONSTRAINTS CONTENT_P CONTINUE_P CONVERSION_P COPY COST CREATE
-	CROSS CSV CURRENT_P
+	CROSS CSV CUBE CURRENT_P
 	CURRENT_CATALOG CURRENT_DATE CURRENT_ROLE CURRENT_SCHEMA
 	CURRENT_TIME CURRENT_TIMESTAMP CURRENT_USER CURSOR CYCLE
 
@@ -582,7 +586,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	FALSE_P FAMILY FETCH FILTER FIRST_P FLOAT_P FOLLOWING FOR
 	FORCE FOREIGN FORWARD FREEZE FROM FULL FUNCTION FUNCTIONS
 
-	GLOBAL GRANT GRANTED GREATEST GROUP_P
+	GLOBAL GRANT GRANTED GREATEST GROUP_P GROUPING
 
 	HANDLER HAVING HEADER_P HOLD HOUR_P
 
@@ -616,12 +620,12 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 	RANGE READ REAL REASSIGN RECHECK RECURSIVE REF REFERENCES REFRESH REINDEX
 	RELATIVE_P RELEASE RENAME REPEATABLE REPLACE REPLICA
-	RESET RESTART RESTRICT RETURNING RETURNS REVOKE RIGHT ROLE ROLLBACK
+	RESET RESTART RESTRICT RETURNING RETURNS REVOKE RIGHT ROLE ROLLBACK ROLLUP
 	ROW ROWS RULE
 
 	SAVEPOINT SCHEMA SCROLL SEARCH SECOND_P SECURITY SELECT SEQUENCE SEQUENCES
-	SERIALIZABLE SERVER SESSION SESSION_USER SET SETOF SHARE
-	SHOW SIMILAR SIMPLE SKIP SMALLINT SNAPSHOT SOME SQL_P STABLE STANDALONE_P START
+	SERIALIZABLE SERVER SESSION SESSION_USER SET SETS SETOF SHARE SHOW
+	SIMILAR SIMPLE SKIP SMALLINT SNAPSHOT SOME SQL_P STABLE STANDALONE_P START
 	STATEMENT STATISTICS STDIN STDOUT STORAGE STRICT_P STRIP_P SUBSTRING
 	SYMMETRIC SYSID SYSTEM_P
 
@@ -681,6 +685,11 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
  * and for NULL so that it can follow b_expr in ColQualList without creating
  * postfix-operator problems.
  *
+ * To support CUBE and ROLLUP in GROUP BY without reserving them, we give them
+ * an explicit priority lower than '(', so that a rule with CUBE '(' will shift
+ * rather than reducing a conflicting rule that takes CUBE as a function name.
+ * Using the same precedence as IDENT seems right for the reasons given above.
+ *
  * The frame_bound productions UNBOUNDED PRECEDING and UNBOUNDED FOLLOWING
  * are even messier: since UNBOUNDED is an unreserved keyword (per spec!),
  * there is no principled way to distinguish these from the productions
@@ -691,7 +700,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
  * blame any funny behavior of UNBOUNDED on the SQL standard, though.
  */
 %nonassoc	UNBOUNDED		/* ideally should have same precedence as IDENT */
-%nonassoc	IDENT NULL_P PARTITION RANGE ROWS PRECEDING FOLLOWING
+%nonassoc	IDENT NULL_P PARTITION RANGE ROWS PRECEDING FOLLOWING CUBE ROLLUP
 %left		Op OPERATOR		/* multi-character ops and user-defined operators */
 %left		'+' '-'
 %left		'*' '/' '%'
@@ -10295,11 +10304,78 @@ first_or_next: FIRST_P								{ $$ = 0; }
 		;
 
 
+/*
+ * This syntax for group_clause tries to follow the spec quite closely.
+ * However, the spec allows only column references, not expressions,
+ * which introduces an ambiguity between implicit row constructors
+ * (a,b) and lists of column references.
+ *
+ * We handle this by using the a_expr production for what the spec calls
+ * <ordinary grouping set>, which in the spec represents either one column
+ * reference or a parenthesized list of column references. Then, we check the
+ * top node of the a_expr to see if it's an implicit RowExpr, and if so, just
+ * grab and use the list, discarding the node. (this is done in parse analysis,
+ * not here)
+ *
+ * (we abuse the row_format field of RowExpr to distinguish implicit and
+ * explicit row constructors; it's debatable if anyone sanely wants to use them
+ * in a group clause, but if they have a reason to, we make it possible.)
+ *
+ * Each item in the group_clause list is either an expression tree or a
+ * GroupingSet node of some type.
+ */
 group_clause:
-			GROUP_P BY expr_list					{ $$ = $3; }
+			GROUP_P BY group_by_list				{ $$ = $3; }
 			| /*EMPTY*/								{ $$ = NIL; }
 		;
 
+group_by_list:
+			group_by_item							{ $$ = list_make1($1); }
+			| group_by_list ',' group_by_item		{ $$ = lappend($1,$3); }
+		;
+
+group_by_item:
+			a_expr									{ $$ = $1; }
+			| empty_grouping_set					{ $$ = $1; }
+			| cube_clause							{ $$ = $1; }
+			| rollup_clause							{ $$ = $1; }
+			| grouping_sets_clause					{ $$ = $1; }
+		;
+
+empty_grouping_set:
+			'(' ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_EMPTY, NIL, @1);
+				}
+		;
+
+/*
+ * These hacks rely on setting precedence of CUBE and ROLLUP below that of '(',
+ * so that they shift in these rules rather than reducing the conflicting
+ * unreserved_keyword rule.
+ */
+
+rollup_clause:
+			ROLLUP '(' expr_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_ROLLUP, $3, @1);
+				}
+		;
+
+cube_clause:
+			CUBE '(' expr_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_CUBE, $3, @1);
+				}
+		;
+
+grouping_sets_clause:
+			GROUPING SETS '(' group_by_list ')'
+				{
+					$$ = (Node *) makeGroupingSet(GROUPING_SET_SETS, $4, @1);
+				}
+		;
+
 having_clause:
 			HAVING a_expr							{ $$ = $2; }
 			| /*EMPTY*/								{ $$ = NULL; }
@@ -11922,15 +11998,33 @@ c_expr:		columnref								{ $$ = $1; }
 					n->location = @1;
 					$$ = (Node *)n;
 				}
-			| row
+			| explicit_row
+				{
+					RowExpr *r = makeNode(RowExpr);
+					r->args = $1;
+					r->row_typeid = InvalidOid;	/* not analyzed yet */
+					r->colnames = NIL;	/* to be filled in during analysis */
+					r->row_format = COERCE_EXPLICIT_CALL; /* abuse */
+					r->location = @1;
+					$$ = (Node *)r;
+				}
+			| implicit_row
 				{
 					RowExpr *r = makeNode(RowExpr);
 					r->args = $1;
 					r->row_typeid = InvalidOid;	/* not analyzed yet */
 					r->colnames = NIL;	/* to be filled in during analysis */
+					r->row_format = COERCE_IMPLICIT_CAST; /* abuse */
 					r->location = @1;
 					$$ = (Node *)r;
 				}
+			| GROUPING '(' expr_list ')'
+			  {
+				  GroupingFunc *g = makeNode(GroupingFunc);
+				  g->args = $3;
+				  g->location = @1;
+				  $$ = (Node *)g;
+			  }
 		;
 
 func_application: func_name '(' ')'
@@ -12680,6 +12774,13 @@ row:		ROW '(' expr_list ')'					{ $$ = $3; }
 			| '(' expr_list ',' a_expr ')'			{ $$ = lappend($2, $4); }
 		;
 
+explicit_row:	ROW '(' expr_list ')'				{ $$ = $3; }
+			| ROW '(' ')'							{ $$ = NIL; }
+		;
+
+implicit_row:	'(' expr_list ',' a_expr ')'		{ $$ = lappend($2, $4); }
+		;
+
 sub_type:	ANY										{ $$ = ANY_SUBLINK; }
 			| SOME									{ $$ = ANY_SUBLINK; }
 			| ALL									{ $$ = ALL_SUBLINK; }
@@ -13489,6 +13590,7 @@ unreserved_keyword:
 			| COPY
 			| COST
 			| CSV
+			| CUBE
 			| CURRENT_P
 			| CURSOR
 			| CYCLE
@@ -13637,6 +13739,7 @@ unreserved_keyword:
 			| REVOKE
 			| ROLE
 			| ROLLBACK
+			| ROLLUP
 			| ROWS
 			| RULE
 			| SAVEPOINT
@@ -13651,6 +13754,7 @@ unreserved_keyword:
 			| SERVER
 			| SESSION
 			| SET
+			| SETS
 			| SHARE
 			| SHOW
 			| SIMPLE
@@ -13736,6 +13840,7 @@ col_name_keyword:
 			| EXTRACT
 			| FLOAT_P
 			| GREATEST
+			| GROUPING
 			| INOUT
 			| INT_P
 			| INTEGER
diff --git a/src/backend/parser/parse_agg.c b/src/backend/parser/parse_agg.c
index 7b0e668..1e3f2e0 100644
--- a/src/backend/parser/parse_agg.c
+++ b/src/backend/parser/parse_agg.c
@@ -42,7 +42,9 @@ typedef struct
 {
 	ParseState *pstate;
 	Query	   *qry;
+	PlannerInfo *root;
 	List	   *groupClauses;
+	List	   *groupClauseCommonVars;
 	bool		have_non_var_grouping;
 	List	  **func_grouped_rels;
 	int			sublevels_up;
@@ -56,11 +58,18 @@ static int check_agg_arguments(ParseState *pstate,
 static bool check_agg_arguments_walker(Node *node,
 						   check_agg_arguments_context *context);
 static void check_ungrouped_columns(Node *node, ParseState *pstate, Query *qry,
-						List *groupClauses, bool have_non_var_grouping,
+						List *groupClauses, List *groupClauseVars,
+						bool have_non_var_grouping,
 						List **func_grouped_rels);
 static bool check_ungrouped_columns_walker(Node *node,
 							   check_ungrouped_columns_context *context);
-
+static void finalize_grouping_exprs(Node *node, ParseState *pstate, Query *qry,
+									List *groupClauses, PlannerInfo *root,
+									bool have_non_var_grouping);
+static bool finalize_grouping_exprs_walker(Node *node,
+							   check_ungrouped_columns_context *context);
+static void check_agglevels_and_constraints(ParseState *pstate, Node *expr);
+static List *expand_groupingset_node(GroupingSet *gs);
 
 /*
  * transformAggregateCall -
@@ -96,10 +105,7 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 	List	   *tdistinct = NIL;
 	AttrNumber	attno = 1;
 	int			save_next_resno;
-	int			min_varlevel;
 	ListCell   *lc;
-	const char *err;
-	bool		errkind;
 
 	if (AGGKIND_IS_ORDERED_SET(agg->aggkind))
 	{
@@ -214,15 +220,97 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 	agg->aggorder = torder;
 	agg->aggdistinct = tdistinct;
 
+	check_agglevels_and_constraints(pstate, (Node *) agg);
+}
+
+/*
+ * transformGroupingFunc
+ *		Transform a GROUPING expression
+ *
+ * GROUPING() behaves very like an aggregate.  Processing of levels and nesting
+ * is done as for aggregates.  We set p_hasAggs for these expressions too.
+ */
+Node *
+transformGroupingFunc(ParseState *pstate, GroupingFunc *p)
+{
+	ListCell   *lc;
+	List	   *args = p->args;
+	List	   *result_list = NIL;
+	GroupingFunc *result = makeNode(GroupingFunc);
+
+	if (list_length(args) > 31)
+		ereport(ERROR,
+				(errcode(ERRCODE_TOO_MANY_ARGUMENTS),
+				 errmsg("GROUPING must have fewer than 32 arguments"),
+				 parser_errposition(pstate, p->location)));
+
+	foreach(lc, args)
+	{
+		Node *current_result;
+
+		current_result = transformExpr(pstate, (Node *) lfirst(lc), pstate->p_expr_kind);
+
+		/* acceptability of expressions is checked later */
+
+		result_list = lappend(result_list, current_result);
+	}
+
+	result->args = result_list;
+	result->location = p->location;
+
+	check_agglevels_and_constraints(pstate, (Node *) result);
+
+	return (Node *) result;
+}
+
+/*
+ * Aggregate functions and grouping operations (which are combined in the spec
+ * as <set function specification>) are very similar with regard to level and
+ * nesting restrictions (though we allow a lot more things than the spec does).
+ * Centralise those restrictions here.
+ */
+static void
+check_agglevels_and_constraints(ParseState *pstate, Node *expr)
+{
+	List	   *directargs = NIL;
+	List	   *args = NIL;
+	Expr	   *filter = NULL;
+	int			min_varlevel;
+	int			location = -1;
+	Index	   *p_levelsup;
+	const char *err;
+	bool		errkind;
+	bool		isAgg = IsA(expr, Aggref);
+
+	if (isAgg)
+	{
+		Aggref *agg = (Aggref *) expr;
+
+		directargs = agg->aggdirectargs;
+		args = agg->args;
+		filter = agg->aggfilter;
+		location = agg->location;
+		p_levelsup = &agg->agglevelsup;
+	}
+	else
+	{
+		GroupingFunc *grp = (GroupingFunc *) expr;
+
+		args = grp->args;
+		location = grp->location;
+		p_levelsup = &grp->agglevelsup;
+	}
+
 	/*
 	 * Check the arguments to compute the aggregate's level and detect
 	 * improper nesting.
 	 */
 	min_varlevel = check_agg_arguments(pstate,
-									   agg->aggdirectargs,
-									   agg->args,
-									   agg->aggfilter);
-	agg->agglevelsup = min_varlevel;
+									   directargs,
+									   args,
+									   filter);
+
+	*p_levelsup = min_varlevel;
 
 	/* Mark the correct pstate level as having aggregates */
 	while (min_varlevel-- > 0)
@@ -247,20 +335,32 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			Assert(false);		/* can't happen */
 			break;
 		case EXPR_KIND_OTHER:
-			/* Accept aggregate here; caller must throw error if wanted */
+			/* Accept aggregate/grouping here; caller must throw error if wanted */
 			break;
 		case EXPR_KIND_JOIN_ON:
 		case EXPR_KIND_JOIN_USING:
-			err = _("aggregate functions are not allowed in JOIN conditions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in JOIN conditions");
+			else
+				err = _("grouping operations are not allowed in JOIN conditions");
+
 			break;
 		case EXPR_KIND_FROM_SUBSELECT:
 			/* Should only be possible in a LATERAL subquery */
 			Assert(pstate->p_lateral_active);
-			/* Aggregate scope rules make it worth being explicit here */
-			err = _("aggregate functions are not allowed in FROM clause of their own query level");
+			/* Aggregate/grouping scope rules make it worth being explicit here */
+			if (isAgg)
+				err = _("aggregate functions are not allowed in FROM clause of their own query level");
+			else
+				err = _("grouping operations are not allowed in FROM clause of their own query level");
+
 			break;
 		case EXPR_KIND_FROM_FUNCTION:
-			err = _("aggregate functions are not allowed in functions in FROM");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in functions in FROM");
+			else
+				err = _("grouping operations are not allowed in functions in FROM");
+
 			break;
 		case EXPR_KIND_WHERE:
 			errkind = true;
@@ -278,10 +378,18 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			/* okay */
 			break;
 		case EXPR_KIND_WINDOW_FRAME_RANGE:
-			err = _("aggregate functions are not allowed in window RANGE");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in window RANGE");
+			else
+				err = _("grouping operations are not allowed in window RANGE");
+
 			break;
 		case EXPR_KIND_WINDOW_FRAME_ROWS:
-			err = _("aggregate functions are not allowed in window ROWS");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in window ROWS");
+			else
+				err = _("grouping operations are not allowed in window ROWS");
+
 			break;
 		case EXPR_KIND_SELECT_TARGET:
 			/* okay */
@@ -312,26 +420,55 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			break;
 		case EXPR_KIND_CHECK_CONSTRAINT:
 		case EXPR_KIND_DOMAIN_CHECK:
-			err = _("aggregate functions are not allowed in check constraints");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in check constraints");
+			else
+				err = _("grouping operations are not allowed in check constraints");
+
 			break;
 		case EXPR_KIND_COLUMN_DEFAULT:
 		case EXPR_KIND_FUNCTION_DEFAULT:
-			err = _("aggregate functions are not allowed in DEFAULT expressions");
+
+			if (isAgg)
+				err = _("aggregate functions are not allowed in DEFAULT expressions");
+			else
+				err = _("grouping operations are not allowed in DEFAULT expressions");
+
 			break;
 		case EXPR_KIND_INDEX_EXPRESSION:
-			err = _("aggregate functions are not allowed in index expressions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in index expressions");
+			else
+				err = _("grouping operations are not allowed in index expressions");
+
 			break;
 		case EXPR_KIND_INDEX_PREDICATE:
-			err = _("aggregate functions are not allowed in index predicates");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in index predicates");
+			else
+				err = _("grouping operations are not allowed in index predicates");
+
 			break;
 		case EXPR_KIND_ALTER_COL_TRANSFORM:
-			err = _("aggregate functions are not allowed in transform expressions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in transform expressions");
+			else
+				err = _("grouping operations are not allowed in transform expressions");
+
 			break;
 		case EXPR_KIND_EXECUTE_PARAMETER:
-			err = _("aggregate functions are not allowed in EXECUTE parameters");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in EXECUTE parameters");
+			else
+				err = _("grouping operations are not allowed in EXECUTE parameters");
+
 			break;
 		case EXPR_KIND_TRIGGER_WHEN:
-			err = _("aggregate functions are not allowed in trigger WHEN conditions");
+			if (isAgg)
+				err = _("aggregate functions are not allowed in trigger WHEN conditions");
+			else
+				err = _("grouping operations are not allowed in trigger WHEN conditions");
+
 			break;
 
 			/*
@@ -342,18 +479,28 @@ transformAggregateCall(ParseState *pstate, Aggref *agg,
 			 * which is sane anyway.
 			 */
 	}
+
 	if (err)
 		ereport(ERROR,
 				(errcode(ERRCODE_GROUPING_ERROR),
 				 errmsg_internal("%s", err),
-				 parser_errposition(pstate, agg->location)));
+				 parser_errposition(pstate, location)));
+
 	if (errkind)
+	{
+		if (isAgg)
+			/* translator: %s is name of a SQL construct, eg GROUP BY */
+			err = _("aggregate functions are not allowed in %s");
+		else
+			/* translator: %s is name of a SQL construct, eg GROUP BY */
+			err = _("grouping operations are not allowed in %s");
+
 		ereport(ERROR,
 				(errcode(ERRCODE_GROUPING_ERROR),
-		/* translator: %s is name of a SQL construct, eg GROUP BY */
-				 errmsg("aggregate functions are not allowed in %s",
-						ParseExprKindName(pstate->p_expr_kind)),
-				 parser_errposition(pstate, agg->location)));
+				 errmsg_internal(err,
+								 ParseExprKindName(pstate->p_expr_kind)),
+				 parser_errposition(pstate, location)));
+	}
 }
 
 /*
@@ -466,7 +613,6 @@ check_agg_arguments(ParseState *pstate,
 									 locate_agg_of_level((Node *) directargs,
 													context.min_agglevel))));
 	}
-
 	return agglevel;
 }
 
@@ -507,6 +653,21 @@ check_agg_arguments_walker(Node *node,
 		/* no need to examine args of the inner aggregate */
 		return false;
 	}
+	if (IsA(node, GroupingFunc))
+	{
+		int			agglevelsup = ((GroupingFunc *) node)->agglevelsup;
+
+		/* convert levelsup to frame of reference of original query */
+		agglevelsup -= context->sublevels_up;
+		/* ignore local aggs of subqueries */
+		if (agglevelsup >= 0)
+		{
+			if (context->min_agglevel < 0 ||
+				context->min_agglevel > agglevelsup)
+				context->min_agglevel = agglevelsup;
+		}
+		/* Continue and descend into subtree */
+	}
 	/* We can throw error on sight for a window function */
 	if (IsA(node, WindowFunc))
 		ereport(ERROR,
@@ -527,6 +688,7 @@ check_agg_arguments_walker(Node *node,
 		context->sublevels_up--;
 		return result;
 	}
+
 	return expression_tree_walker(node,
 								  check_agg_arguments_walker,
 								  (void *) context);
@@ -770,17 +932,66 @@ transformWindowFuncCall(ParseState *pstate, WindowFunc *wfunc,
 void
 parseCheckAggregates(ParseState *pstate, Query *qry)
 {
+	List       *gset_common = NIL;
 	List	   *groupClauses = NIL;
+	List	   *groupClauseCommonVars = NIL;
 	bool		have_non_var_grouping;
 	List	   *func_grouped_rels = NIL;
 	ListCell   *l;
 	bool		hasJoinRTEs;
 	bool		hasSelfRefRTEs;
-	PlannerInfo *root;
+	PlannerInfo *root = NULL;
 	Node	   *clause;
 
 	/* This should only be called if we found aggregates or grouping */
-	Assert(pstate->p_hasAggs || qry->groupClause || qry->havingQual);
+	Assert(pstate->p_hasAggs || qry->groupClause || qry->havingQual || qry->groupingSets);
+
+	/*
+	 * If we have grouping sets, expand them and find the intersection of all
+	 * sets.
+	 */
+	if (qry->groupingSets)
+	{
+		/*
+		 * The limit of 4096 is arbitrary and exists simply to avoid resource
+		 * issues from pathological constructs.
+		 */
+		List *gsets = expand_grouping_sets(qry->groupingSets, 4096);
+
+		if (!gsets)
+			ereport(ERROR,
+					(errcode(ERRCODE_STATEMENT_TOO_COMPLEX),
+					 errmsg("too many grouping sets present (max 4096)"),
+					 parser_errposition(pstate,
+										qry->groupClause
+										? exprLocation((Node *) qry->groupClause)
+										: exprLocation((Node *) qry->groupingSets))));
+
+		/*
+		 * The intersection will often be empty, so help things along by
+		 * seeding the intersect with the smallest set.
+		 */
+		gset_common = linitial(gsets);
+
+		if (gset_common)
+		{
+			for_each_cell(l, lnext(list_head(gsets)))
+			{
+				gset_common = list_intersection_int(gset_common, lfirst(l));
+				if (!gset_common)
+					break;
+			}
+		}
+
+		/*
+		 * If there was only one grouping set in the expansion, AND if the
+		 * groupClause is non-empty (meaning that the grouping set is not empty
+		 * either), then we can ditch the grouping set and pretend we just had
+		 * a normal GROUP BY.
+		 */
+		if (list_length(gsets) == 1 && qry->groupClause)
+			qry->groupingSets = NIL;
+	}
 
 	/*
 	 * Scan the range table to see if there are JOIN or self-reference CTE
@@ -800,15 +1011,19 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 	/*
 	 * Build a list of the acceptable GROUP BY expressions for use by
 	 * check_ungrouped_columns().
+	 *
+	 * We get the TLE, not just the expr, because GROUPING wants to know
+	 * the sortgroupref.
 	 */
 	foreach(l, qry->groupClause)
 	{
 		SortGroupClause *grpcl = (SortGroupClause *) lfirst(l);
-		Node	   *expr;
+		TargetEntry	   *expr;
 
-		expr = get_sortgroupclause_expr(grpcl, qry->targetList);
+		expr = get_sortgroupclause_tle(grpcl, qry->targetList);
 		if (expr == NULL)
 			continue;			/* probably cannot happen */
+
 		groupClauses = lcons(expr, groupClauses);
 	}
 
@@ -830,21 +1045,28 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 		groupClauses = (List *) flatten_join_alias_vars(root,
 													  (Node *) groupClauses);
 	}
-	else
-		root = NULL;			/* keep compiler quiet */
 
 	/*
 	 * Detect whether any of the grouping expressions aren't simple Vars; if
 	 * they're all Vars then we don't have to work so hard in the recursive
 	 * scans.  (Note we have to flatten aliases before this.)
+	 *
+	 * Track Vars that are included in all grouping sets separately in
+	 * groupClauseCommonVars, since these are the only ones we can use to check
+	 * for functional dependencies.
 	 */
 	have_non_var_grouping = false;
 	foreach(l, groupClauses)
 	{
-		if (!IsA((Node *) lfirst(l), Var))
+		TargetEntry *tle = lfirst(l);
+		if (!IsA(tle->expr, Var))
 		{
 			have_non_var_grouping = true;
-			break;
+		}
+		else if (!qry->groupingSets ||
+				 list_member_int(gset_common, tle->ressortgroupref))
+		{
+			groupClauseCommonVars = lappend(groupClauseCommonVars, tle->expr);
 		}
 	}
 
@@ -855,19 +1077,30 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
 	 * this will also find ungrouped variables that came from ORDER BY and
 	 * WINDOW clauses.  For that matter, it's also going to examine the
 	 * grouping expressions themselves --- but they'll all pass the test ...
+	 *
+	 * We also finalize GROUPING expressions, but for that we need to traverse
+	 * the original (unflattened) clause in order to modify nodes.
 	 */
 	clause = (Node *) qry->targetList;
+	finalize_grouping_exprs(clause, pstate, qry,
+							groupClauses, root,
+							have_non_var_grouping);
 	if (hasJoinRTEs)
 		clause = flatten_join_alias_vars(root, clause);
 	check_ungrouped_columns(clause, pstate, qry,
-							groupClauses, have_non_var_grouping,
+							groupClauses, groupClauseCommonVars,
+							have_non_var_grouping,
 							&func_grouped_rels);
 
 	clause = (Node *) qry->havingQual;
+	finalize_grouping_exprs(clause, pstate, qry,
+							groupClauses, root,
+							have_non_var_grouping);
 	if (hasJoinRTEs)
 		clause = flatten_join_alias_vars(root, clause);
 	check_ungrouped_columns(clause, pstate, qry,
-							groupClauses, have_non_var_grouping,
+							groupClauses, groupClauseCommonVars,
+							have_non_var_grouping,
 							&func_grouped_rels);
 
 	/*
@@ -904,14 +1137,17 @@ parseCheckAggregates(ParseState *pstate, Query *qry)
  */
 static void
 check_ungrouped_columns(Node *node, ParseState *pstate, Query *qry,
-						List *groupClauses, bool have_non_var_grouping,
+						List *groupClauses, List *groupClauseCommonVars,
+						bool have_non_var_grouping,
 						List **func_grouped_rels)
 {
 	check_ungrouped_columns_context context;
 
 	context.pstate = pstate;
 	context.qry = qry;
+	context.root = NULL;
 	context.groupClauses = groupClauses;
+	context.groupClauseCommonVars = groupClauseCommonVars;
 	context.have_non_var_grouping = have_non_var_grouping;
 	context.func_grouped_rels = func_grouped_rels;
 	context.sublevels_up = 0;
@@ -965,6 +1201,16 @@ check_ungrouped_columns_walker(Node *node,
 			return false;
 	}
 
+	if (IsA(node, GroupingFunc))
+	{
+		GroupingFunc *grp = (GroupingFunc *) node;
+
+		/* handled GroupingFunc separately, no need to recheck at this level */
+
+		if ((int) grp->agglevelsup >= context->sublevels_up)
+			return false;
+	}
+
 	/*
 	 * If we have any GROUP BY items that are not simple Vars, check to see if
 	 * subexpression as a whole matches any GROUP BY item. We need to do this
@@ -976,7 +1222,9 @@ check_ungrouped_columns_walker(Node *node,
 	{
 		foreach(gl, context->groupClauses)
 		{
-			if (equal(node, lfirst(gl)))
+			TargetEntry *tle = lfirst(gl);
+
+			if (equal(node, tle->expr))
 				return false;	/* acceptable, do not descend more */
 		}
 	}
@@ -1003,7 +1251,7 @@ check_ungrouped_columns_walker(Node *node,
 		{
 			foreach(gl, context->groupClauses)
 			{
-				Var		   *gvar = (Var *) lfirst(gl);
+				Var		   *gvar = (Var *) ((TargetEntry *) lfirst(gl))->expr;
 
 				if (IsA(gvar, Var) &&
 					gvar->varno == var->varno &&
@@ -1040,7 +1288,7 @@ check_ungrouped_columns_walker(Node *node,
 			if (check_functional_grouping(rte->relid,
 										  var->varno,
 										  0,
-										  context->groupClauses,
+										  context->groupClauseCommonVars,
 										  &context->qry->constraintDeps))
 			{
 				*context->func_grouped_rels =
@@ -1085,6 +1333,395 @@ check_ungrouped_columns_walker(Node *node,
 }
 
 /*
+ * finalize_grouping_exprs -
+ *	  Scan the given expression tree for GROUPING() and related calls,
+ *    and validate and process their arguments.
+ *
+ * This is split out from check_ungrouped_columns above because it needs
+ * to modify the nodes (which it does in-place, not via a mutator) while
+ * check_ungrouped_columns may see only a copy of the original thanks to
+ * flattening of join alias vars. So here, we flatten each individual
+ * GROUPING argument as we see it before comparing it.
+ */
+static void
+finalize_grouping_exprs(Node *node, ParseState *pstate, Query *qry,
+						List *groupClauses, PlannerInfo *root,
+						bool have_non_var_grouping)
+{
+	check_ungrouped_columns_context context;
+
+	context.pstate = pstate;
+	context.qry = qry;
+	context.root = root;
+	context.groupClauses = groupClauses;
+	context.groupClauseCommonVars = NIL;
+	context.have_non_var_grouping = have_non_var_grouping;
+	context.func_grouped_rels = NULL;
+	context.sublevels_up = 0;
+	context.in_agg_direct_args = false;
+	finalize_grouping_exprs_walker(node, &context);
+}
+
+static bool
+finalize_grouping_exprs_walker(Node *node,
+							   check_ungrouped_columns_context *context)
+{
+	ListCell   *gl;
+
+	if (node == NULL)
+		return false;
+	if (IsA(node, Const) ||
+		IsA(node, Param))
+		return false;			/* constants are always acceptable */
+
+	if (IsA(node, Aggref))
+	{
+		Aggref	   *agg = (Aggref *) node;
+
+		if ((int) agg->agglevelsup == context->sublevels_up)
+		{
+			/*
+			 * If we find an aggregate call of the original level, do not
+			 * recurse into its normal arguments, ORDER BY arguments, or
+			 * filter; GROUPING exprs of this level are not allowed there. But
+			 * check direct arguments as though they weren't in an aggregate.
+			 */
+			bool		result;
+
+			Assert(!context->in_agg_direct_args);
+			context->in_agg_direct_args = true;
+			result = finalize_grouping_exprs_walker((Node *) agg->aggdirectargs,
+													context);
+			context->in_agg_direct_args = false;
+			return result;
+		}
+
+		/*
+		 * We can skip recursing into aggregates of higher levels altogether,
+		 * since they could not possibly contain exprs of concern to us (see
+		 * transformAggregateCall).  We do need to look at aggregates of lower
+		 * levels, however.
+		 */
+		if ((int) agg->agglevelsup > context->sublevels_up)
+			return false;
+	}
+
+	if (IsA(node, GroupingFunc))
+	{
+		GroupingFunc *grp = (GroupingFunc *) node;
+
+		/*
+		 * We only need to check GroupingFunc nodes at the exact level to which
+		 * they belong, since they cannot mix levels in arguments.
+		 */
+
+		if ((int) grp->agglevelsup == context->sublevels_up)
+		{
+			ListCell  *lc;
+			List	  *ref_list = NIL;
+
+			foreach(lc, grp->args)
+			{
+				Node   *expr = lfirst(lc);
+				Index	ref = 0;
+
+				if (context->root)
+					expr = flatten_join_alias_vars(context->root, expr);
+
+				/*
+				 * Each expression must match a grouping entry at the current
+				 * query level. Unlike the general expression case, we don't
+				 * allow functional dependencies or outer references.
+				 */
+
+				if (IsA(expr, Var))
+				{
+					Var *var = (Var *) expr;
+
+					if (var->varlevelsup == context->sublevels_up)
+					{
+						foreach(gl, context->groupClauses)
+						{
+							TargetEntry *tle = lfirst(gl);
+							Var		   *gvar = (Var *) tle->expr;
+
+							if (IsA(gvar, Var) &&
+								gvar->varno == var->varno &&
+								gvar->varattno == var->varattno &&
+								gvar->varlevelsup == 0)
+							{
+								ref = tle->ressortgroupref;
+								break;
+							}
+						}
+					}
+				}
+				else if (context->have_non_var_grouping &&
+						 context->sublevels_up == 0)
+				{
+					foreach(gl, context->groupClauses)
+					{
+						TargetEntry *tle = lfirst(gl);
+
+						if (equal(expr, tle->expr))
+						{
+							ref = tle->ressortgroupref;
+							break;
+						}
+					}
+				}
+
+				if (ref == 0)
+					ereport(ERROR,
+							(errcode(ERRCODE_GROUPING_ERROR),
+							 errmsg("arguments to GROUPING must be grouping expressions of the associated query level"),
+							 parser_errposition(context->pstate,
+												exprLocation(expr))));
+
+				ref_list = lappend_int(ref_list, ref);
+			}
+
+			grp->refs = ref_list;
+		}
+
+		if ((int) grp->agglevelsup > context->sublevels_up)
+			return false;
+	}
+
+	if (IsA(node, Query))
+	{
+		/* Recurse into subselects */
+		bool		result;
+
+		context->sublevels_up++;
+		result = query_tree_walker((Query *) node,
+								   finalize_grouping_exprs_walker,
+								   (void *) context,
+								   0);
+		context->sublevels_up--;
+		return result;
+	}
+	return expression_tree_walker(node, finalize_grouping_exprs_walker,
+								  (void *) context);
+}
+
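For reference, the refs collected by finalize_grouping_exprs_walker() identify which grouping entry each GROUPING() argument corresponds to; the actual value is computed later at execution time, as an integer whose bits report, per argument, whether that expression is absent from the current grouping set (first argument in the most significant bit, per the spec). A rough Python sketch of that evaluation — names are invented for illustration, this is not the patch's executor code:

```python
def grouping_value(args, grouped):
    # GROUPING(a, b) with only b grouped yields 0b10: a bit is set when
    # the corresponding argument is NOT part of the current grouping set.
    # The first argument maps to the most significant bit.
    result = 0
    for arg in args:
        result = (result << 1) | (0 if arg in grouped else 1)
    return result
```

So for GROUP BY ROLLUP(a, b), the row aggregating over b but not a sees GROUPING(a, b) = 1, and the grand-total row sees GROUPING(a, b) = 3.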
+
+/*
+ * Given a GroupingSet node, expand it and return a list of lists.
+ *
+ * For EMPTY nodes, return a list of one empty list.
+ *
+ * For SIMPLE nodes, return a list of one list, which is the node content.
+ *
+ * For CUBE and ROLLUP nodes, return a list of the expansions.
+ *
+ * For SET nodes, recursively expand contained CUBE and ROLLUP.
+ */
+static List *
+expand_groupingset_node(GroupingSet *gs)
+{
+	List	   *result = NIL;
+
+	switch (gs->kind)
+	{
+		case GROUPING_SET_EMPTY:
+			result = list_make1(NIL);
+			break;
+
+		case GROUPING_SET_SIMPLE:
+			result = list_make1(gs->content);
+			break;
+
+		case GROUPING_SET_ROLLUP:
+			{
+				List	   *rollup_val = gs->content;
+				ListCell   *lc;
+				int			curgroup_size = list_length(gs->content);
+
+				while (curgroup_size > 0)
+				{
+					List   *current_result = NIL;
+					int		i = curgroup_size;
+
+					foreach(lc, rollup_val)
+					{
+						GroupingSet *gs_current = (GroupingSet *) lfirst(lc);
+
+						Assert(gs_current->kind == GROUPING_SET_SIMPLE);
+
+						current_result
+							= list_concat(current_result,
+										  list_copy(gs_current->content));
+
+						/* If we are done building the current grouping set, break */
+						if (--i == 0)
+							break;
+					}
+
+					result = lappend(result, current_result);
+					--curgroup_size;
+				}
+
+				result = lappend(result, NIL);
+			}
+			break;
+
+		case GROUPING_SET_CUBE:
+			{
+				List   *cube_list = gs->content;
+				int		number_bits = list_length(cube_list);
+				uint32	num_sets;
+				uint32	i;
+
+				/* parser should cap this much lower */
+				Assert(number_bits < 31);
+
+				num_sets = (1U << number_bits);
+
+				for (i = 0; i < num_sets; i++)
+				{
+					List *current_result = NIL;
+					ListCell *lc;
+					uint32 mask = 1U;
+
+					foreach(lc, cube_list)
+					{
+						GroupingSet *gs_current = (GroupingSet *) lfirst(lc);
+
+						Assert(gs_current->kind == GROUPING_SET_SIMPLE);
+
+						if (mask & i)
+						{
+							current_result
+								= list_concat(current_result,
+											  list_copy(gs_current->content));
+						}
+
+						mask <<= 1;
+					}
+
+					result = lappend(result, current_result);
+				}
+			}
+			break;
+
+		case GROUPING_SET_SETS:
+			{
+				ListCell   *lc;
+
+				foreach(lc, gs->content)
+				{
+					List *current_result = expand_groupingset_node(lfirst(lc));
+
+					result = list_concat(result, current_result);
+				}
+			}
+			break;
+	}
+
+	return result;
+}
+
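The ROLLUP and CUBE cases above are simple list manipulations once the content is reduced to SIMPLE nodes: ROLLUP(a, b, c) yields each prefix of the column list, longest first, ending with the empty grouping set, while CUBE enumerates all 2^n subsets via a bitmask, exactly as the `mask & i` loop does. A minimal Python sketch of the same expansions (function names are illustrative, not from the patch):

```python
def expand_rollup(cols):
    # ROLLUP(a, b, c) -> [[a, b, c], [a, b], [a], []]: one grouping set
    # per prefix, emitted longest-first, ending with the empty set.
    return [cols[:n] for n in range(len(cols), -1, -1)]

def expand_cube(cols):
    # CUBE(a, b) -> all 2**n subsets; bit i of the mask selects cols[i],
    # mirroring the "mask & i" test in expand_groupingset_node().
    n = len(cols)
    return [[c for i, c in enumerate(cols) if mask & (1 << i)]
            for mask in range(1 << n)]
```

This also makes the Assert(number_bits < 31) cap concrete: a CUBE of n elements always produces 2^n sets, which is why the parser limits CUBE to 12 elements (4096 sets) elsewhere in the patch.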
+static int
+cmp_list_len_asc(const void *a, const void *b)
+{
+	int			la = list_length(*(List *const *) a);
+	int			lb = list_length(*(List *const *) b);
+
+	return (la > lb) ? 1 : (la == lb) ? 0 : -1;
+}
+
+/*
+ * Expand a groupingSets clause to a flat list of grouping sets.
+ * The returned list is sorted by length, shortest sets first.
+ *
+ * This is mainly for the planner, but we use it here too to do
+ * some consistency checks.
+ */
+List *
+expand_grouping_sets(List *groupingSets, int limit)
+{
+	List	   *expanded_groups = NIL;
+	List       *result = NIL;
+	double		numsets = 1;
+	ListCell   *lc;
+
+	if (groupingSets == NIL)
+		return NIL;
+
+	foreach(lc, groupingSets)
+	{
+		List *current_result = NIL;
+		GroupingSet *gs = lfirst(lc);
+
+		current_result = expand_groupingset_node(gs);
+
+		Assert(current_result != NIL);
+
+		numsets *= list_length(current_result);
+
+		if (limit >= 0 && numsets > limit)
+			return NIL;
+
+		expanded_groups = lappend(expanded_groups, current_result);
+	}
+
+	/*
+	 * Do the cartesian product of the sublists of expanded_groups. While
+	 * at it, remove any duplicate elements from individual grouping sets
+	 * (we must NOT change the number of sets, though).
+	 */
+
+	foreach(lc, (List *) linitial(expanded_groups))
+	{
+		result = lappend(result, list_union_int(NIL, (List *) lfirst(lc)));
+	}
+
+	for_each_cell(lc, lnext(list_head(expanded_groups)))
+	{
+		List	   *p = lfirst(lc);
+		List	   *new_result = NIL;
+		ListCell   *lc2;
+
+		foreach(lc2, result)
+		{
+			List	   *q = lfirst(lc2);
+			ListCell   *lc3;
+
+			foreach(lc3, p)
+			{
+				new_result = lappend(new_result,
+									 list_union_int(q, (List *) lfirst(lc3)));
+			}
+		}
+		result = new_result;
+	}
+
+	if (list_length(result) > 1)
+	{
+		int		result_len = list_length(result);
+		List  **buf = palloc(sizeof(List *) * result_len);
+		List  **ptr = buf;
+
+		foreach(lc, result)
+		{
+			*ptr++ = lfirst(lc);
+		}
+
+		qsort(buf, result_len, sizeof(List *), cmp_list_len_asc);
+
+		result = NIL;
+		ptr = buf;
+
+		while (result_len-- > 0)
+			result = lappend(result, *ptr++);
+
+		pfree(buf);
+	}
+
+	return result;
+}
+
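Putting it together, expand_grouping_sets() crosses the per-clause expansions, de-duplicates within each resulting set (keeping first occurrences, as list_union_int does), and sorts the sets shortest-first. A hedged Python sketch of the same logic, with invented names, operating on integer sortgrouprefs:

```python
def expand_grouping_sets(groups):
    # groups: one expansion list per top-level GROUP BY item, e.g. the
    # output of expanding each ROLLUP/CUBE/SETS clause separately.
    def union(a, b):
        # de-duplicate within a single grouping set, preserving order
        out = list(a)
        for x in b:
            if x not in out:
                out.append(x)
        return out

    result = [union([], g) for g in groups[0]]
    for clause in groups[1:]:
        # cartesian product with the next clause's sets
        result = [union(q, p) for q in result for p in clause]
    # shortest sets first, matching the qsort on list length
    return sorted(result, key=len)
```

For example, GROUP BY GROUPING SETS ((a), (b)), GROUPING SETS ((c), ()) crosses to four sets: (a), (b), (a,c), (b,c) — note that the number of sets is preserved even when de-duplication shrinks an individual set.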
+/*
  * get_aggregate_argtypes
  *	Identify the specific datatypes passed to an aggregate call.
  *
diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c
index 73c505e..aca0b5e 100644
--- a/src/backend/parser/parse_clause.c
+++ b/src/backend/parser/parse_clause.c
@@ -39,6 +39,7 @@
 #include "utils/guc.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
+#include "miscadmin.h"
 
 
 /* Convenience macro for the most common makeNamespaceItem() case */
@@ -1669,40 +1670,181 @@ findTargetlistEntrySQL99(ParseState *pstate, Node *node, List **tlist,
 	return target_result;
 }
 
+/*-------------------------------------------------------------------------
+ * Flatten out parenthesized sublists in grouping lists, and some cases
+ * of nested grouping sets.
+ *
+ * Inside a grouping set (ROLLUP, CUBE, or GROUPING SETS), we expect the
+ * content to be nested no more than 2 deep: i.e. ROLLUP((a,b),(c,d)) is
+ * ok, but ROLLUP((a,(b,c)),d) is flattened to ((a,b,c),d), which we then
+ * normalize to ((a,b,c),(d)).
+ *
+ * CUBE or ROLLUP can be nested inside GROUPING SETS (but not the reverse),
+ * and we leave that alone if we find it. But if we see GROUPING SETS inside
+ * GROUPING SETS, we can flatten and normalize as follows:
+ *   GROUPING SETS (a, (b,c), GROUPING SETS ((c,d),(e)), (f,g))
+ * becomes
+ *   GROUPING SETS ((a), (b,c), (c,d), (e), (f,g))
+ *
+ * This is per the spec's syntax transformations, but these are the only such
+ * transformations we do in parse analysis, so that queries retain the
+ * originally specified grouping set syntax for CUBE and ROLLUP as much as
+ * possible when deparsed. (Full expansion of the result into a list of
+ * grouping sets is left to the planner.)
+ *
+ * When we're done, the resulting list should contain only these possible
+ * elements:
+ *   - an expression
+ *   - a CUBE or ROLLUP with a list of expressions nested 2 deep
+ *   - a GROUPING SET containing any of:
+ *      - expression lists
+ *      - empty grouping sets
+ *      - CUBE or ROLLUP nodes with lists nested 2 deep
+ * The return is a new list, but doesn't deep-copy the old nodes except for
+ * GroupingSet nodes.
+ *
+ * As a side effect, flag whether the list has any GroupingSet nodes.
+ *-------------------------------------------------------------------------
+ */
+static Node *
+flatten_grouping_sets(Node *expr, bool toplevel, bool *hasGroupingSets)
+{
+	/* just in case of pathological input */
+	check_stack_depth();
+
+	if (expr == (Node *) NIL)
+		return (Node *) NIL;
+
+	switch (expr->type)
+	{
+		case T_RowExpr:
+			{
+				RowExpr *r = (RowExpr *) expr;
+				if (r->row_format == COERCE_IMPLICIT_CAST)
+					return flatten_grouping_sets((Node *) r->args,
+												 false, NULL);
+			}
+			break;
+		case T_GroupingSet:
+			{
+				GroupingSet *gset = (GroupingSet *) expr;
+				ListCell   *l2;
+				List	   *result_set = NIL;
+
+				if (hasGroupingSets)
+					*hasGroupingSets = true;
+
+				/*
+				 * at the top level, we skip over all empty grouping sets; the
+				 * caller can supply the canonical GROUP BY () if nothing is left.
+				 */
+
+				if (toplevel && gset->kind == GROUPING_SET_EMPTY)
+					return (Node *) NIL;
+
+				foreach(l2, gset->content)
+				{
+					Node   *n2 = flatten_grouping_sets(lfirst(l2), false, NULL);
+
+					result_set = lappend(result_set, n2);
+				}
+
+				/*
+				 * At top level, keep the grouping set node; but if we're in a nested
+				 * grouping set, then we need to concat the flattened result into the
+				 * outer list if it's simply nested.
+				 */
+
+				if (toplevel || (gset->kind != GROUPING_SET_SETS))
+				{
+					return (Node *) makeGroupingSet(gset->kind, result_set, gset->location);
+				}
+				else
+					return (Node *) result_set;
+			}
+		case T_List:
+			{
+				List	   *result = NIL;
+				ListCell   *l;
+
+				foreach(l, (List *) expr)
+				{
+					Node   *n = flatten_grouping_sets(lfirst(l), toplevel, hasGroupingSets);
+
+					if (n != (Node *) NIL)
+					{
+						if (IsA(n, List))
+							result = list_concat(result, (List *) n);
+						else
+							result = lappend(result, n);
+					}
+				}
+
+				return (Node *) result;
+			}
+		default:
+			break;
+	}
+
+	return expr;
+}
+
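To make the header comment's transformation concrete, here is a Python sketch of the SETS-in-SETS splicing and the normalization of bare columns into one-element lists. The node representation is invented for the example (a string is a bare column reference, a list is a parenthesized expression list, and a ("sets", ...) tuple stands in for a nested GROUPING SETS node); CUBE/ROLLUP nodes would simply be passed through unchanged:

```python
def flatten_sets(content):
    # Splice nested GROUPING SETS into the parent list and normalize
    # bare columns to one-element lists, per the spec's syntax
    # transformations mirrored by flatten_grouping_sets().
    out = []
    for item in content:
        if isinstance(item, tuple) and item[0] == "sets":
            # GROUPING SETS directly inside GROUPING SETS: flatten
            out.extend(flatten_sets(item[1]))
        elif isinstance(item, str):
            # bare column: normalize a -> (a)
            out.append([item])
        else:
            out.append(item)
    return out
```

Applied to the header's example, GROUPING SETS (a, (b,c), GROUPING SETS ((c,d),(e)), (f,g)) flattens to GROUPING SETS ((a), (b,c), (c,d), (e), (f,g)).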
 /*
- * transformGroupClause -
- *	  transform a GROUP BY clause
+ * Transform a single expression within a GROUP BY clause or grouping set.
  *
- * GROUP BY items will be added to the targetlist (as resjunk columns)
- * if not already present, so the targetlist must be passed by reference.
+ * The expression is added to the targetlist if not already present, and to the
+ * flatresult list (which will become the groupClause) if not already present
+ * there.  The sortClause is consulted for operator and sort order hints.
  *
- * This is also used for window PARTITION BY clauses (which act almost the
- * same, but are always interpreted per SQL99 rules).
+ * Returns the ressortgroupref of the expression.
+ *
+ * flatresult	reference to flat list of SortGroupClause nodes
+ * seen_local	bitmapset of sortgrouprefs already seen at the local level
+ * pstate		ParseState
+ * gexpr		node to transform
+ * targetlist	reference to TargetEntry list
+ * sortClause	ORDER BY clause (SortGroupClause nodes)
+ * exprKind		expression kind
+ * useSQL99		SQL99 rather than SQL92 syntax
+ * toplevel		false if within any grouping set
  */
-List *
-transformGroupClause(ParseState *pstate, List *grouplist,
-					 List **targetlist, List *sortClause,
-					 ParseExprKind exprKind, bool useSQL99)
+static Index
+transformGroupClauseExpr(List **flatresult, Bitmapset *seen_local,
+						 ParseState *pstate, Node *gexpr,
+						 List **targetlist, List *sortClause,
+						 ParseExprKind exprKind, bool useSQL99, bool toplevel)
 {
-	List	   *result = NIL;
-	ListCell   *gl;
+	TargetEntry *tle;
+	bool		found = false;
+
+	if (useSQL99)
+		tle = findTargetlistEntrySQL99(pstate, gexpr,
+									   targetlist, exprKind);
+	else
+		tle = findTargetlistEntrySQL92(pstate, gexpr,
+									   targetlist, exprKind);
 
-	foreach(gl, grouplist)
+	if (tle->ressortgroupref > 0)
 	{
-		Node	   *gexpr = (Node *) lfirst(gl);
-		TargetEntry *tle;
-		bool		found = false;
+		ListCell   *sl;
 
-		if (useSQL99)
-			tle = findTargetlistEntrySQL99(pstate, gexpr,
-										   targetlist, exprKind);
-		else
-			tle = findTargetlistEntrySQL92(pstate, gexpr,
-										   targetlist, exprKind);
+		/*
+		 * Eliminate duplicates (GROUP BY x, x) but only at local level.
+		 * (Duplicates in grouping sets can affect the number of returned
+		 * rows, so can't be dropped indiscriminately.)
+		 *
+		 * Since we don't care about anything except the sortgroupref,
+		 * we can use a bitmapset rather than scanning lists.
+		 */
+		if (bms_is_member(tle->ressortgroupref, seen_local))
+			return 0;
 
-		/* Eliminate duplicates (GROUP BY x, x) */
-		if (targetIsInSortList(tle, InvalidOid, result))
-			continue;
+		/*
+		 * If we're already in the flat clause list, we don't need
+		 * to consider adding ourselves again.
+		 */
+		found = targetIsInSortList(tle, InvalidOid, *flatresult);
+		if (found)
+			return tle->ressortgroupref;
 
 		/*
 		 * If the GROUP BY tlist entry also appears in ORDER BY, copy operator
@@ -1714,35 +1856,308 @@ transformGroupClause(ParseState *pstate, List *grouplist,
 		 * sort step, and it allows the user to choose the equality semantics
 		 * used by GROUP BY, should she be working with a datatype that has
 		 * more than one equality operator.
+		 *
+		 * If we're in a grouping set, though, we force our requested ordering
+		 * to be NULLS LAST, because if we have any hope of using a sorted agg
+		 * for the job, we're going to be tacking on generated NULL values
+		 * after the corresponding groups. If the user demands nulls first,
+		 * another sort step is going to be inevitable, but that's the
+		 * planner's problem.
 		 */
-		if (tle->ressortgroupref > 0)
+
+		foreach(sl, sortClause)
 		{
-			ListCell   *sl;
+			SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
 
-			foreach(sl, sortClause)
+			if (sc->tleSortGroupRef == tle->ressortgroupref)
 			{
-				SortGroupClause *sc = (SortGroupClause *) lfirst(sl);
+				SortGroupClause *grpc = copyObject(sc);
+
+				if (!toplevel)
+					grpc->nulls_first = false;
+				*flatresult = lappend(*flatresult, grpc);
+				found = true;
+				break;
+			}
+		}
+	}
 
-				if (sc->tleSortGroupRef == tle->ressortgroupref)
-				{
-					result = lappend(result, copyObject(sc));
-					found = true;
+	/*
+	 * If no match in ORDER BY, just add it to the result using default
+	 * sort/group semantics.
+	 */
+	if (!found)
+		*flatresult = addTargetToGroupList(pstate, tle,
+										   *flatresult, *targetlist,
+										   exprLocation(gexpr),
+										   true);
+
+	/*
+	 * _something_ must have assigned us a sortgroupref by now...
+	 */
+
+	return tle->ressortgroupref;
+}
+
+/*
+ * Transform a list of expressions within a GROUP BY clause or grouping set.
+ *
+ * The list of expressions belongs to a single clause within which duplicates
+ * can be safely eliminated.
+ *
+ * Returns an integer list of ressortgroupref values.
+ *
+ * flatresult	reference to flat list of SortGroupClause nodes
+ * pstate		ParseState
+ * list			nodes to transform
+ * targetlist	reference to TargetEntry list
+ * sortClause	ORDER BY clause (SortGroupClause nodes)
+ * exprKind		expression kind
+ * useSQL99		SQL99 rather than SQL92 syntax
+ * toplevel		false if within any grouping set
+ */
+static List *
+transformGroupClauseList(List **flatresult,
+						 ParseState *pstate, List *list,
+						 List **targetlist, List *sortClause,
+						 ParseExprKind exprKind, bool useSQL99, bool toplevel)
+{
+	Bitmapset  *seen_local = NULL;
+	List	   *result = NIL;
+	ListCell   *gl;
+
+	foreach(gl, list)
+	{
+		Node        *gexpr = (Node *) lfirst(gl);
+
+		Index ref = transformGroupClauseExpr(flatresult,
+											 seen_local,
+											 pstate,
+											 gexpr,
+											 targetlist,
+											 sortClause,
+											 exprKind,
+											 useSQL99,
+											 toplevel);
+		if (ref > 0)
+		{
+			seen_local = bms_add_member(seen_local, ref);
+			result = lappend_int(result, ref);
+		}
+	}
+
+	return result;
+}
+
+/*
+ * Transform a grouping set and (recursively) its content.
+ *
+ * The grouping set might be a GROUPING SETS node with other grouping sets
+ * inside it, but SETS within SETS have already been flattened out before
+ * reaching here.
+ *
+ * Returns the transformed node, which now contains SIMPLE nodes with lists
+ * of ressortgrouprefs rather than expressions.
+ *
+ * flatresult	reference to flat list of SortGroupClause nodes
+ * pstate		ParseState
+ * gset			grouping set to transform
+ * targetlist	reference to TargetEntry list
+ * sortClause	ORDER BY clause (SortGroupClause nodes)
+ * exprKind		expression kind
+ * useSQL99		SQL99 rather than SQL92 syntax
+ * toplevel		false if within any grouping set
+ */
+static Node *
+transformGroupingSet(List **flatresult,
+					 ParseState *pstate, GroupingSet *gset,
+					 List **targetlist, List *sortClause,
+					 ParseExprKind exprKind, bool useSQL99, bool toplevel)
+{
+	ListCell   *gl;
+	List	   *content = NIL;
+
+	Assert(toplevel || gset->kind != GROUPING_SET_SETS);
+
+	foreach(gl, gset->content)
+	{
+		Node   *n = lfirst(gl);
+
+		if (IsA(n, List))
+		{
+			List *l = transformGroupClauseList(flatresult,
+											   pstate, (List *) n,
+											   targetlist, sortClause,
+											   exprKind, useSQL99, false);
+
+			content = lappend(content, makeGroupingSet(GROUPING_SET_SIMPLE,
+													   l,
+													   exprLocation(n)));
+		}
+		else if (IsA(n, GroupingSet))
+		{
+			GroupingSet *gset2 = (GroupingSet *) lfirst(gl);
+
+			content = lappend(content, transformGroupingSet(flatresult,
+															pstate, gset2,
+															targetlist, sortClause,
+															exprKind, useSQL99, false));
+		}
+		else
+		{
+			Index ref = transformGroupClauseExpr(flatresult,
+												 NULL,
+												 pstate,
+												 n,
+												 targetlist,
+												 sortClause,
+												 exprKind,
+												 useSQL99,
+												 false);
+
+			content = lappend(content, makeGroupingSet(GROUPING_SET_SIMPLE,
+													   list_make1_int(ref),
+													   exprLocation(n)));
+		}
+	}
+
+	/* Arbitrarily cap the size of CUBE, which has exponential growth */
+	if (gset->kind == GROUPING_SET_CUBE)
+	{
+		if (list_length(content) > 12)
+			ereport(ERROR,
+					(errcode(ERRCODE_TOO_MANY_COLUMNS),
+					 errmsg("CUBE is limited to 12 elements"),
+					 parser_errposition(pstate, gset->location)));
+	}
+
+	return (Node *) makeGroupingSet(gset->kind, content, gset->location);
+}
+
+
+/*
+ * transformGroupClause -
+ *	  transform a GROUP BY clause
+ *
+ * GROUP BY items will be added to the targetlist (as resjunk columns)
+ * if not already present, so the targetlist must be passed by reference.
+ *
+ * This is also used for window PARTITION BY clauses (which act almost the
+ * same, but are always interpreted per SQL99 rules).
+ *
+ * Grouping sets make this a lot more complex than it was. Our goal here is
+ * twofold: we make a flat list of SortGroupClause nodes referencing each
+ * distinct expression used for grouping, with those expressions added to the
+ * targetlist if needed. At the same time, we build the groupingSets tree,
+ * which stores only ressortgrouprefs as integer lists inside GroupingSet nodes
+ * (possibly nested, but limited in depth: a GROUPING_SET_SETS node can contain
+ * nested SIMPLE, CUBE or ROLLUP nodes, but not more sets - we flatten that
+ * out; while CUBE and ROLLUP can contain only SIMPLE nodes).
+ *
+ * We skip much of the hard work if there are no grouping sets.
+ *
+ * One subtlety is that the groupClause list can end up empty while the
+ * groupingSets list is not; this happens if there are only empty grouping
+ * sets, or an explicit GROUP BY (). This has the same effect as specifying
+ * aggregates or a HAVING clause with no GROUP BY; the output is one row per
+ * grouping set even if the input is empty.
+ *
+ * Returns the transformed (flat) groupClause.
+ *
+ * pstate		ParseState
+ * grouplist	clause to transform
+ * groupingSets	reference to list to contain the grouping set tree
+ * targetlist	reference to TargetEntry list
+ * sortClause	ORDER BY clause (SortGroupClause nodes)
+ * exprKind		expression kind
+ * useSQL99		SQL99 rather than SQL92 syntax
+ */
+List *
+transformGroupClause(ParseState *pstate, List *grouplist, List **groupingSets,
+					 List **targetlist, List *sortClause,
+					 ParseExprKind exprKind, bool useSQL99)
+{
+	List	   *result = NIL;
+	List	   *flat_grouplist;
+	List	   *gsets = NIL;
+	ListCell   *gl;
+	bool        hasGroupingSets = false;
+	Bitmapset  *seen_local = NULL;
+
+	/*
+	 * Recursively flatten implicit RowExprs. (Technically this is only
+	 * needed for GROUP BY, per the syntax rules for grouping sets, but
+	 * we do it anyway.)
+	 */
+	flat_grouplist = (List *) flatten_grouping_sets((Node *) grouplist,
+													true,
+													&hasGroupingSets);
+
+	/*
+	 * If the list is now empty, but hasGroupingSets is true, it's because
+	 * we elided redundant empty grouping sets. Restore a single empty
+	 * grouping set to leave a canonical form: GROUP BY ()
+	 */
+
+	if (flat_grouplist == NIL && hasGroupingSets)
+	{
+		flat_grouplist = list_make1(makeGroupingSet(GROUPING_SET_EMPTY,
+													NIL,
+													exprLocation((Node *) grouplist)));
+	}
+
+	foreach(gl, flat_grouplist)
+	{
+		Node        *gexpr = (Node *) lfirst(gl);
+
+		if (IsA(gexpr, GroupingSet))
+		{
+			GroupingSet *gset = (GroupingSet *) gexpr;
+
+			switch (gset->kind)
+			{
+				case GROUPING_SET_EMPTY:
+					gsets = lappend(gsets, gset);
+					break;
+				case GROUPING_SET_SIMPLE:
+					/* can't happen */
+					Assert(false);
+					break;
+				case GROUPING_SET_SETS:
+				case GROUPING_SET_CUBE:
+				case GROUPING_SET_ROLLUP:
+					gsets = lappend(gsets,
+									transformGroupingSet(&result,
+														 pstate, gset,
+														 targetlist, sortClause,
+														 exprKind, useSQL99, true));
 					break;
-				}
 			}
 		}
+		else
+		{
+			Index ref = transformGroupClauseExpr(&result, seen_local,
+												 pstate, gexpr,
+												 targetlist, sortClause,
+												 exprKind, useSQL99, true);
 
-		/*
-		 * If no match in ORDER BY, just add it to the result using default
-		 * sort/group semantics.
-		 */
-		if (!found)
-			result = addTargetToGroupList(pstate, tle,
-										  result, *targetlist,
-										  exprLocation(gexpr),
-										  true);
+			if (ref > 0)
+			{
+				seen_local = bms_add_member(seen_local, ref);
+				if (hasGroupingSets)
+					gsets = lappend(gsets,
+									makeGroupingSet(GROUPING_SET_SIMPLE,
+													list_make1_int(ref),
+													exprLocation(gexpr)));
+			}
+		}
 	}
 
+	/* parser should prevent this */
+	Assert(gsets == NIL || groupingSets != NULL);
+
+	if (groupingSets)
+		*groupingSets = gsets;
+
 	return result;
 }
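One point worth emphasizing about the transformation above: duplicate elimination is strictly per clause. GROUP BY x, x may safely drop the second x, but GROUPING SETS ((x), (x)) must keep both sets, because each grouping set produces its own output rows. A small Python sketch of that rule, with an invented representation where each inner list is one clause's sortgrouprefs, analogous to the seen_local bitmapset being reset per clause:

```python
def dedup_group_by(clauses):
    # Remove duplicates within each clause (like seen_local), but never
    # merge or drop clauses: the number of grouping sets must not change.
    out = []
    for clause in clauses:
        seen, kept = set(), []
        for ref in clause:
            if ref not in seen:
                seen.add(ref)
                kept.append(ref)
        out.append(kept)
    return out
```

So GROUP BY x, x collapses to one clause (x), while GROUPING SETS ((x), (x)) keeps two identical sets and thus doubles the output rows for each group.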
 
@@ -1847,6 +2262,7 @@ transformWindowDefinitions(ParseState *pstate,
 										  true /* force SQL99 rules */ );
 		partitionClause = transformGroupClause(pstate,
 											   windef->partitionClause,
+											   NULL,
 											   targetlist,
 											   orderClause,
 											   EXPR_KIND_WINDOW_PARTITION,
diff --git a/src/backend/parser/parse_expr.c b/src/backend/parser/parse_expr.c
index f759606..0ff46dd 100644
--- a/src/backend/parser/parse_expr.c
+++ b/src/backend/parser/parse_expr.c
@@ -32,6 +32,7 @@
 #include "parser/parse_relation.h"
 #include "parser/parse_target.h"
 #include "parser/parse_type.h"
+#include "parser/parse_agg.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
 #include "utils/xml.h"
@@ -269,6 +270,10 @@ transformExprRecurse(ParseState *pstate, Node *expr)
 			result = transformMultiAssignRef(pstate, (MultiAssignRef *) expr);
 			break;
 
+		case T_GroupingFunc:
+			result = transformGroupingFunc(pstate, (GroupingFunc *) expr);
+			break;
+
 		case T_NamedArgExpr:
 			{
 				NamedArgExpr *na = (NamedArgExpr *) expr;
diff --git a/src/backend/parser/parse_target.c b/src/backend/parser/parse_target.c
index 59973ba..1b3fcd6 100644
--- a/src/backend/parser/parse_target.c
+++ b/src/backend/parser/parse_target.c
@@ -1681,6 +1681,10 @@ FigureColnameInternal(Node *node, char **name)
 			break;
 		case T_CollateClause:
 			return FigureColnameInternal(((CollateClause *) node)->arg, name);
+		case T_GroupingFunc:
+			/* make GROUPING() act like a regular function */
+			*name = "grouping";
+			return 2;
 		case T_SubLink:
 			switch (((SubLink *) node)->subLinkType)
 			{
diff --git a/src/backend/rewrite/rewriteHandler.c b/src/backend/rewrite/rewriteHandler.c
index 39302a4..84fbf2e 100644
--- a/src/backend/rewrite/rewriteHandler.c
+++ b/src/backend/rewrite/rewriteHandler.c
@@ -2158,7 +2158,7 @@ view_query_is_auto_updatable(Query *viewquery, bool check_cols)
 	if (viewquery->distinctClause != NIL)
 		return gettext_noop("Views containing DISTINCT are not automatically updatable.");
 
-	if (viewquery->groupClause != NIL)
+	if (viewquery->groupClause != NIL || viewquery->groupingSets)
 		return gettext_noop("Views containing GROUP BY are not automatically updatable.");
 
 	if (viewquery->havingQual != NULL)
diff --git a/src/backend/rewrite/rewriteManip.c b/src/backend/rewrite/rewriteManip.c
index a9c6e62..e3dfdef 100644
--- a/src/backend/rewrite/rewriteManip.c
+++ b/src/backend/rewrite/rewriteManip.c
@@ -92,6 +92,12 @@ contain_aggs_of_level_walker(Node *node,
 			return true;		/* abort the tree traversal and return true */
 		/* else fall through to examine argument */
 	}
+	if (IsA(node, GroupingFunc))
+	{
+		if (((GroupingFunc *) node)->agglevelsup == context->sublevels_up)
+			return true;
+		/* else fall through to examine argument */
+	}
 	if (IsA(node, Query))
 	{
 		/* Recurse into subselects */
@@ -157,6 +163,15 @@ locate_agg_of_level_walker(Node *node,
 		}
 		/* else fall through to examine argument */
 	}
+	if (IsA(node, GroupingFunc))
+	{
+		if (((GroupingFunc *) node)->agglevelsup == context->sublevels_up &&
+			((GroupingFunc *) node)->location >= 0)
+		{
+			context->agg_location = ((GroupingFunc *) node)->location;
+			return true;		/* abort the tree traversal and return true */
+		}
+	}
 	if (IsA(node, Query))
 	{
 		/* Recurse into subselects */
@@ -712,6 +727,14 @@ IncrementVarSublevelsUp_walker(Node *node,
 			agg->agglevelsup += context->delta_sublevels_up;
 		/* fall through to recurse into argument */
 	}
+	if (IsA(node, GroupingFunc))
+	{
+		GroupingFunc   *grp = (GroupingFunc *) node;
+
+		if (grp->agglevelsup >= context->min_sublevels_up)
+			grp->agglevelsup += context->delta_sublevels_up;
+		/* fall through to recurse into argument */
+	}
 	if (IsA(node, PlaceHolderVar))
 	{
 		PlaceHolderVar *phv = (PlaceHolderVar *) node;
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 903e80a..362f6ea 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -43,6 +43,7 @@
 #include "nodes/nodeFuncs.h"
 #include "optimizer/tlist.h"
 #include "parser/keywords.h"
+#include "parser/parse_node.h"
 #include "parser/parse_agg.h"
 #include "parser/parse_func.h"
 #include "parser/parse_oper.h"
@@ -104,6 +105,8 @@ typedef struct
 	int			wrapColumn;		/* max line length, or -1 for no limit */
 	int			indentLevel;	/* current indent level for prettyprint */
 	bool		varprefix;		/* TRUE to print prefixes on Vars */
+	ParseExprKind special_exprkind;	/* set only for exprkinds needing */
+									/* special handling */
 } deparse_context;
 
 /*
@@ -366,9 +369,11 @@ static void get_target_list(List *targetList, deparse_context *context,
 static void get_setop_query(Node *setOp, Query *query,
 				deparse_context *context,
 				TupleDesc resultDesc);
-static Node *get_rule_sortgroupclause(SortGroupClause *srt, List *tlist,
+static Node *get_rule_sortgroupclause(Index ref, List *tlist,
 						 bool force_colno,
 						 deparse_context *context);
+static void get_rule_groupingset(GroupingSet *gset, List *targetlist,
+								 bool omit_parens, deparse_context *context);
 static void get_rule_orderby(List *orderList, List *targetList,
 				 bool force_colno, deparse_context *context);
 static void get_rule_windowclause(Query *query, deparse_context *context);
@@ -416,8 +421,9 @@ static void printSubscripts(ArrayRef *aref, deparse_context *context);
 static char *get_relation_name(Oid relid);
 static char *generate_relation_name(Oid relid, List *namespaces);
 static char *generate_function_name(Oid funcid, int nargs,
-					   List *argnames, Oid *argtypes,
-					   bool has_variadic, bool *use_variadic_p);
+							List *argnames, Oid *argtypes,
+							bool has_variadic, bool *use_variadic_p,
+							ParseExprKind special_exprkind);
 static char *generate_operator_name(Oid operid, Oid arg1, Oid arg2);
 static text *string_to_text(char *str);
 static char *flatten_reloptions(Oid relid);
@@ -875,6 +881,7 @@ pg_get_triggerdef_worker(Oid trigid, bool pretty)
 		context.prettyFlags = pretty ? PRETTYFLAG_PAREN | PRETTYFLAG_INDENT : PRETTYFLAG_INDENT;
 		context.wrapColumn = WRAP_COLUMN_DEFAULT;
 		context.indentLevel = PRETTYINDENT_STD;
+		context.special_exprkind = EXPR_KIND_NONE;
 
 		get_rule_expr(qual, &context, false);
 
@@ -884,7 +891,7 @@ pg_get_triggerdef_worker(Oid trigid, bool pretty)
 	appendStringInfo(&buf, "EXECUTE PROCEDURE %s(",
 					 generate_function_name(trigrec->tgfoid, 0,
 											NIL, NULL,
-											false, NULL));
+											false, NULL, EXPR_KIND_NONE));
 
 	if (trigrec->tgnargs > 0)
 	{
@@ -2499,6 +2506,7 @@ deparse_expression_pretty(Node *expr, List *dpcontext,
 	context.prettyFlags = prettyFlags;
 	context.wrapColumn = WRAP_COLUMN_DEFAULT;
 	context.indentLevel = startIndent;
+	context.special_exprkind = EXPR_KIND_NONE;
 
 	get_rule_expr(expr, &context, showimplicit);
 
@@ -4109,6 +4117,7 @@ make_ruledef(StringInfo buf, HeapTuple ruletup, TupleDesc rulettc,
 		context.prettyFlags = prettyFlags;
 		context.wrapColumn = WRAP_COLUMN_DEFAULT;
 		context.indentLevel = PRETTYINDENT_STD;
+		context.special_exprkind = EXPR_KIND_NONE;
 
 		set_deparse_for_query(&dpns, query, NIL);
 
@@ -4260,6 +4269,7 @@ get_query_def(Query *query, StringInfo buf, List *parentnamespace,
 	context.prettyFlags = prettyFlags;
 	context.wrapColumn = wrapColumn;
 	context.indentLevel = startIndent;
+	context.special_exprkind = EXPR_KIND_NONE;
 
 	set_deparse_for_query(&dpns, query, parentnamespace);
 
@@ -4630,7 +4640,7 @@ get_basic_select_query(Query *query, deparse_context *context,
 				SortGroupClause *srt = (SortGroupClause *) lfirst(l);
 
 				appendStringInfoString(buf, sep);
-				get_rule_sortgroupclause(srt, query->targetList,
+				get_rule_sortgroupclause(srt->tleSortGroupRef, query->targetList,
 										 false, context);
 				sep = ", ";
 			}
@@ -4655,20 +4665,43 @@ get_basic_select_query(Query *query, deparse_context *context,
 	}
 
 	/* Add the GROUP BY clause if given */
-	if (query->groupClause != NULL)
+	if (query->groupClause != NULL || query->groupingSets != NULL)
 	{
+		ParseExprKind	save_exprkind;
+
 		appendContextKeyword(context, " GROUP BY ",
 							 -PRETTYINDENT_STD, PRETTYINDENT_STD, 1);
-		sep = "";
-		foreach(l, query->groupClause)
+
+		save_exprkind = context->special_exprkind;
+		context->special_exprkind = EXPR_KIND_GROUP_BY;
+
+		if (query->groupingSets == NIL)
 		{
-			SortGroupClause *grp = (SortGroupClause *) lfirst(l);
+			sep = "";
+			foreach(l, query->groupClause)
+			{
+				SortGroupClause *grp = (SortGroupClause *) lfirst(l);
 
-			appendStringInfoString(buf, sep);
-			get_rule_sortgroupclause(grp, query->targetList,
-									 false, context);
-			sep = ", ";
+				appendStringInfoString(buf, sep);
+				get_rule_sortgroupclause(grp->tleSortGroupRef, query->targetList,
+										 false, context);
+				sep = ", ";
+			}
 		}
+		else
+		{
+			sep = "";
+			foreach(l, query->groupingSets)
+			{
+				GroupingSet *grp = lfirst(l);
+
+				appendStringInfoString(buf, sep);
+				get_rule_groupingset(grp, query->targetList, true, context);
+				sep = ", ";
+			}
+		}
+
+		context->special_exprkind = save_exprkind;
 	}
 
 	/* Add the HAVING clause if given */
@@ -4954,23 +4987,24 @@ get_setop_query(Node *setOp, Query *query, deparse_context *context,
  * Also returns the expression tree, so caller need not find it again.
  */
 static Node *
-get_rule_sortgroupclause(SortGroupClause *srt, List *tlist, bool force_colno,
+get_rule_sortgroupclause(Index ref, List *tlist, bool force_colno,
 						 deparse_context *context)
 {
 	StringInfo	buf = context->buf;
 	TargetEntry *tle;
 	Node	   *expr;
 
-	tle = get_sortgroupclause_tle(srt, tlist);
+	tle = get_sortgroupref_tle(ref, tlist);
 	expr = (Node *) tle->expr;
 
 	/*
-	 * Use column-number form if requested by caller.  Otherwise, if
-	 * expression is a constant, force it to be dumped with an explicit cast
-	 * as decoration --- this is because a simple integer constant is
-	 * ambiguous (and will be misinterpreted by findTargetlistEntry()) if we
-	 * dump it without any decoration.  Otherwise, just dump the expression
-	 * normally.
+	 * Use column-number form if requested by caller.  Otherwise, if expression
+	 * is a constant, force it to be dumped with an explicit cast as decoration
+	 * --- this is because a simple integer constant is ambiguous (and will be
+	 * misinterpreted by findTargetlistEntry()) if we dump it without any
+	 * decoration.  If it's anything more complex than a simple Var, then force
+	 * extra parens around it, to ensure it can't be misinterpreted as a cube()
+	 * or rollup() construct.
 	 */
 	if (force_colno)
 	{
@@ -4979,13 +5013,92 @@ get_rule_sortgroupclause(SortGroupClause *srt, List *tlist, bool force_colno,
 	}
 	else if (expr && IsA(expr, Const))
 		get_const_expr((Const *) expr, context, 1);
+	else if (!expr || IsA(expr, Var))
+		get_rule_expr(expr, context, true);
 	else
+	{
+		/*
+		 * We must force parens for function-like expressions even if
+		 * PRETTY_PAREN is off, since those are the ones in danger of
+		 * misparsing.  For other expressions we need to force parens only
+		 * if PRETTY_PAREN is on, since otherwise the deparsed expression
+		 * comes out fully parenthesized anyway.
+		 */
+		bool	need_paren = (PRETTY_PAREN(context)
+							  || IsA(expr, FuncExpr)
+							  || IsA(expr, Aggref)
+							  || IsA(expr, WindowFunc));
+		if (need_paren)
+			appendStringInfoString(context->buf, "(");
 		get_rule_expr(expr, context, true);
+		if (need_paren)
+			appendStringInfoString(context->buf, ")");
+	}
 
 	return expr;
 }
 
 /*
+ * Display a GroupingSet
+ */
+static void
+get_rule_groupingset(GroupingSet *gset, List *targetlist,
+					 bool omit_parens, deparse_context *context)
+{
+	ListCell   *l;
+	StringInfo	buf = context->buf;
+	bool		omit_child_parens = true;
+	char	   *sep = "";
+
+	switch (gset->kind)
+	{
+		case GROUPING_SET_EMPTY:
+			appendStringInfoString(buf, "()");
+			return;
+
+		case GROUPING_SET_SIMPLE:
+			{
+				if (!omit_parens || list_length(gset->content) != 1)
+					appendStringInfoString(buf, "(");
+
+				foreach(l, gset->content)
+				{
+					Index ref = lfirst_int(l);
+
+					appendStringInfoString(buf, sep);
+					get_rule_sortgroupclause(ref, targetlist,
+											 false, context);
+					sep = ", ";
+				}
+
+				if (!omit_parens || list_length(gset->content) != 1)
+					appendStringInfoString(buf, ")");
+			}
+			return;
+
+		case GROUPING_SET_ROLLUP:
+			appendStringInfoString(buf, "ROLLUP(");
+			break;
+		case GROUPING_SET_CUBE:
+			appendStringInfoString(buf, "CUBE(");
+			break;
+		case GROUPING_SET_SETS:
+			appendStringInfoString(buf, "GROUPING SETS (");
+			omit_child_parens = false;
+			break;
+	}
+
+	foreach(l, gset->content)
+	{
+		appendStringInfoString(buf, sep);
+		get_rule_groupingset(lfirst(l), targetlist, omit_child_parens, context);
+		sep = ", ";
+	}
+
+	appendStringInfoString(buf, ")");
+}
+
+/*
  * Display an ORDER BY list.
  */
 static void
@@ -5005,7 +5118,7 @@ get_rule_orderby(List *orderList, List *targetList,
 		TypeCacheEntry *typentry;
 
 		appendStringInfoString(buf, sep);
-		sortexpr = get_rule_sortgroupclause(srt, targetList,
+		sortexpr = get_rule_sortgroupclause(srt->tleSortGroupRef, targetList,
 											force_colno, context);
 		sortcoltype = exprType(sortexpr);
 		/* See whether operator is default < or > for datatype */
@@ -5105,7 +5218,7 @@ get_rule_windowspec(WindowClause *wc, List *targetList,
 			SortGroupClause *grp = (SortGroupClause *) lfirst(l);
 
 			appendStringInfoString(buf, sep);
-			get_rule_sortgroupclause(grp, targetList,
+			get_rule_sortgroupclause(grp->tleSortGroupRef, targetList,
 									 false, context);
 			sep = ", ";
 		}
@@ -6832,6 +6945,16 @@ get_rule_expr(Node *node, deparse_context *context,
 			get_agg_expr((Aggref *) node, context);
 			break;
 
+		case T_GroupingFunc:
+			{
+				GroupingFunc *gexpr = (GroupingFunc *) node;
+
+				appendStringInfoString(buf, "GROUPING(");
+				get_rule_expr((Node *) gexpr->args, context, true);
+				appendStringInfoChar(buf, ')');
+			}
+			break;
+
 		case T_WindowFunc:
 			get_windowfunc_expr((WindowFunc *) node, context);
 			break;
@@ -7870,7 +7993,8 @@ get_func_expr(FuncExpr *expr, deparse_context *context,
 					 generate_function_name(funcoid, nargs,
 											argnames, argtypes,
 											expr->funcvariadic,
-											&use_variadic));
+											&use_variadic,
+											context->special_exprkind));
 	nargs = 0;
 	foreach(l, expr->args)
 	{
@@ -7902,7 +8026,8 @@ get_agg_expr(Aggref *aggref, deparse_context *context)
 					 generate_function_name(aggref->aggfnoid, nargs,
 											NIL, argtypes,
 											aggref->aggvariadic,
-											&use_variadic),
+											&use_variadic,
+											context->special_exprkind),
 					 (aggref->aggdistinct != NIL) ? "DISTINCT " : "");
 
 	if (AGGKIND_IS_ORDERED_SET(aggref->aggkind))
@@ -7992,7 +8117,8 @@ get_windowfunc_expr(WindowFunc *wfunc, deparse_context *context)
 	appendStringInfo(buf, "%s(",
 					 generate_function_name(wfunc->winfnoid, nargs,
 											argnames, argtypes,
-											false, NULL));
+											false, NULL,
+											context->special_exprkind));
 	/* winstar can be set only in zero-argument aggregates */
 	if (wfunc->winstar)
 		appendStringInfoChar(buf, '*');
@@ -9241,7 +9367,8 @@ generate_relation_name(Oid relid, List *namespaces)
  */
 static char *
 generate_function_name(Oid funcid, int nargs, List *argnames, Oid *argtypes,
-					   bool has_variadic, bool *use_variadic_p)
+					   bool has_variadic, bool *use_variadic_p,
+					   ParseExprKind special_exprkind)
 {
 	char	   *result;
 	HeapTuple	proctup;
@@ -9256,6 +9383,7 @@ generate_function_name(Oid funcid, int nargs, List *argnames, Oid *argtypes,
 	int			p_nvargs;
 	Oid			p_vatype;
 	Oid		   *p_true_typeids;
+	bool		force_qualify = false;
 
 	proctup = SearchSysCache1(PROCOID, ObjectIdGetDatum(funcid));
 	if (!HeapTupleIsValid(proctup))
@@ -9264,6 +9392,16 @@ generate_function_name(Oid funcid, int nargs, List *argnames, Oid *argtypes,
 	proname = NameStr(procform->proname);
 
 	/*
+	 * Due to parser hacks to avoid needing to reserve CUBE, we need to force
+	 * qualification in some special cases.
+	 */
+	if (special_exprkind == EXPR_KIND_GROUP_BY)
+	{
+		if (strcmp(proname, "cube") == 0 || strcmp(proname, "rollup") == 0)
+			force_qualify = true;
+	}
+
+	/*
 	 * Determine whether VARIADIC should be printed.  We must do this first
 	 * since it affects the lookup rules in func_get_detail().
 	 *
@@ -9294,14 +9432,23 @@ generate_function_name(Oid funcid, int nargs, List *argnames, Oid *argtypes,
 	/*
 	 * The idea here is to schema-qualify only if the parser would fail to
 	 * resolve the correct function given the unqualified func name with the
-	 * specified argtypes and VARIADIC flag.
+	 * specified argtypes and VARIADIC flag.  But if we already decided to
+	 * force qualification, then we can skip the lookup and pretend we didn't
+	 * find it.
 	 */
-	p_result = func_get_detail(list_make1(makeString(proname)),
-							   NIL, argnames, nargs, argtypes,
-							   !use_variadic, true,
-							   &p_funcid, &p_rettype,
-							   &p_retset, &p_nvargs, &p_vatype,
-							   &p_true_typeids, NULL);
+	if (!force_qualify)
+		p_result = func_get_detail(list_make1(makeString(proname)),
+								   NIL, argnames, nargs, argtypes,
+								   !use_variadic, true,
+								   &p_funcid, &p_rettype,
+								   &p_retset, &p_nvargs, &p_vatype,
+								   &p_true_typeids, NULL);
+	else
+	{
+		p_result = FUNCDETAIL_NOTFOUND;
+		p_funcid = InvalidOid;
+	}
+
 	if ((p_result == FUNCDETAIL_NORMAL ||
 		 p_result == FUNCDETAIL_AGGREGATE ||
 		 p_result == FUNCDETAIL_WINDOWFUNC) &&
diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index 91399f7..04ed07b 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -3158,6 +3158,8 @@ add_unique_group_var(PlannerInfo *root, List *varinfos,
  *	groupExprs - list of expressions being grouped by
  *	input_rows - number of rows estimated to arrive at the group/unique
  *		filter step
+ *	pgset - NULL, or a List** pointing to a grouping set to filter the
+ *		groupExprs against
  *
  * Given the lack of any cross-correlation statistics in the system, it's
  * impossible to do anything really trustworthy with GROUP BY conditions
@@ -3205,11 +3207,13 @@ add_unique_group_var(PlannerInfo *root, List *varinfos,
  * but we don't have the info to do better).
  */
 double
-estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
+estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows,
+					List **pgset)
 {
 	List	   *varinfos = NIL;
 	double		numdistinct;
 	ListCell   *l;
+	int			i;
 
 	/*
 	 * We don't ever want to return an estimate of zero groups, as that tends
@@ -3224,7 +3228,7 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 	 * for normal cases with GROUP BY or DISTINCT, but it is possible for
 	 * corner cases with set operations.)
 	 */
-	if (groupExprs == NIL)
+	if (groupExprs == NIL || (pgset && list_length(*pgset) < 1))
 		return 1.0;
 
 	/*
@@ -3236,6 +3240,7 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 	 */
 	numdistinct = 1.0;
 
+	i = 0;
 	foreach(l, groupExprs)
 	{
 		Node	   *groupexpr = (Node *) lfirst(l);
@@ -3243,6 +3248,10 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows)
 		List	   *varshere;
 		ListCell   *l2;
 
+		/* is expression in this grouping set? */
+		if (pgset && !list_member_int(*pgset, i++))
+			continue;
+
 		/* Short-circuit for expressions returning boolean */
 		if (exprType(groupexpr) == BOOLOID)
 		{
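To make the new pgset behavior concrete, here is a hedged stand-in for the positional filter that estimate_num_groups() now applies. It uses plain arrays instead of PostgreSQL's List, and the names (IntSet, intset_member, count_grouped_exprs) are invented for illustration, mirroring list_member_int() and the i++ skip above:

```c
#include <stddef.h>

/* Hypothetical stand-in for PostgreSQL's integer List: a plain array. */
typedef struct IntSet { const int *items; size_t n; } IntSet;

/* Analogue of list_member_int(): is position i a member of the set? */
static int intset_member(const IntSet *s, int i)
{
	for (size_t k = 0; k < s->n; k++)
		if (s->items[k] == i)
			return 1;
	return 0;
}

/*
 * Analogue of the new pgset filter in estimate_num_groups(): walk the
 * group expressions by position, skip those not in the grouping set
 * (when one is supplied), and count how many survive.
 */
static size_t count_grouped_exprs(size_t nexprs, const IntSet *pgset)
{
	size_t	kept = 0;

	for (size_t i = 0; i < nexprs; i++)
	{
		/* is expression in this grouping set? */
		if (pgset && !intset_member(pgset, (int) i))
			continue;
		kept++;
	}
	return kept;
}
```

With a NULL set pointer all expressions are kept, matching the unchanged non-grouping-sets path.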
diff --git a/src/include/catalog/catversion.h b/src/include/catalog/catversion.h
index b6a6da9..bb8a9b0 100644
--- a/src/include/catalog/catversion.h
+++ b/src/include/catalog/catversion.h
@@ -53,6 +53,6 @@
  */
 
 /*							yyyymmddN */
-#define CATALOG_VERSION_NO	201505151
+#define CATALOG_VERSION_NO	201505152
 
 #endif
diff --git a/src/include/commands/explain.h b/src/include/commands/explain.h
index c9f7223..4df44d0 100644
--- a/src/include/commands/explain.h
+++ b/src/include/commands/explain.h
@@ -83,6 +83,8 @@ extern void ExplainSeparatePlans(ExplainState *es);
 
 extern void ExplainPropertyList(const char *qlabel, List *data,
 					ExplainState *es);
+extern void ExplainPropertyListNested(const char *qlabel, List *data,
+					ExplainState *es);
 extern void ExplainPropertyText(const char *qlabel, const char *value,
 					ExplainState *es);
 extern void ExplainPropertyInteger(const char *qlabel, int value,
diff --git a/src/include/lib/bipartite_match.h b/src/include/lib/bipartite_match.h
new file mode 100644
index 0000000..c80f9bf
--- /dev/null
+++ b/src/include/lib/bipartite_match.h
@@ -0,0 +1,44 @@
+/*
+ * bipartite_match.h
+ *
+ * Copyright (c) 2015, PostgreSQL Global Development Group
+ *
+ * src/include/lib/bipartite_match.h
+ */
+#ifndef BIPARTITE_MATCH_H
+#define BIPARTITE_MATCH_H
+
+/*
+ * Given a bipartite graph consisting of nodes U numbered 1..nU, nodes V
+ * numbered 1..nV, and an adjacency map of undirected edges in the form
+ * adjacency[u] = [n, v1, v2, v3, ... vn], we wish to find a "maximum
+ * cardinality matching", which is defined as follows: a matching is a subset
+ * of the original edges such that no node has more than one edge, and a
+ * matching has maximum cardinality if there exists no other matching with a
+ * greater number of edges.
+ *
+ * This matching has various applications in graph theory, but the motivating
+ * example here is Dilworth's theorem: a partially-ordered set can be divided
+ * into the minimum number of chains (i.e. subsets X where x1 < x2 < x3 ...) by
+ * a bipartite graph construction. This gives us a polynomial-time solution to
+ * the problem of planning a collection of grouping sets with the provably
+ * minimal number of sort operations.
+ */
+typedef struct bipartite_match_state
+{
+	int			u_size;			/* size of U */
+	int			v_size;			/* size of V */
+	int			matching;		/* number of edges in matching */
+	short	  **adjacency;		/* adjacency[u] = [n, v1,v2,v3,...,vn] */
+	short	   *pair_uv;		/* pair_uv[u] -> v */
+	short	   *pair_vu;		/* pair_vu[v] -> u */
+
+	float	   *distance;		/* distance[u], float so we can have +inf */
+	short	   *queue;			/* queue storage for breadth search */
+} BipartiteMatchState;
+
+extern BipartiteMatchState *BipartiteMatch(int u_size, int v_size, short **adjacency);
+
+extern void BipartiteMatchFree(BipartiteMatchState *state);
+
+#endif   /* BIPARTITE_MATCH_H */
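The header above fixes the adjacency convention but not the algorithm; as a sketch only, here is maximum bipartite matching under that same convention using Kuhn's augmenting-path method. This is a simple O(U*E) illustration, not necessarily the algorithm in the patch's bipartite_match.c (which is not shown here), and try_augment/max_matching are invented names:

```c
#include <stdlib.h>
#include <string.h>

/*
 * Try to find an augmenting path from node u, with the header's
 * convention adjacency[u] = [n, v1, v2, ..., vn], u in 1..u_size and
 * v in 1..v_size (0 means "unmatched").
 */
static int try_augment(short **adjacency, short *pair_uv, short *pair_vu,
					   char *visited, int u)
{
	short	n = adjacency[u][0];

	for (short k = 1; k <= n; k++)
	{
		short	v = adjacency[u][k];

		if (visited[v])
			continue;
		visited[v] = 1;
		/* v is free, or its current partner can be re-matched elsewhere */
		if (pair_vu[v] == 0 ||
			try_augment(adjacency, pair_uv, pair_vu, visited, pair_vu[v]))
		{
			pair_uv[u] = v;
			pair_vu[v] = u;
			return 1;
		}
	}
	return 0;
}

/* Returns the number of edges in a maximum-cardinality matching. */
static int max_matching(int u_size, int v_size, short **adjacency)
{
	short  *pair_uv = calloc(u_size + 1, sizeof(short));
	short  *pair_vu = calloc(v_size + 1, sizeof(short));
	char   *visited = malloc(v_size + 1);
	int		matching = 0;

	for (int u = 1; u <= u_size; u++)
	{
		memset(visited, 0, v_size + 1);
		matching += try_augment(adjacency, pair_uv, pair_vu, visited, u);
	}
	free(pair_uv);
	free(pair_vu);
	free(visited);
	return matching;
}
```

By Dilworth's theorem, U minus this matching size gives the minimum chain count, which is what the planner wants for the minimal number of sorts.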
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index fcfe110..c8c7e3e 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -23,6 +23,7 @@
 #include "utils/reltrigger.h"
 #include "utils/sortsupport.h"
 #include "utils/tuplestore.h"
+#include "utils/tuplesort.h"
 
 
 /* ----------------
@@ -615,6 +616,22 @@ typedef struct AggrefExprState
 } AggrefExprState;
 
 /* ----------------
+ *		GroupingFuncExprState node
+ *
+ * The list of column numbers refers to the input tuples of the Agg node to
+ * which the GroupingFunc belongs, and may contain 0 for references to columns
+ * that are only present in grouping sets processed by different Agg nodes (and
+ * which are therefore always considered "grouping" here).
+ * ----------------
+ */
+typedef struct GroupingFuncExprState
+{
+	ExprState	xprstate;
+	struct AggState *aggstate;
+	List	   *clauses;		/* integer list of column numbers */
+} GroupingFuncExprState;
+
+/* ----------------
  *		WindowFuncExprState node
  * ----------------
  */
@@ -1787,19 +1804,33 @@ typedef struct GroupState
 /* these structs are private in nodeAgg.c: */
 typedef struct AggStatePerAggData *AggStatePerAgg;
 typedef struct AggStatePerGroupData *AggStatePerGroup;
+typedef struct AggStatePerPhaseData *AggStatePerPhase;
 
 typedef struct AggState
 {
 	ScanState	ss;				/* its first field is NodeTag */
 	List	   *aggs;			/* all Aggref nodes in targetlist & quals */
 	int			numaggs;		/* length of list (could be zero!) */
-	FmgrInfo   *eqfunctions;	/* per-grouping-field equality fns */
+	AggStatePerPhase phase;		/* pointer to current phase data */
+	int			numphases;		/* number of phases */
+	int			current_phase;	/* current phase number */
 	FmgrInfo   *hashfunctions;	/* per-grouping-field hash fns */
 	AggStatePerAgg peragg;		/* per-Aggref information */
-	MemoryContext aggcontext;	/* memory context for long-lived data */
+	ExprContext **aggcontexts;	/* econtexts for long-lived data (per GS) */
 	ExprContext *tmpcontext;	/* econtext for input expressions */
 	AggStatePerAgg curperagg;	/* identifies currently active aggregate */
+	bool		input_done;		/* indicates end of input */
 	bool		agg_done;		/* indicates completion of Agg scan */
+	int			projected_set;	/* last projected grouping set */
+	int			current_set;	/* current grouping set being evaluated */
+	Bitmapset  *grouped_cols;	/* grouped cols in current projection */
+	List	   *all_grouped_cols; /* list of all grouped cols in DESC order */
+	/* These fields are for grouping set phase data */
+	int			maxsets;		/* max number of sets in any phase */
+	AggStatePerPhase phases;	/* array of all phases */
+	Tuplesortstate *sort_in;	/* sorted input to phases > 0 */
+	Tuplesortstate *sort_out;	/* input is copied here for next phase */
+	TupleTableSlot *sort_slot;	/* slot for sort results */
 	/* these fields are used in AGG_PLAIN and AGG_SORTED modes: */
 	AggStatePerGroup pergroup;	/* per-Aggref-per-group working state */
 	HeapTuple	grp_firstTuple; /* copy of first tuple of current group */
diff --git a/src/include/nodes/makefuncs.h b/src/include/nodes/makefuncs.h
index 4dff6a0..01d9fed 100644
--- a/src/include/nodes/makefuncs.h
+++ b/src/include/nodes/makefuncs.h
@@ -81,4 +81,6 @@ extern DefElem *makeDefElem(char *name, Node *arg);
 extern DefElem *makeDefElemExtended(char *nameSpace, char *name, Node *arg,
 					DefElemAction defaction);
 
+extern GroupingSet *makeGroupingSet(GroupingSetKind kind, List *content, int location);
+
 #endif   /* MAKEFUNC_H */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 768f413..159b16f 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -134,6 +134,7 @@ typedef enum NodeTag
 	T_Const,
 	T_Param,
 	T_Aggref,
+	T_GroupingFunc,
 	T_WindowFunc,
 	T_ArrayRef,
 	T_FuncExpr,
@@ -186,6 +187,7 @@ typedef enum NodeTag
 	T_GenericExprState,
 	T_WholeRowVarExprState,
 	T_AggrefExprState,
+	T_GroupingFuncExprState,
 	T_WindowFuncExprState,
 	T_ArrayRefExprState,
 	T_FuncExprState,
@@ -404,6 +406,7 @@ typedef enum NodeTag
 	T_RangeTblFunction,
 	T_WithCheckOption,
 	T_SortGroupClause,
+	T_GroupingSet,
 	T_WindowClause,
 	T_PrivGrantee,
 	T_FuncWithArgs,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 053f1b0..7ba75c5 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -138,6 +138,8 @@ typedef struct Query
 
 	List	   *groupClause;	/* a list of SortGroupClause's */
 
+	List	   *groupingSets;	/* a list of GroupingSet's if present */
+
 	Node	   *havingQual;		/* qualifications applied to groups */
 
 	List	   *windowClause;	/* a list of WindowClause's */
@@ -965,6 +967,73 @@ typedef struct SortGroupClause
 } SortGroupClause;
 
 /*
+ * GroupingSet -
+ *		representation of CUBE, ROLLUP and GROUPING SETS clauses
+ *
+ * In a Query with grouping sets, the groupClause contains a flat list of
+ * SortGroupClause nodes for each distinct expression used.  The actual
+ * structure of the GROUP BY clause is given by the groupingSets tree.
+ *
+ * In the raw parser output, GroupingSet nodes (of all types except SIMPLE
+ * which is not used) are potentially mixed in with the expressions in the
+ * groupClause of the SelectStmt.  (An expression can't contain a GroupingSet,
+ * but a list may mix GroupingSet and expression nodes.)  At this stage, the
+ * content of each node is a list of expressions, some of which may be RowExprs
+ * which represent sublists rather than actual row constructors, and nested
+ * GroupingSet nodes where legal in the grammar.  The structure directly
+ * reflects the query syntax.
+ *
+ * In parse analysis, the transformed expressions are used to build the tlist
+ * and groupClause list (of SortGroupClause nodes), and the groupingSets tree
+ * is eventually reduced to a fixed format:
+ *
+ * EMPTY nodes represent (), and obviously have no content
+ *
+ * SIMPLE nodes represent a list of one or more expressions to be treated as an
+ * atom by the enclosing structure; the content is an integer list of
+ * ressortgroupref values (see SortGroupClause)
+ *
+ * CUBE and ROLLUP nodes contain a list of one or more SIMPLE nodes.
+ *
+ * SETS nodes contain a list of EMPTY, SIMPLE, CUBE or ROLLUP nodes, but after
+ * parse analysis they cannot contain more SETS nodes; enough of the syntactic
+ * transforms of the spec have been applied that we no longer have arbitrarily
+ * deep nesting (though we still preserve the use of cube/rollup).
+ *
+ * Note that if the groupingSets tree contains no SIMPLE nodes (only EMPTY
+ * nodes at the leaves), then the groupClause will be empty, but this is still
+ * an aggregation query (similar to using aggs or HAVING without GROUP BY).
+ *
+ * As an example, the following clause:
+ *
+ * GROUP BY GROUPING SETS ((a,b), CUBE(c,(d,e)))
+ *
+ * looks like this after raw parsing:
+ *
+ * SETS( RowExpr(a,b) , CUBE( c, RowExpr(d,e) ) )
+ *
+ * and parse analysis converts it to:
+ *
+ * SETS( SIMPLE(1,2), CUBE( SIMPLE(3), SIMPLE(4,5) ) )
+ */
+typedef enum
+{
+	GROUPING_SET_EMPTY,
+	GROUPING_SET_SIMPLE,
+	GROUPING_SET_ROLLUP,
+	GROUPING_SET_CUBE,
+	GROUPING_SET_SETS
+} GroupingSetKind;
+
+typedef struct GroupingSet
+{
+	NodeTag		type;
+	GroupingSetKind kind;
+	List	   *content;
+	int			location;
+} GroupingSet;
+
+/*
  * WindowClause -
  *		transformed representation of WINDOW and OVER clauses
  *
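As a concrete illustration of the reduction just described: once a ROLLUP's SIMPLE nodes are known, its expansion into grouping sets is purely mechanical. The helper below is hypothetical (not from the patch; the patch's own expansion lives in expand_grouping_sets()), and for simplicity treats each SIMPLE node as a single sortgroupref, emitting the n+1 sets in one natural ordering (longest first):

```c
/*
 * Expand ROLLUP(r1, ..., rn), each ri a single sortgroupref, into its
 * n+1 grouping sets: every prefix of the ref list plus the empty set.
 * out must have room for n+1 rows; out_lens receives each set's length.
 * Returns the number of grouping sets produced.
 */
static int expand_rollup(const int *refs, int n, int out[][8], int *out_lens)
{
	for (int len = n; len >= 0; len--)
	{
		int		row = n - len;

		for (int i = 0; i < len; i++)
			out[row][i] = refs[i];	/* copy the prefix of length len */
		out_lens[row] = len;
	}
	return n + 1;
}
```

A CUBE over n nodes would instead produce all 2^n subsets, which is why the spec-mandated syntactic expansion can blow up and the patch keeps the tree form around.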
diff --git a/src/include/nodes/pg_list.h b/src/include/nodes/pg_list.h
index a175000..729456d 100644
--- a/src/include/nodes/pg_list.h
+++ b/src/include/nodes/pg_list.h
@@ -229,8 +229,9 @@ extern List *list_union_int(const List *list1, const List *list2);
 extern List *list_union_oid(const List *list1, const List *list2);
 
 extern List *list_intersection(const List *list1, const List *list2);
+extern List *list_intersection_int(const List *list1, const List *list2);
 
-/* currently, there's no need for list_intersection_int etc */
+/* currently, there's no need for list_intersection_ptr etc */
 
 extern List *list_difference(const List *list1, const List *list2);
 extern List *list_difference_ptr(const List *list1, const List *list2);
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index 65f71d8..6ee81b5 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -706,6 +706,8 @@ typedef struct Agg
 	AttrNumber *grpColIdx;		/* their indexes in the target list */
 	Oid		   *grpOperators;	/* equality operators to compare with */
 	long		numGroups;		/* estimated number of groups in input */
+	List	   *groupingSets;	/* grouping sets to use */
+	List	   *chain;			/* chained Agg/Sort nodes */
 } Agg;
 
 /* ----------------
diff --git a/src/include/nodes/primnodes.h b/src/include/nodes/primnodes.h
index 4a4dd7e..a5467c5 100644
--- a/src/include/nodes/primnodes.h
+++ b/src/include/nodes/primnodes.h
@@ -272,6 +272,41 @@ typedef struct Aggref
 } Aggref;
 
 /*
+ * GroupingFunc
+ *
+ * A GroupingFunc is a GROUPING(...) expression, which behaves in many ways
+ * like an aggregate function (e.g. it "belongs" to a specific query level,
+ * which might not be the one immediately containing it), but also differs in
+ * an important respect: it never evaluates its arguments; they merely
+ * designate expressions from the GROUP BY clause of the query level to
+ * which it belongs.
+ *
+ * The spec defines the evaluation of GROUPING() purely by syntactic
+ * replacement, but we make it a real expression for optimization purposes so
+ * that one Agg node can handle multiple grouping sets at once.  Evaluating the
+ * result only needs the column positions to check against the grouping set
+ * being projected.  However, for EXPLAIN to produce meaningful output, we have
+ * to keep the original expressions around, since expression deparse does not
+ * give us any feasible way to get at the GROUP BY clause.
+ *
+ * Also, we treat two GroupingFunc nodes as equal if they have equal argument
+ * lists and agglevelsup, without comparing the refs and cols annotations.
+ *
+ * In raw parse output we have only the args list; parse analysis fills in the
+ * refs list, and the planner fills in the cols list.
+ */
+typedef struct GroupingFunc
+{
+	Expr		xpr;
+	List	   *args;			/* arguments, not evaluated but kept for
+								 * benefit of EXPLAIN etc. */
+	List	   *refs;			/* ressortgrouprefs of arguments */
+	List	   *cols;			/* actual column positions set by planner */
+	Index		agglevelsup;	/* same as Aggref.agglevelsup */
+	int			location;		/* token location */
+} GroupingFunc;
+
+/*
  * WindowFunc
  */
 typedef struct WindowFunc
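To make the "only needs the column positions" point above concrete, here is a hedged sketch of computing a GROUPING(...) value: one result bit per argument, in order, set when that column is not grouped in the set currently being projected. The executor tests a bitmap of grouped columns; a plain membership scan stands in here, and the function name is invented:

```c
/*
 * Compute GROUPING(args...) against the grouping set currently being
 * projected: for each argument column in order, shift in 1 if the
 * column is NOT grouped in that set (i.e. is aggregated over), else 0.
 */
static int grouping_value(const int *arg_cols, int nargs,
						  const int *grouped_cols, int ngrouped)
{
	int		result = 0;

	for (int i = 0; i < nargs; i++)
	{
		int		grouped = 0;

		for (int j = 0; j < ngrouped; j++)
			if (grouped_cols[j] == arg_cols[i])
				grouped = 1;
		result = (result << 1) | (grouped ? 0 : 1);
	}
	return result;
}
```

So for GROUPING(a, b), the set (a) yields binary 01, the empty set () yields 11, and (a, b) yields 00, matching the spec's per-argument definition.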
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index d3ee61c..279051e 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -260,6 +260,9 @@ typedef struct PlannerInfo
 
 	/* optional private data for join_search_hook, e.g., GEQO */
 	void	   *join_search_private;
+
+	/* for GroupingFunc fixup in setrefs */
+	AttrNumber *grouping_map;
 } PlannerInfo;
 
 
diff --git a/src/include/optimizer/planmain.h b/src/include/optimizer/planmain.h
index da15fca..52b077a 100644
--- a/src/include/optimizer/planmain.h
+++ b/src/include/optimizer/planmain.h
@@ -59,6 +59,7 @@ extern Sort *make_sort_from_groupcols(PlannerInfo *root, List *groupcls,
 extern Agg *make_agg(PlannerInfo *root, List *tlist, List *qual,
 		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
 		 int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators,
+		 List *groupingSets,
 		 long numGroups,
 		 Plan *lefttree);
 extern WindowAgg *make_windowagg(PlannerInfo *root, List *tlist,
diff --git a/src/include/optimizer/tlist.h b/src/include/optimizer/tlist.h
index 3dc8bab..b0f0f19 100644
--- a/src/include/optimizer/tlist.h
+++ b/src/include/optimizer/tlist.h
@@ -43,6 +43,9 @@ extern Node *get_sortgroupclause_expr(SortGroupClause *sgClause,
 extern List *get_sortgrouplist_exprs(List *sgClauses,
 						List *targetList);
 
+extern SortGroupClause *get_sortgroupref_clause(Index sortref,
+					 List *clauses);
+
 extern Oid *extract_grouping_ops(List *groupClause);
 extern AttrNumber *extract_grouping_cols(List *groupClause, List *tlist);
 extern bool grouping_is_sortable(List *groupClause);
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index faea991..b30a1b1 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -99,6 +99,7 @@ PG_KEYWORD("cost", COST, UNRESERVED_KEYWORD)
 PG_KEYWORD("create", CREATE, RESERVED_KEYWORD)
 PG_KEYWORD("cross", CROSS, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("csv", CSV, UNRESERVED_KEYWORD)
+PG_KEYWORD("cube", CUBE, UNRESERVED_KEYWORD)
 PG_KEYWORD("current", CURRENT_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("current_catalog", CURRENT_CATALOG, RESERVED_KEYWORD)
 PG_KEYWORD("current_date", CURRENT_DATE, RESERVED_KEYWORD)
@@ -174,6 +175,7 @@ PG_KEYWORD("grant", GRANT, RESERVED_KEYWORD)
 PG_KEYWORD("granted", GRANTED, UNRESERVED_KEYWORD)
 PG_KEYWORD("greatest", GREATEST, COL_NAME_KEYWORD)
 PG_KEYWORD("group", GROUP_P, RESERVED_KEYWORD)
+PG_KEYWORD("grouping", GROUPING, COL_NAME_KEYWORD)
 PG_KEYWORD("handler", HANDLER, UNRESERVED_KEYWORD)
 PG_KEYWORD("having", HAVING, RESERVED_KEYWORD)
 PG_KEYWORD("header", HEADER_P, UNRESERVED_KEYWORD)
@@ -325,6 +327,7 @@ PG_KEYWORD("revoke", REVOKE, UNRESERVED_KEYWORD)
 PG_KEYWORD("right", RIGHT, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("role", ROLE, UNRESERVED_KEYWORD)
 PG_KEYWORD("rollback", ROLLBACK, UNRESERVED_KEYWORD)
+PG_KEYWORD("rollup", ROLLUP, UNRESERVED_KEYWORD)
 PG_KEYWORD("row", ROW, COL_NAME_KEYWORD)
 PG_KEYWORD("rows", ROWS, UNRESERVED_KEYWORD)
 PG_KEYWORD("rule", RULE, UNRESERVED_KEYWORD)
@@ -343,6 +346,7 @@ PG_KEYWORD("session", SESSION, UNRESERVED_KEYWORD)
 PG_KEYWORD("session_user", SESSION_USER, RESERVED_KEYWORD)
 PG_KEYWORD("set", SET, UNRESERVED_KEYWORD)
 PG_KEYWORD("setof", SETOF, COL_NAME_KEYWORD)
+PG_KEYWORD("sets", SETS, UNRESERVED_KEYWORD)
 PG_KEYWORD("share", SHARE, UNRESERVED_KEYWORD)
 PG_KEYWORD("show", SHOW, UNRESERVED_KEYWORD)
 PG_KEYWORD("similar", SIMILAR, TYPE_FUNC_NAME_KEYWORD)
diff --git a/src/include/parser/parse_agg.h b/src/include/parser/parse_agg.h
index 91a0706..6a5f9bb 100644
--- a/src/include/parser/parse_agg.h
+++ b/src/include/parser/parse_agg.h
@@ -18,11 +18,16 @@
 extern void transformAggregateCall(ParseState *pstate, Aggref *agg,
 					   List *args, List *aggorder,
 					   bool agg_distinct);
+
+extern Node *transformGroupingFunc(ParseState *pstate, GroupingFunc *g);
+
 extern void transformWindowFuncCall(ParseState *pstate, WindowFunc *wfunc,
 						WindowDef *windef);
 
 extern void parseCheckAggregates(ParseState *pstate, Query *qry);
 
+extern List *expand_grouping_sets(List *groupingSets, int limit);
+
 extern int	get_aggregate_argtypes(Aggref *aggref, Oid *inputTypes);
 
 extern Oid resolve_aggregate_transtype(Oid aggfuncid,
diff --git a/src/include/parser/parse_clause.h b/src/include/parser/parse_clause.h
index f1b7d3d..cbe5e76 100644
--- a/src/include/parser/parse_clause.h
+++ b/src/include/parser/parse_clause.h
@@ -27,6 +27,7 @@ extern Node *transformWhereClause(ParseState *pstate, Node *clause,
 extern Node *transformLimitClause(ParseState *pstate, Node *clause,
 					 ParseExprKind exprKind, const char *constructName);
 extern List *transformGroupClause(ParseState *pstate, List *grouplist,
+								  List **groupingSets,
 					 List **targetlist, List *sortClause,
 					 ParseExprKind exprKind, bool useSQL99);
 extern List *transformSortClause(ParseState *pstate, List *orderlist,
diff --git a/src/include/utils/selfuncs.h b/src/include/utils/selfuncs.h
index bf69f2a..fdca713 100644
--- a/src/include/utils/selfuncs.h
+++ b/src/include/utils/selfuncs.h
@@ -185,7 +185,7 @@ extern void mergejoinscansel(PlannerInfo *root, Node *clause,
 				 Selectivity *rightstart, Selectivity *rightend);
 
 extern double estimate_num_groups(PlannerInfo *root, List *groupExprs,
-					double input_rows);
+								  double input_rows, List **pgset);
 
 extern Selectivity estimate_hash_bucketsize(PlannerInfo *root, Node *hashkey,
 						 double nbuckets);
diff --git a/src/test/regress/expected/groupingsets.out b/src/test/regress/expected/groupingsets.out
new file mode 100644
index 0000000..842c2ae
--- /dev/null
+++ b/src/test/regress/expected/groupingsets.out
@@ -0,0 +1,590 @@
+--
+-- grouping sets
+--
+-- test data sources
+create temp view gstest1(a,b,v)
+  as values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),
+            (2,3,15),
+            (3,3,16),(3,4,17),
+            (4,1,18),(4,1,19);
+create temp table gstest2 (a integer, b integer, c integer, d integer,
+                           e integer, f integer, g integer, h integer);
+copy gstest2 from stdin;
+create temp table gstest3 (a integer, b integer, c integer, d integer);
+copy gstest3 from stdin;
+alter table gstest3 add primary key (a);
+create temp table gstest_empty (a integer, b integer, v integer);
+create function gstest_data(v integer, out a integer, out b integer)
+  returns setof record
+  as $f$
+    begin
+      return query select v, i from generate_series(1,3) i;
+    end;
+  $f$ language plpgsql;
+-- basic functionality
+-- simple rollup with multiple plain aggregates, with and without ordering
+-- (and with ordering differing from grouping)
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b);
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 | 1 |        0 |  21 |     2 |  11
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 1 |   |        1 |  60 |     5 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 2 |   |        1 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+ 3 |   |        1 |  33 |     2 |  17
+ 4 | 1 |        0 |  37 |     2 |  19
+ 4 |   |        1 |  37 |     2 |  19
+   |   |        3 | 145 |    10 |  19
+(12 rows)
+
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by a,b;
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 | 1 |        0 |  21 |     2 |  11
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 1 |   |        1 |  60 |     5 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 2 |   |        1 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+ 3 |   |        1 |  33 |     2 |  17
+ 4 | 1 |        0 |  37 |     2 |  19
+ 4 |   |        1 |  37 |     2 |  19
+   |   |        3 | 145 |    10 |  19
+(12 rows)
+
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by b desc, a;
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 |   |        1 |  60 |     5 |  14
+ 2 |   |        1 |  15 |     1 |  15
+ 3 |   |        1 |  33 |     2 |  17
+ 4 |   |        1 |  37 |     2 |  19
+   |   |        3 | 145 |    10 |  19
+ 3 | 4 |        0 |  17 |     1 |  17
+ 1 | 3 |        0 |  14 |     1 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 1 |        0 |  21 |     2 |  11
+ 4 | 1 |        0 |  37 |     2 |  19
+(12 rows)
+
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by coalesce(a,0)+coalesce(b,0);
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+   |   |        3 | 145 |    10 |  19
+ 1 |   |        1 |  60 |     5 |  14
+ 1 | 1 |        0 |  21 |     2 |  11
+ 2 |   |        1 |  15 |     1 |  15
+ 3 |   |        1 |  33 |     2 |  17
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 4 |   |        1 |  37 |     2 |  19
+ 4 | 1 |        0 |  37 |     2 |  19
+ 2 | 3 |        0 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+(12 rows)
+
+-- various types of ordered aggs
+select a, b, grouping(a,b),
+       array_agg(v order by v),
+       string_agg(v::text, ':' order by v desc),
+       percentile_disc(0.5) within group (order by v),
+       rank(1,2,12) within group (order by a,b,v)
+  from gstest1 group by rollup (a,b) order by a,b;
+ a | b | grouping |            array_agg            |          string_agg           | percentile_disc | rank 
+---+---+----------+---------------------------------+-------------------------------+-----------------+------
+ 1 | 1 |        0 | {10,11}                         | 11:10                         |              10 |    3
+ 1 | 2 |        0 | {12,13}                         | 13:12                         |              12 |    1
+ 1 | 3 |        0 | {14}                            | 14                            |              14 |    1
+ 1 |   |        1 | {10,11,12,13,14}                | 14:13:12:11:10                |              12 |    3
+ 2 | 3 |        0 | {15}                            | 15                            |              15 |    1
+ 2 |   |        1 | {15}                            | 15                            |              15 |    1
+ 3 | 3 |        0 | {16}                            | 16                            |              16 |    1
+ 3 | 4 |        0 | {17}                            | 17                            |              17 |    1
+ 3 |   |        1 | {16,17}                         | 17:16                         |              16 |    1
+ 4 | 1 |        0 | {18,19}                         | 19:18                         |              18 |    1
+ 4 |   |        1 | {18,19}                         | 19:18                         |              18 |    1
+   |   |        3 | {10,11,12,13,14,15,16,17,18,19} | 19:18:17:16:15:14:13:12:11:10 |              14 |    3
+(12 rows)
+
+-- test usage of grouped columns in direct args of aggs
+select grouping(a), a, array_agg(b),
+       rank(a) within group (order by b nulls first),
+       rank(a) within group (order by b nulls last)
+  from (values (1,1),(1,4),(1,5),(3,1),(3,2)) v(a,b)
+ group by rollup (a) order by a;
+ grouping | a |  array_agg  | rank | rank 
+----------+---+-------------+------+------
+        0 | 1 | {1,4,5}     |    1 |    1
+        0 | 3 | {1,2}       |    3 |    3
+        1 |   | {1,4,5,1,2} |    1 |    6
+(3 rows)
+
+-- nesting with window functions
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by rollup (a,b) order by rsum, a, b;
+ a | b | sum | rsum 
+---+---+-----+------
+ 1 | 1 |   8 |    8
+ 1 | 2 |   2 |   10
+ 1 |   |  10 |   20
+ 2 | 2 |   2 |   22
+ 2 |   |   2 |   24
+   |   |  12 |   36
+(6 rows)
+
+-- empty input: first is 0 rows, second 1, third 3 etc.
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),a);
+ a | b | sum | count 
+---+---+-----+-------
+(0 rows)
+
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),());
+ a | b | sum | count 
+---+---+-----+-------
+   |   |     |     0
+(1 row)
+
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),(),(),());
+ a | b | sum | count 
+---+---+-----+-------
+   |   |     |     0
+   |   |     |     0
+   |   |     |     0
+(3 rows)
+
+select sum(v), count(*) from gstest_empty group by grouping sets ((),(),());
+ sum | count 
+-----+-------
+     |     0
+     |     0
+     |     0
+(3 rows)
+
+-- empty input with joins tests some important code paths
+select t1.a, t2.b, sum(t1.v), count(*) from gstest_empty t1, gstest_empty t2
+ group by grouping sets ((t1.a,t2.b),());
+ a | b | sum | count 
+---+---+-----+-------
+   |   |     |     0
+(1 row)
+
+-- simple joins, var resolution, GROUPING on join vars
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1, gstest2 t2
+ group by grouping sets ((t1.a, t2.b), ());
+ a | b | grouping | sum  | max 
+---+---+----------+------+-----
+ 1 | 1 |        0 |  420 |   1
+ 1 | 2 |        0 |  120 |   2
+ 2 | 1 |        0 |  105 |   1
+ 2 | 2 |        0 |   30 |   2
+ 3 | 1 |        0 |  231 |   1
+ 3 | 2 |        0 |   66 |   2
+ 4 | 1 |        0 |  259 |   1
+ 4 | 2 |        0 |   74 |   2
+   |   |        3 | 1305 |   2
+(9 rows)
+
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1 join gstest2 t2 on (t1.a=t2.a)
+ group by grouping sets ((t1.a, t2.b), ());
+ a | b | grouping | sum | max 
+---+---+----------+-----+-----
+ 1 | 1 |        0 | 420 |   1
+ 1 | 2 |        0 |  60 |   1
+ 2 | 2 |        0 |  15 |   2
+   |   |        3 | 495 |   2
+(4 rows)
+
+select a, b, grouping(a, b), sum(t1.v), max(t2.c)
+  from gstest1 t1 join gstest2 t2 using (a,b)
+ group by grouping sets ((a, b), ());
+ a | b | grouping | sum | max 
+---+---+----------+-----+-----
+ 1 | 1 |        0 | 147 |   2
+ 1 | 2 |        0 |  25 |   2
+   |   |        3 | 172 |   2
+(3 rows)
+
+-- check that functionally dependent cols are not nulled
+select a, d, grouping(a,b,c)
+  from gstest3
+ group by grouping sets ((a,b), (a,c));
+ a | d | grouping 
+---+---+----------
+ 1 | 1 |        1
+ 2 | 2 |        1
+ 1 | 1 |        2
+ 2 | 2 |        2
+(4 rows)
+
+-- simple rescan tests
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by rollup (a,b);
+ a | b | sum 
+---+---+-----
+ 1 | 1 |   1
+ 1 | 2 |   1
+ 1 | 3 |   1
+ 1 |   |   3
+ 2 | 1 |   2
+ 2 | 2 |   2
+ 2 | 3 |   2
+ 2 |   |   6
+   |   |   9
+(9 rows)
+
+select *
+  from (values (1),(2)) v(x),
+       lateral (select a, b, sum(v.x) from gstest_data(v.x) group by rollup (a,b)) s;
+ERROR:  aggregate functions are not allowed in FROM clause of their own query level
+LINE 3:        lateral (select a, b, sum(v.x) from gstest_data(v.x) ...
+                                     ^
+-- min max optimisation should still work with GROUP BY ()
+explain (costs off)
+  select min(unique1) from tenk1 GROUP BY ();
+                         QUERY PLAN                         
+------------------------------------------------------------
+ Result
+   InitPlan 1 (returns $0)
+     ->  Limit
+           ->  Index Only Scan using tenk1_unique1 on tenk1
+                 Index Cond: (unique1 IS NOT NULL)
+(5 rows)
+
+-- Views with GROUPING SET queries
+CREATE VIEW gstest_view AS select a, b, grouping(a,b), sum(c), count(*), max(c)
+  from gstest2 group by rollup ((a,b,c),(c,d));
+NOTICE:  view "gstest_view" will be a temporary view
+select pg_get_viewdef('gstest_view'::regclass, true);
+                                pg_get_viewdef                                 
+-------------------------------------------------------------------------------
+  SELECT gstest2.a,                                                           +
+     gstest2.b,                                                               +
+     GROUPING(gstest2.a, gstest2.b) AS "grouping",                            +
+     sum(gstest2.c) AS sum,                                                   +
+     count(*) AS count,                                                       +
+     max(gstest2.c) AS max                                                    +
+    FROM gstest2                                                              +
+   GROUP BY ROLLUP((gstest2.a, gstest2.b, gstest2.c), (gstest2.c, gstest2.d));
+(1 row)
+
+-- Nested queries with 3 or more levels of nesting
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+ grouping 
+----------
+        0
+        0
+        0
+(3 rows)
+
+select(select (select grouping(e,f) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+ grouping 
+----------
+        0
+        1
+        3
+(3 rows)
+
+select(select (select grouping(c) from (values (1)) v2(c) GROUP BY c) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+ grouping 
+----------
+        0
+        0
+        0
+(3 rows)
+
+-- Combinations of operations
+select a, b, c, d from gstest2 group by rollup(a,b),grouping sets(c,d);
+ a | b | c | d 
+---+---+---+---
+ 1 | 1 | 1 |  
+ 1 |   | 1 |  
+   |   | 1 |  
+ 1 | 1 | 2 |  
+ 1 | 2 | 2 |  
+ 1 |   | 2 |  
+ 2 | 2 | 2 |  
+ 2 |   | 2 |  
+   |   | 2 |  
+ 1 | 1 |   | 1
+ 1 |   |   | 1
+   |   |   | 1
+ 1 | 1 |   | 2
+ 1 | 2 |   | 2
+ 1 |   |   | 2
+ 2 | 2 |   | 2
+ 2 |   |   | 2
+   |   |   | 2
+(18 rows)
+
+select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
+ a | b 
+---+---
+ 1 | 2
+ 2 | 3
+(2 rows)
+
+-- Tests for chained aggregates
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by grouping sets ((a,b),(a+1,b+1),(a+2,b+2));
+ a | b | grouping | sum | count | max 
+---+---+----------+-----+-------+-----
+ 1 | 1 |        0 |  21 |     2 |  11
+ 1 | 2 |        0 |  25 |     2 |  13
+ 1 | 3 |        0 |  14 |     1 |  14
+ 2 | 3 |        0 |  15 |     1 |  15
+ 3 | 3 |        0 |  16 |     1 |  16
+ 3 | 4 |        0 |  17 |     1 |  17
+ 4 | 1 |        0 |  37 |     2 |  19
+   |   |        3 |  21 |     2 |  11
+   |   |        3 |  25 |     2 |  13
+   |   |        3 |  14 |     1 |  14
+   |   |        3 |  15 |     1 |  15
+   |   |        3 |  16 |     1 |  16
+   |   |        3 |  17 |     1 |  17
+   |   |        3 |  37 |     2 |  19
+   |   |        3 |  21 |     2 |  11
+   |   |        3 |  25 |     2 |  13
+   |   |        3 |  14 |     1 |  14
+   |   |        3 |  15 |     1 |  15
+   |   |        3 |  16 |     1 |  16
+   |   |        3 |  17 |     1 |  17
+   |   |        3 |  37 |     2 |  19
+(21 rows)
+
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP((e+1),(f+1));
+ grouping 
+----------
+        0
+        0
+        0
+(3 rows)
+
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY CUBE((e+1),(f+1)) ORDER BY (e+1),(f+1);
+ grouping 
+----------
+        0
+        0
+        0
+        0
+(4 rows)
+
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by cube (a,b) order by rsum, a, b;
+ a | b | sum | rsum 
+---+---+-----+------
+ 1 | 1 |   8 |    8
+ 1 | 2 |   2 |   10
+ 1 |   |  10 |   20
+ 2 | 2 |   2 |   22
+ 2 |   |   2 |   24
+   | 1 |   8 |   32
+   | 2 |   4 |   36
+   |   |  12 |   48
+(8 rows)
+
+select a, b, sum(c) from (values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),(2,3,15),(3,3,16),(3,4,17),(4,1,18),(4,1,19)) v(a,b,c) group by rollup (a,b);
+ a | b | sum 
+---+---+-----
+ 1 | 1 |  21
+ 1 | 2 |  25
+ 1 | 3 |  14
+ 1 |   |  60
+ 2 | 3 |  15
+ 2 |   |  15
+ 3 | 3 |  16
+ 3 | 4 |  17
+ 3 |   |  33
+ 4 | 1 |  37
+ 4 |   |  37
+   |   | 145
+(12 rows)
+
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by cube (a,b) order by a,b;
+ a | b | sum 
+---+---+-----
+ 1 | 1 |   1
+ 1 | 2 |   1
+ 1 | 3 |   1
+ 1 |   |   3
+ 2 | 1 |   2
+ 2 | 2 |   2
+ 2 | 3 |   2
+ 2 |   |   6
+   | 1 |   3
+   | 2 |   3
+   | 3 |   3
+   |   |   9
+(12 rows)
+
+-- Agg level check. This query should error out.
+select (select grouping(a,b) from gstest2) from gstest2 group by a,b;
+ERROR:  arguments to GROUPING must be grouping expressions of the associated query level
+LINE 1: select (select grouping(a,b) from gstest2) from gstest2 grou...
+                                ^
+--Nested queries
+select a, b, sum(c), count(*) from gstest2 group by grouping sets (rollup(a,b),a);
+ a | b | sum | count 
+---+---+-----+-------
+ 1 | 1 |   8 |     7
+ 1 | 2 |   2 |     1
+ 1 |   |  10 |     8
+ 1 |   |  10 |     8
+ 2 | 2 |   2 |     1
+ 2 |   |   2 |     1
+ 2 |   |   2 |     1
+   |   |  12 |     9
+(8 rows)
+
+-- HAVING queries
+select ten, sum(distinct four) from onek a
+group by grouping sets((ten,four),(ten))
+having exists (select 1 from onek b where sum(distinct a.four) = b.four);
+ ten | sum 
+-----+-----
+   0 |   0
+   0 |   2
+   0 |   2
+   1 |   1
+   1 |   3
+   2 |   0
+   2 |   2
+   2 |   2
+   3 |   1
+   3 |   3
+   4 |   0
+   4 |   2
+   4 |   2
+   5 |   1
+   5 |   3
+   6 |   0
+   6 |   2
+   6 |   2
+   7 |   1
+   7 |   3
+   8 |   0
+   8 |   2
+   8 |   2
+   9 |   1
+   9 |   3
+(25 rows)
+
+-- FILTER queries
+select ten, sum(distinct four) filter (where four::text ~ '123') from onek a
+group by rollup(ten);
+ ten | sum 
+-----+-----
+   0 |    
+   1 |    
+   2 |    
+   3 |    
+   4 |    
+   5 |    
+   6 |    
+   7 |    
+   8 |    
+   9 |    
+     |    
+(11 rows)
+
+-- More rescan tests
+select * from (values (1),(2)) v(a) left join lateral (select v.a, four, ten, count(*) from onek group by cube(four,ten)) s on true order by v.a,four,ten;
+ a | a | four | ten | count 
+---+---+------+-----+-------
+ 1 | 1 |    0 |   0 |    50
+ 1 | 1 |    0 |   2 |    50
+ 1 | 1 |    0 |   4 |    50
+ 1 | 1 |    0 |   6 |    50
+ 1 | 1 |    0 |   8 |    50
+ 1 | 1 |    0 |     |   250
+ 1 | 1 |    1 |   1 |    50
+ 1 | 1 |    1 |   3 |    50
+ 1 | 1 |    1 |   5 |    50
+ 1 | 1 |    1 |   7 |    50
+ 1 | 1 |    1 |   9 |    50
+ 1 | 1 |    1 |     |   250
+ 1 | 1 |    2 |   0 |    50
+ 1 | 1 |    2 |   2 |    50
+ 1 | 1 |    2 |   4 |    50
+ 1 | 1 |    2 |   6 |    50
+ 1 | 1 |    2 |   8 |    50
+ 1 | 1 |    2 |     |   250
+ 1 | 1 |    3 |   1 |    50
+ 1 | 1 |    3 |   3 |    50
+ 1 | 1 |    3 |   5 |    50
+ 1 | 1 |    3 |   7 |    50
+ 1 | 1 |    3 |   9 |    50
+ 1 | 1 |    3 |     |   250
+ 1 | 1 |      |   0 |   100
+ 1 | 1 |      |   1 |   100
+ 1 | 1 |      |   2 |   100
+ 1 | 1 |      |   3 |   100
+ 1 | 1 |      |   4 |   100
+ 1 | 1 |      |   5 |   100
+ 1 | 1 |      |   6 |   100
+ 1 | 1 |      |   7 |   100
+ 1 | 1 |      |   8 |   100
+ 1 | 1 |      |   9 |   100
+ 1 | 1 |      |     |  1000
+ 2 | 2 |    0 |   0 |    50
+ 2 | 2 |    0 |   2 |    50
+ 2 | 2 |    0 |   4 |    50
+ 2 | 2 |    0 |   6 |    50
+ 2 | 2 |    0 |   8 |    50
+ 2 | 2 |    0 |     |   250
+ 2 | 2 |    1 |   1 |    50
+ 2 | 2 |    1 |   3 |    50
+ 2 | 2 |    1 |   5 |    50
+ 2 | 2 |    1 |   7 |    50
+ 2 | 2 |    1 |   9 |    50
+ 2 | 2 |    1 |     |   250
+ 2 | 2 |    2 |   0 |    50
+ 2 | 2 |    2 |   2 |    50
+ 2 | 2 |    2 |   4 |    50
+ 2 | 2 |    2 |   6 |    50
+ 2 | 2 |    2 |   8 |    50
+ 2 | 2 |    2 |     |   250
+ 2 | 2 |    3 |   1 |    50
+ 2 | 2 |    3 |   3 |    50
+ 2 | 2 |    3 |   5 |    50
+ 2 | 2 |    3 |   7 |    50
+ 2 | 2 |    3 |   9 |    50
+ 2 | 2 |    3 |     |   250
+ 2 | 2 |      |   0 |   100
+ 2 | 2 |      |   1 |   100
+ 2 | 2 |      |   2 |   100
+ 2 | 2 |      |   3 |   100
+ 2 | 2 |      |   4 |   100
+ 2 | 2 |      |   5 |   100
+ 2 | 2 |      |   6 |   100
+ 2 | 2 |      |   7 |   100
+ 2 | 2 |      |   8 |   100
+ 2 | 2 |      |   9 |   100
+ 2 | 2 |      |     |  1000
+(70 rows)
+
+select array(select row(v.a,s1.*) from (select two,four, count(*) from onek group by cube(two,four) order by two,four) s1) from (values (1),(2)) v(a);
+                                                                        array                                                                         
+------------------------------------------------------------------------------------------------------------------------------------------------------
+ {"(1,0,0,250)","(1,0,2,250)","(1,0,,500)","(1,1,1,250)","(1,1,3,250)","(1,1,,500)","(1,,0,250)","(1,,1,250)","(1,,2,250)","(1,,3,250)","(1,,,1000)"}
+ {"(2,0,0,250)","(2,0,2,250)","(2,0,,500)","(2,1,1,250)","(2,1,3,250)","(2,1,,500)","(2,,0,250)","(2,,1,250)","(2,,2,250)","(2,,3,250)","(2,,,1000)"}
+(2 rows)
+
+-- end
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index b0ebb6b..69a8130 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -84,7 +84,7 @@ test: select_into select_distinct select_distinct_on select_implicit select_havi
 # ----------
 # Another group of parallel tests
 # ----------
-test: brin gin gist spgist privileges security_label collate matview lock replica_identity rowsecurity object_address
+test: brin gin gist spgist privileges security_label collate matview lock replica_identity rowsecurity object_address groupingsets
 
 # ----------
 # Another group of parallel tests
diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule
index 8409c0f..d1c3749 100644
--- a/src/test/regress/serial_schedule
+++ b/src/test/regress/serial_schedule
@@ -86,6 +86,7 @@ test: union
 test: case
 test: join
 test: aggregates
+test: groupingsets
 test: transactions
 ignore: random
 test: random
diff --git a/src/test/regress/sql/groupingsets.sql b/src/test/regress/sql/groupingsets.sql
new file mode 100644
index 0000000..0bffb85
--- /dev/null
+++ b/src/test/regress/sql/groupingsets.sql
@@ -0,0 +1,165 @@
+--
+-- grouping sets
+--
+
+-- test data sources
+
+create temp view gstest1(a,b,v)
+  as values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),
+            (2,3,15),
+            (3,3,16),(3,4,17),
+            (4,1,18),(4,1,19);
+
+create temp table gstest2 (a integer, b integer, c integer, d integer,
+                           e integer, f integer, g integer, h integer);
+copy gstest2 from stdin;
+1	1	1	1	1	1	1	1
+1	1	1	1	1	1	1	2
+1	1	1	1	1	1	2	2
+1	1	1	1	1	2	2	2
+1	1	1	1	2	2	2	2
+1	1	1	2	2	2	2	2
+1	1	2	2	2	2	2	2
+1	2	2	2	2	2	2	2
+2	2	2	2	2	2	2	2
+\.
+
+create temp table gstest3 (a integer, b integer, c integer, d integer);
+copy gstest3 from stdin;
+1	1	1	1
+2	2	2	2
+\.
+alter table gstest3 add primary key (a);
+
+create temp table gstest_empty (a integer, b integer, v integer);
+
+create function gstest_data(v integer, out a integer, out b integer)
+  returns setof record
+  as $f$
+    begin
+      return query select v, i from generate_series(1,3) i;
+    end;
+  $f$ language plpgsql;
+
+-- basic functionality
+
+-- simple rollup with multiple plain aggregates, with and without ordering
+-- (and with ordering differing from grouping)
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b);
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by a,b;
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by b desc, a;
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by rollup (a,b) order by coalesce(a,0)+coalesce(b,0);
+
+-- various types of ordered aggs
+select a, b, grouping(a,b),
+       array_agg(v order by v),
+       string_agg(v::text, ':' order by v desc),
+       percentile_disc(0.5) within group (order by v),
+       rank(1,2,12) within group (order by a,b,v)
+  from gstest1 group by rollup (a,b) order by a,b;
+
+-- test usage of grouped columns in direct args of aggs
+select grouping(a), a, array_agg(b),
+       rank(a) within group (order by b nulls first),
+       rank(a) within group (order by b nulls last)
+  from (values (1,1),(1,4),(1,5),(3,1),(3,2)) v(a,b)
+ group by rollup (a) order by a;
+
+-- nesting with window functions
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by rollup (a,b) order by rsum, a, b;
+
+-- empty input: first is 0 rows, second 1, third 3 etc.
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),a);
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),());
+select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),(),(),());
+select sum(v), count(*) from gstest_empty group by grouping sets ((),(),());
+
+-- empty input with joins tests some important code paths
+select t1.a, t2.b, sum(t1.v), count(*) from gstest_empty t1, gstest_empty t2
+ group by grouping sets ((t1.a,t2.b),());
+
+-- simple joins, var resolution, GROUPING on join vars
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1, gstest2 t2
+ group by grouping sets ((t1.a, t2.b), ());
+
+select t1.a, t2.b, grouping(t1.a, t2.b), sum(t1.v), max(t2.a)
+  from gstest1 t1 join gstest2 t2 on (t1.a=t2.a)
+ group by grouping sets ((t1.a, t2.b), ());
+
+select a, b, grouping(a, b), sum(t1.v), max(t2.c)
+  from gstest1 t1 join gstest2 t2 using (a,b)
+ group by grouping sets ((a, b), ());
+
+-- check that functionally dependent cols are not nulled
+select a, d, grouping(a,b,c)
+  from gstest3
+ group by grouping sets ((a,b), (a,c));
+
+-- simple rescan tests
+
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by rollup (a,b);
+
+select *
+  from (values (1),(2)) v(x),
+       lateral (select a, b, sum(v.x) from gstest_data(v.x) group by rollup (a,b)) s;
+
+-- min max optimisation should still work with GROUP BY ()
+explain (costs off)
+  select min(unique1) from tenk1 GROUP BY ();
+
+-- Views with GROUPING SET queries
+CREATE VIEW gstest_view AS select a, b, grouping(a,b), sum(c), count(*), max(c)
+  from gstest2 group by rollup ((a,b,c),(c,d));
+
+select pg_get_viewdef('gstest_view'::regclass, true);
+
+-- Nested queries with 3 or more levels of nesting
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+select(select (select grouping(e,f) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+select(select (select grouping(c) from (values (1)) v2(c) GROUP BY c) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP(e,f);
+
+-- Combinations of operations
+select a, b, c, d from gstest2 group by rollup(a,b),grouping sets(c,d);
+select a, b from (values (1,2),(2,3)) v(a,b) group by a,b, grouping sets(a);
+
+-- Tests for chained aggregates
+select a, b, grouping(a,b), sum(v), count(*), max(v)
+  from gstest1 group by grouping sets ((a,b),(a+1,b+1),(a+2,b+2));
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY ROLLUP((e+1),(f+1));
+select(select (select grouping(a,b) from (values (1)) v2(c)) from (values (1,2)) v1(a,b) group by (a,b)) from (values(6,7)) v3(e,f) GROUP BY CUBE((e+1),(f+1)) ORDER BY (e+1),(f+1);
+select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
+  from gstest2 group by cube (a,b) order by rsum, a, b;
+select a, b, sum(c) from (values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),(2,3,15),(3,3,16),(3,4,17),(4,1,18),(4,1,19)) v(a,b,c) group by rollup (a,b);
+select a, b, sum(v.x)
+  from (values (1),(2)) v(x), gstest_data(v.x)
+ group by cube (a,b) order by a,b;
+
+
+-- Agg level check. This query should error out.
+select (select grouping(a,b) from gstest2) from gstest2 group by a,b;
+
+--Nested queries
+select a, b, sum(c), count(*) from gstest2 group by grouping sets (rollup(a,b),a);
+
+-- HAVING queries
+select ten, sum(distinct four) from onek a
+group by grouping sets((ten,four),(ten))
+having exists (select 1 from onek b where sum(distinct a.four) = b.four);
+
+-- FILTER queries
+select ten, sum(distinct four) filter (where four::text ~ '123') from onek a
+group by rollup(ten);
+
+-- More rescan tests
+select * from (values (1),(2)) v(a) left join lateral (select v.a, four, ten, count(*) from onek group by cube(four,ten)) s on true order by v.a,four,ten;
+select array(select row(v.a,s1.*) from (select two,four, count(*) from onek group by cube(two,four) order by two,four) s1) from (values (1),(2)) v(a);
+
+-- end
-- 
2.4.0.rc2.1.g3d6bc9a

#136 Andres Freund
andres@anarazel.de
In reply to: Andres Freund (#135)
Re: Final Patch for GROUPING SETS

On 2015-05-16 00:06:12 +0200, Andres Freund wrote:

> Andrew (and I) have been working on this since. Here's the updated and
> rebased patch.
>
> It misses a decent commit message and another beautification
> readthrough. I've spent the last hour going through the thing again and
> all I hit was a disturbing number of newline "errors" and two minor
> comment additions.

And committed. Thanks Andrew, everyone.

Despite some unhappiness all around I do think the patch has improved
due to the discussions in this thread.
