execGrouping.c limit on work_mem

Started by Jeff Janes over 8 years ago, 2 messages
#1 Jeff Janes
jeff.janes@gmail.com

In BuildTupleHashTable

/* Limit initial table size request to not more than work_mem */
nbuckets = Min(nbuckets, (long) ((work_mem * 1024L) / entrysize));

Is this a good idea? If the caller of this code has no respect for
work_mem, they are still going to blow it out of the water. Now we will
just do a bunch of hash-table splitting in the process. That is only going
to add to the pain.
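(Rough, standalone sketch of that splitting cost, using made-up figures and
a plain doubling policy in place of simplehash's real growth rules; the
point is just how many grow-and-rehash passes follow once the clamped
starting size is exceeded.)

#include <stdio.h>

int
main(void)
{
    long    initial = 65536;            /* clamped initial bucket count */
    long    entries = 8L * 1000 * 1000; /* entries actually inserted */
    long    buckets = initial;
    int     resizes = 0;

    while (buckets < entries)
    {
        buckets *= 2;       /* each resize re-inserts every live entry */
        resizes++;
    }
    printf("%d resize passes to reach %ld buckets\n", resizes, buckets);
    return 0;
}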

Also:

* false if it existed already. ->additional_data in the new entry has

The field is just ->additional, not ->additional_data
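For reference, the entry struct looks roughly like this (a paraphrase with
other fields omitted; see TupleHashEntryData in execnodes.h for the real
definition):

typedef struct TupleHashEntryData
{
    MinimalTuple firstTuple;    /* copy of first tuple in this group */
    void       *additional;     /* caller's per-entry data */
} TupleHashEntryData;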

Cheers,

Jeff

#2 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jeff Janes (#1)
Re: execGrouping.c limit on work_mem

Jeff Janes <jeff.janes@gmail.com> writes:

> In BuildTupleHashTable
> /* Limit initial table size request to not more than work_mem */
> nbuckets = Min(nbuckets, (long) ((work_mem * 1024L) / entrysize));
>
> Is this a good idea? If the caller of this code has no respect for
> work_mem, they are still going to blow it out of the water. Now we will
> just do a bunch of hash-table splitting in the process. That is only going
> to add to the pain.

It looks perfectly reasonable to me. The point, I think, is that the caller
doesn't have to be very careful about calculating its initial request
size.
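For instance (illustrative figures only, with a made-up entry size rather
than the real per-entry cost): with a badly inflated group estimate, the
clamp is the difference between asking for gigabytes up front and asking
for roughly work_mem's worth, with the table growing later only if the
data actually demands it.

#include <stdio.h>

int
main(void)
{
    long    work_mem = 4096;                /* KB, the default 4MB */
    long    entrysize = 48;                 /* bytes, illustrative */
    long    estimate = 100L * 1000 * 1000;  /* inflated group estimate */
    long    clamped = (work_mem * 1024L) / entrysize;

    printf("unclamped initial request: ~%lld KB\n",
           (long long) estimate * entrysize / 1024);
    printf("clamped initial request:   ~%lld KB\n",
           (long long) clamped * entrysize / 1024);
    return 0;
}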

regards, tom lane
