Inefficiency in parallel pg_restore with many tables

Started by Tom Lane · almost 3 years ago · 37 messages · pgsql-hackers
#1 Tom Lane
tgl@sss.pgh.pa.us

I looked into the performance gripe at [1] about pg_restore not making
effective use of parallel workers when there are a lot of tables.
I was able to reproduce that by dumping and parallel restoring 100K
tables made according to this script:

do $$
begin
  for i in 1..100000 loop
    execute format('create table t%s (f1 int unique, f2 int unique);', i);
    execute format('insert into t%s select x, x from generate_series(1,1000) x', i);
    if i % 100 = 0 then commit; end if;
  end loop;
end
$$;

Once pg_restore reaches the parallelizable part of the restore, what
I see is that the parent pg_restore process goes to 100% CPU while its
children (and the server) mostly sit idle; that is, the task dispatch
logic in pg_backup_archiver.c is unable to dispatch tasks fast enough
to keep the children busy. A quick perf check showed most of the time
being eaten by pg_qsort and TocEntrySizeCompare.

What I believe is happening is that we start the parallel restore phase
with 100K TableData items that are ready to go (they are in the
ready_list) and 200K AddConstraint items that are pending, because
we make those have dependencies on the corresponding TableData so we
don't build an index until after its table is populated. Each time
one of the TableData items is completed by some worker, the two
AddConstraint items for its table are moved from the pending_list
to the ready_list --- and that means ready_list_insert marks the
ready_list as no longer sorted. When we go to pop the next task
from the ready_list, we re-sort that entire list first. So
we spend something like O(N^2 * log(N)) time just sorting, if
there are N tables. Clearly, this code is much less bright
than it thinks it is (and that's all my fault, if memory serves).
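For illustration, the dispatch pattern described above can be modeled in a few lines of Python (a sketch of the workload shape, not the actual pg_backup_archiver.c code):

```python
def resort_cost(n_tables):
    """Model the scheduler: N ready TableData items; finishing one
    readies two AddConstraint items and marks the ready_list unsorted,
    so the next pop re-sorts the whole list.  Returns the total number
    of elements touched by re-sorting (a proxy for pg_qsort work)."""
    ready = [("data", i) for i in range(n_tables)]
    dirty = False
    cost = 0
    while ready:
        if dirty:
            cost += len(ready)  # one full re-sort pass over the list
            dirty = False
        kind, i = ready.pop()
        if kind == "data":
            # table populated: its two constraints become ready
            ready.extend([("constraint", i), ("constraint", i)])
            dirty = True
    return cost
```

Doubling the table count roughly quadruples the total sorting work, matching the quadratic component of the estimate above.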

I'm not sure how big a deal this is in practice: in most situations
the individual jobs are larger than they are in this toy example,
plus the initial non-parallelizable part of the restore is a bigger
bottleneck anyway with this many tables. Still, we do have one
real-world complaint, so maybe we should look into improving it.

I wonder if we could replace the sorted ready-list with a priority heap,
although that might be complicated by the fact that pop_next_work_item
has to be capable of popping something that's not necessarily the
largest remaining job. Another idea could be to be a little less eager
to sort the list every time; I think in practice scheduling wouldn't
get much worse if we only re-sorted every so often.

I don't have time to pursue this right now, but perhaps someone
else would like to.

regards, tom lane

[1]: /messages/by-id/CAEzn=HSPXi6OS-5KzGMcZeKzWKOOX1me2u2eCiGtMEZDz9Fqdg@mail.gmail.com

#2 Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#1)
Re: Inefficiency in parallel pg_restore with many tables

Hi,

On 2023-07-15 13:47:12 -0400, Tom Lane wrote:

I wonder if we could replace the sorted ready-list with a priority heap,
although that might be complicated by the fact that pop_next_work_item
has to be capable of popping something that's not necessarily the
largest remaining job. Another idea could be to be a little less eager
to sort the list every time; I think in practice scheduling wouldn't
get much worse if we only re-sorted every so often.

Perhaps we could keep track of where the newly inserted items are, and use
insertion sort or such when the number of new elements is much smaller than
the size of the already sorted elements?

As you say, a straight priority heap might not be easy. But we could just open
code using two sorted arrays, one large, one for recent additions that needs
to be newly sorted. And occasionally merge the small array into the big array,
once it has gotten large enough that sorting becomes expensive. We could go
for a heap of N>2 such arrays, but I doubt it would be worth much.
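A rough Python sketch of that two-array scheme (the class name and merge threshold are invented for illustration):

```python
class TwoTierReadyList:
    """Sketch: one large sorted array plus a small array of recent
    insertions; merge the small one in once it grows past a threshold."""
    def __init__(self, items, merge_threshold=64):
        self.big = sorted(items)
        self.recent = []
        self.threshold = merge_threshold

    def insert(self, item):
        self.recent.append(item)
        if len(self.recent) >= self.threshold:
            self.recent.sort()
            merged, i, j = [], 0, 0      # linear merge, no full re-sort
            while i < len(self.big) and j < len(self.recent):
                if self.big[i] <= self.recent[j]:
                    merged.append(self.big[i]); i += 1
                else:
                    merged.append(self.recent[j]); j += 1
            merged += self.big[i:] + self.recent[j:]
            self.big, self.recent = merged, []

    def pop_max(self):
        if self.recent:
            self.recent.sort()           # small by construction, so cheap
        if not self.big:
            return self.recent.pop()
        if not self.recent:
            return self.big.pop()
        # the largest item sits at the end of one of the two arrays
        return (self.big if self.big[-1] >= self.recent[-1]
                else self.recent).pop()
```

Insertion is O(1) amortized until a merge is due, and the merge itself is linear, so the quadratic re-sorting disappears.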

Greetings,

Andres Freund

#3 Andrew Dunstan
andrew@dunslane.net
In reply to: Tom Lane (#1)
Re: Inefficiency in parallel pg_restore with many tables

On 2023-07-15 Sa 13:47, Tom Lane wrote:

[...]

I wonder if we could replace the sorted ready-list with a priority heap,
although that might be complicated by the fact that pop_next_work_item
has to be capable of popping something that's not necessarily the
largest remaining job. Another idea could be to be a little less eager
to sort the list every time; I think in practice scheduling wouldn't
get much worse if we only re-sorted every so often.

Yeah, I think that last idea is reasonable. Something like if the number
added since the last sort is more than min(50, list_length/4) then sort.
That shouldn't be too invasive.
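A sketch of that heuristic in Python (not the C patch; between re-sorts it pops only from the sorted portion, so scheduling is approximate, which is the point):

```python
class LazySortedReadyList:
    """Sketch: keep the list only approximately sorted, re-sorting just
    once enough unsorted insertions have piled up at the tail."""
    def __init__(self, items):
        self.items = sorted(items)
        self.unsorted = 0          # count of unsorted entries at the tail

    def insert(self, x):
        self.items.append(x)
        self.unsorted += 1

    def pop_max(self):
        # Suggested trigger: re-sort when the number added since the
        # last sort exceeds min(50, list_length/4).
        if self.unsorted > min(50, len(self.items) // 4):
            self.items.sort()
            self.unsorted = 0
        if self.unsorted == len(self.items):   # nothing sorted left
            self.items.sort()
            self.unsorted = 0
        # Pop the largest *sorted* entry, ignoring the few unsorted
        # recent additions -- approximately correct scheduling.
        return self.items.pop(len(self.items) - 1 - self.unsorted)
```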

cheers

andrew

--
Andrew Dunstan
EDB:https://www.enterprisedb.com

#4 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andrew Dunstan (#3)
Re: Inefficiency in parallel pg_restore with many tables

Andrew Dunstan <andrew@dunslane.net> writes:

On 2023-07-15 Sa 13:47, Tom Lane wrote:

I wonder if we could replace the sorted ready-list with a priority heap,
although that might be complicated by the fact that pop_next_work_item
has to be capable of popping something that's not necessarily the
largest remaining job. Another idea could be to be a little less eager
to sort the list every time; I think in practice scheduling wouldn't
get much worse if we only re-sorted every so often.

Yeah, I think that last idea is reasonable. Something like if the number
added since the last sort is more than min(50, list_length/4) then sort.
That shouldn't be too invasive.

Actually, as long as we're talking about approximately-correct behavior:
let's make the ready_list be a priority heap, and then just make
pop_next_work_item scan forward from the array start until it finds an
item that's runnable per the lock heuristic. If the heap root is
blocked, the next things we'll examine will be its two children.
We might pick the lower-priority of those two, but it's still known to
be higher priority than at least 50% of the remaining heap entries, so
it shouldn't be too awful as a choice. The argument gets weaker the
further you go into the heap, but we're not expecting that having most
of the top entries blocked will be a common case. (Besides which, the
priorities are pretty crude to begin with.) Once selected, pulling out
an entry that is not the heap root is no problem: you just start the
sift-down process from there.

The main advantage of this over the only-sort-sometimes idea is that
we can guarantee that the largest ready item will always be dispatched
as soon as it can be (because it will be the heap root). So cases
involving one big table (with big indexes) and a lot of little ones
should get scheduled sanely, which is the main thing we want this
algorithm to ensure. With the other approach we can't really promise
much at all.
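A Python sketch of this scheme (the real work would happen in binaryheap.c; `runnable` stands in for the lock-conflict heuristic):

```python
class ReadyHeap:
    """Max-heap sketch: scan the array in order for the first runnable
    entry, then remove it by moving the last element into its slot and
    sifting from there."""
    def __init__(self, items):
        self.a = list(items)
        for i in range(len(self.a) // 2 - 1, -1, -1):
            self._sift_down(i)

    def _sift_down(self, i):
        a, n = self.a, len(self.a)
        while True:
            largest = i
            for c in (2 * i + 1, 2 * i + 2):
                if c < n and a[c] > a[largest]:
                    largest = c
            if largest == i:
                return
            a[i], a[largest] = a[largest], a[i]
            i = largest

    def _sift_up(self, i):
        a = self.a
        while i > 0 and a[(i - 1) // 2] < a[i]:
            a[i], a[(i - 1) // 2] = a[(i - 1) // 2], a[i]
            i = (i - 1) // 2

    def pop_next_runnable(self, runnable):
        # Entries near the front of the array are the highest priority,
        # so scanning forward degrades gracefully when the root is blocked.
        for i, x in enumerate(self.a):
            if runnable(x):
                last = self.a.pop()
                if i < len(self.a):
                    self.a[i] = last
                    self._sift_up(i)    # last may beat ancestors elsewhere
                    self._sift_down(i)
                return x
        return None
```

When nothing is blocked, the scan stops at the root, so the largest ready item is always dispatched first, which is the guarantee the only-sort-sometimes idea cannot make.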

regards, tom lane

#5 Nathan Bossart
nathandbossart@gmail.com
In reply to: Tom Lane (#4)
Re: Inefficiency in parallel pg_restore with many tables

On Sun, Jul 16, 2023 at 09:45:54AM -0400, Tom Lane wrote:

Actually, as long as we're talking about approximately-correct behavior:
let's make the ready_list be a priority heap, and then just make
pop_next_work_item scan forward from the array start until it finds an
item that's runnable per the lock heuristic. If the heap root is
blocked, the next things we'll examine will be its two children.
We might pick the lower-priority of those two, but it's still known to
be higher priority than at least 50% of the remaining heap entries, so
it shouldn't be too awful as a choice. The argument gets weaker the
further you go into the heap, but we're not expecting that having most
of the top entries blocked will be a common case. (Besides which, the
priorities are pretty crude to begin with.) Once selected, pulling out
an entry that is not the heap root is no problem: you just start the
sift-down process from there.

The main advantage of this over the only-sort-sometimes idea is that
we can guarantee that the largest ready item will always be dispatched
as soon as it can be (because it will be the heap root). So cases
involving one big table (with big indexes) and a lot of little ones
should get scheduled sanely, which is the main thing we want this
algorithm to ensure. With the other approach we can't really promise
much at all.

This seems worth a try. IIUC you are suggesting making binaryheap.c
frontend-friendly and expanding its API a bit. If no one has volunteered,
I could probably hack something together.

--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com

#6 Nathan Bossart
nathandbossart@gmail.com
In reply to: Nathan Bossart (#5)
Re: Inefficiency in parallel pg_restore with many tables

On Sun, Jul 16, 2023 at 08:54:24PM -0700, Nathan Bossart wrote:

This seems worth a try. IIUC you are suggesting making binaryheap.c
frontend-friendly and expanding its API a bit. If no one has volunteered,
I could probably hack something together.

I spent some time on the binaryheap changes. I haven't had a chance to
plug it into the ready_list yet.

--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com

Attachments:

expand_binaryheap_api.patch (text/x-diff; +80/-0)
#7 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Nathan Bossart (#6)
Re: Inefficiency in parallel pg_restore with many tables

On 2023-Jul-17, Nathan Bossart wrote:

@@ -35,7 +42,11 @@ binaryheap_allocate(int capacity, binaryheap_comparator compare, void *arg)
binaryheap *heap;

sz = offsetof(binaryheap, bh_nodes) + sizeof(Datum) * capacity;
+#ifdef FRONTEND
+	heap = (binaryheap *) pg_malloc(sz);
+#else
heap = (binaryheap *) palloc(sz);
+#endif

Hmm, as I recall fe_memutils.c provides you with palloc() in the
frontend environment, so you don't actually need this one.

--
Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/
"It takes less than 2 seconds to get to 78% complete; that's a good sign.
A few seconds later it's at 90%, but it seems to have stuck there. Did
somebody make percentages logarithmic while I wasn't looking?"
http://smylers.hates-software.com/2005/09/08/1995c749.html

#8 Nathan Bossart
nathandbossart@gmail.com
In reply to: Alvaro Herrera (#7)
Re: Inefficiency in parallel pg_restore with many tables

On Tue, Jul 18, 2023 at 06:05:11PM +0200, Alvaro Herrera wrote:

On 2023-Jul-17, Nathan Bossart wrote:

@@ -35,7 +42,11 @@ binaryheap_allocate(int capacity, binaryheap_comparator compare, void *arg)
binaryheap *heap;

sz = offsetof(binaryheap, bh_nodes) + sizeof(Datum) * capacity;
+#ifdef FRONTEND
+	heap = (binaryheap *) pg_malloc(sz);
+#else
heap = (binaryheap *) palloc(sz);
+#endif

Hmm, as I recall fe_memutils.c provides you with palloc() in the
frontend environment, so you don't actually need this one.

Ah, yes it does. Thanks for the pointer.

--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com

#9 Nathan Bossart
nathandbossart@gmail.com
In reply to: Nathan Bossart (#8)
Re: Inefficiency in parallel pg_restore with many tables

Here is a work-in-progress patch set for converting ready_list to a
priority queue. On my machine, Tom's 100k-table example [0] takes 11.5
minutes without these patches and 1.5 minutes with them.

One item that requires more thought is binaryheap's use of Datum. AFAICT
the Datum definitions live in postgres.h and aren't available to frontend
code. I think we'll either need to move the Datum definitions to c.h or to
adjust binaryheap to use "void *".

[0]: /messages/by-id/3612876.1689443232@sss.pgh.pa.us

--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com

Attachments:

v2-0001-misc-binaryheap-fixes.patch (text/x-diff; +7/-8)
v2-0002-make-binaryheap-available-to-frontend.patch (text/x-diff; +46/-3)
v2-0003-expand-binaryheap-api.patch (text/x-diff; +31/-1)
v2-0004-use-priority-queue-for-pg_restore-ready_list.patch (text/x-diff; +35/-121)
#10 Nathan Bossart
nathandbossart@gmail.com
In reply to: Nathan Bossart (#9)
Re: Inefficiency in parallel pg_restore with many tables

On Thu, Jul 20, 2023 at 12:06:44PM -0700, Nathan Bossart wrote:

Here is a work-in-progress patch set for converting ready_list to a
priority queue. On my machine, Tom's 100k-table example [0] takes 11.5
minutes without these patches and 1.5 minutes with them.

One item that requires more thought is binaryheap's use of Datum. AFAICT
the Datum definitions live in postgres.h and aren't available to frontend
code. I think we'll either need to move the Datum definitions to c.h or to
adjust binaryheap to use "void *".

In v3, I moved the Datum definitions to c.h. I first tried modifying
binaryheap to use "int" or "void *" instead, but that ended up requiring
some rather invasive changes in backend code, not to mention any extensions
that happen to be using it. I also looked into moving the definitions to a
separate datumdefs.h header that postgres.h would include, but that felt
awkward because 1) postgres.h clearly states that it is intended for things
"that never escape the backend" and 2) the definitions seem relatively
inexpensive. However, I think the latter option is still viable, so I'm
fine with switching to it if folks think that is a better approach.

--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com

Attachments:

v3-0001-move-datum-definitions-to-c.h.patch (text/x-diff; +515/-517)
v3-0002-make-binaryheap-available-to-frontend.patch (text/x-diff; +18/-4)
v3-0003-expand-binaryheap-api.patch (text/x-diff; +32/-1)
v3-0004-use-priority-queue-for-pg_restore-ready_list.patch (text/x-diff; +56/-141)
#11 Nathan Bossart
nathandbossart@gmail.com
In reply to: Nathan Bossart (#10)
Re: Inefficiency in parallel pg_restore with many tables

On Sat, Jul 22, 2023 at 04:19:41PM -0700, Nathan Bossart wrote:

In v3, I moved the Datum definitions to c.h. I first tried modifying
binaryheap to use "int" or "void *" instead, but that ended up requiring
some rather invasive changes in backend code, not to mention any extensions
that happen to be using it. I also looked into moving the definitions to a
separate datumdefs.h header that postgres.h would include, but that felt
awkward because 1) postgres.h clearly states that it is intended for things
"that never escape the backend" and 2) the definitions seem relatively
inexpensive. However, I think the latter option is still viable, so I'm
fine with switching to it if folks think that is a better approach.

BTW we might be able to replace the open-coded heap in pg_dump_sort.c
(added by 79273cc) with a binaryheap, too.

--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com

#12 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Nathan Bossart (#10)
Re: Inefficiency in parallel pg_restore with many tables

Nathan Bossart <nathandbossart@gmail.com> writes:

On Thu, Jul 20, 2023 at 12:06:44PM -0700, Nathan Bossart wrote:

One item that requires more thought is binaryheap's use of Datum. AFAICT
the Datum definitions live in postgres.h and aren't available to frontend
code. I think we'll either need to move the Datum definitions to c.h or to
adjust binaryheap to use "void *".

In v3, I moved the Datum definitions to c.h. I first tried modifying
binaryheap to use "int" or "void *" instead, but that ended up requiring
some rather invasive changes in backend code, not to mention any extensions
that happen to be using it.

I'm quite uncomfortable with putting Datum in c.h. I know that the
typedef is merely a uintptr_t, but this solution seems to me to be
blowing all kinds of holes in the abstraction, because exactly none
of the infrastructure that goes along with Datum is or is ever likely
to be in any frontend build. At the very least, frontend code that
refers to Datum will be misleading as hell.

I wonder whether we can't provide some alternate definition or "skin"
for binaryheap that preserves the Datum API for backend code that wants
that, while providing a void *-based API for frontend code to use.

regards, tom lane

#13 Nathan Bossart
nathandbossart@gmail.com
In reply to: Tom Lane (#12)
Re: Inefficiency in parallel pg_restore with many tables

On Sat, Jul 22, 2023 at 07:47:50PM -0400, Tom Lane wrote:

Nathan Bossart <nathandbossart@gmail.com> writes:

I first tried modifying
binaryheap to use "int" or "void *" instead, but that ended up requiring
some rather invasive changes in backend code, not to mention any extensions
that happen to be using it.

I followed through with the "void *" approach (attached), and it wasn't as
bad as I expected.

I wonder whether we can't provide some alternate definition or "skin"
for binaryheap that preserves the Datum API for backend code that wants
that, while providing a void *-based API for frontend code to use.

I can give this a try next, but it might be rather #ifdef-heavy.

--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com

Attachments:

v4-0001-use-void-instead-of-Datum-in-binaryheap.patch (text/x-diff; +61/-66)
v4-0002-make-binaryheap-available-to-frontend.patch (text/x-diff; +18/-4)
v4-0003-expand-binaryheap-api.patch (text/x-diff; +32/-1)
v4-0004-use-priority-queue-for-pg_restore-ready_list.patch (text/x-diff; +53/-141)
#14 Pierre Ducroquet
p.psql@pinaraf.info
In reply to: Tom Lane (#1)
Re: Inefficiency in parallel pg_restore with many tables

On Saturday, July 15, 2023 7:47:12 PM CEST Tom Lane wrote:

I'm not sure how big a deal this is in practice: in most situations
the individual jobs are larger than they are in this toy example,
plus the initial non-parallelizable part of the restore is a bigger
bottleneck anyway with this many tables. Still, we do have one
real-world complaint, so maybe we should look into improving it.

Hi

For what it's worth, at my current job it's kind of a big deal. I was going to
start looking at the bad performance I got on pg_restore for some databases
with over 50k tables (in 200 namespaces) when I found this thread. The dump
weighs in at about 2.8 GB and the toc.dat file is 230 MB, covering 50,120
tables, 142,069 constraints, and 73,669 indexes.

HEAD pg_restore duration: 30 minutes
pg_restore with latest patch from Nathan Bossart: 23 minutes

This is indeed better, but there is still a lot of room for improvement. With
such use cases, I was able to go much faster using the patched pg_restore with
a script that parallelizes over each schema instead of relying on the choices
made by pg_restore. It seems the choice of parallelizing only the data loading
loses nice speedup opportunities with a huge number of objects.

patched pg_restore + parallel restore of schemas: 10 minutes

Anyway, the patch works really fine as is, and I will certainly keep trying
future iterations.

Regards

Pierre

#15 Nathan Bossart
nathandbossart@gmail.com
In reply to: Nathan Bossart (#13)
Re: Inefficiency in parallel pg_restore with many tables

On Sat, Jul 22, 2023 at 10:57:03PM -0700, Nathan Bossart wrote:

On Sat, Jul 22, 2023 at 07:47:50PM -0400, Tom Lane wrote:

I wonder whether we can't provide some alternate definition or "skin"
for binaryheap that preserves the Datum API for backend code that wants
that, while providing a void *-based API for frontend code to use.

I can give this a try next, but it might be rather #ifdef-heavy.

Here is a sketch of this approach. It required fewer #ifdefs than I was
expecting. At the moment, this one seems like the winner to me.

--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com

Attachments:

v5-0001-make-binaryheap-available-to-frontend.patch (text/x-diff; +44/-21)
v5-0002-expand-binaryheap-api.patch (text/x-diff; +32/-1)
v5-0003-use-priority-queue-for-pg_restore-ready_list.patch (text/x-diff; +52/-141)
#16 Nathan Bossart
nathandbossart@gmail.com
In reply to: Nathan Bossart (#15)
Re: Inefficiency in parallel pg_restore with many tables

On Mon, Jul 24, 2023 at 12:00:15PM -0700, Nathan Bossart wrote:

Here is a sketch of this approach. It required fewer #ifdefs than I was
expecting. At the moment, this one seems like the winner to me.

Here is a polished patch set for this approach. I've also added a 0004
that replaces the open-coded heap in pg_dump_sort.c with a binaryheap.
IMHO these patches are in decent shape.

--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com

Attachments:

v6-0001-Make-binaryheap-available-to-frontend-code.patch (text/x-diff; +47/-21)
v6-0002-Add-function-for-removing-arbitrary-nodes-in-bina.patch (text/x-diff; +32/-1)
v6-0003-Convert-pg_restore-s-ready_list-to-a-priority-que.patch (text/x-diff; +57/-141)
v6-0004-Remove-open-coded-binary-heap-in-pg_dump_sort.c.patch (text/x-diff; +22/-84)
#17 Nathan Bossart
nathandbossart@gmail.com
In reply to: Nathan Bossart (#16)
Re: Inefficiency in parallel pg_restore with many tables

On Tue, Jul 25, 2023 at 11:53:36AM -0700, Nathan Bossart wrote:

Here is a polished patch set for this approach. I've also added a 0004
that replaces the open-coded heap in pg_dump_sort.c with a binaryheap.
IMHO these patches are in decent shape.

I'm hoping to commit these patches at some point in the current commitfest.
I don't sense anything tremendously controversial, and they provide a
pretty nice speedup in some cases. Are there any remaining concerns?

--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com

#18 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Nathan Bossart (#17)
Re: Inefficiency in parallel pg_restore with many tables

Nathan Bossart <nathandbossart@gmail.com> writes:

I'm hoping to commit these patches at some point in the current commitfest.
I don't sense anything tremendously controversial, and they provide a
pretty nice speedup in some cases. Are there any remaining concerns?

I've not actually looked at any of these patchsets after the first one.
I have added myself as a reviewer and will hopefully get to it within
a week or so.

regards, tom lane

#19 Nathan Bossart
nathandbossart@gmail.com
In reply to: Tom Lane (#18)
Re: Inefficiency in parallel pg_restore with many tables

On Fri, Sep 01, 2023 at 01:41:41PM -0400, Tom Lane wrote:

I've not actually looked at any of these patchsets after the first one.
I have added myself as a reviewer and will hopefully get to it within
a week or so.

Thanks!

--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com

#20 Robert Haas
robertmhaas@gmail.com
In reply to: Nathan Bossart (#16)
Re: Inefficiency in parallel pg_restore with many tables

On Tue, Jul 25, 2023 at 2:53 PM Nathan Bossart <nathandbossart@gmail.com> wrote:

On Mon, Jul 24, 2023 at 12:00:15PM -0700, Nathan Bossart wrote:

Here is a sketch of this approach. It required fewer #ifdefs than I was
expecting. At the moment, this one seems like the winner to me.

Here is a polished patch set for this approach. I've also added a 0004
that replaces the open-coded heap in pg_dump_sort.c with a binaryheap.
IMHO these patches are in decent shape.

[ drive-by comment that hopefully doesn't cause too much pain ]

In hindsight, I think that making binaryheap depend on Datum was a bad
idea. I think that was my idea, and I think it wasn't very smart.
Considering that people have coded to that decision up until now, it
might not be too easy to change at this point. But in principle I
guess you'd want to be able to make a heap out of any C data type,
rather than just Datum, or just Datum in the backend and just void *
in the frontend.

--
Robert Haas
EDB: http://www.enterprisedb.com

#21 Nathan Bossart
nathandbossart@gmail.com
In reply to: Robert Haas (#20)
#22 Nathan Bossart
nathandbossart@gmail.com
In reply to: Nathan Bossart (#21)
#23 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Nathan Bossart (#22)
#24 Nathan Bossart
nathandbossart@gmail.com
In reply to: Alvaro Herrera (#23)
#25 Nathan Bossart
nathandbossart@gmail.com
In reply to: Nathan Bossart (#22)
#26 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Nathan Bossart (#25)
#27 Nathan Bossart
nathandbossart@gmail.com
In reply to: Tom Lane (#26)
#28 Nathan Bossart
nathandbossart@gmail.com
In reply to: Nathan Bossart (#27)
#29 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Nathan Bossart (#28)
#30 Nathan Bossart
nathandbossart@gmail.com
In reply to: Tom Lane (#29)
#31 Nathan Bossart
nathandbossart@gmail.com
In reply to: Nathan Bossart (#30)
#32 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Nathan Bossart (#31)
#33 Nathan Bossart
nathandbossart@gmail.com
In reply to: Tom Lane (#32)
#34 Michael Paquier
michael@paquier.xyz
In reply to: Tom Lane (#32)
#35 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Nathan Bossart (#33)
#36 Nathan Bossart
nathandbossart@gmail.com
In reply to: Tom Lane (#35)
#37 Nathan Bossart
nathandbossart@gmail.com
In reply to: Nathan Bossart (#31)