Use streaming read API in ANALYZE

Started by Nazir Bilal Yavuz · about 2 years ago · 39 messages · pgsql-hackers
#1 Nazir Bilal Yavuz
byavuz81@gmail.com

Hi,

I worked on using the currently proposed streaming read API [1] in ANALYZE.
The patch is attached. 0001 contains the not-yet-merged streaming read API
changes, which can be applied to master; 0002 is the actual code.

The blocks to analyze are obtained by using the streaming read API now.

- Since the streaming read API already does prefetching, I removed the #ifdef
USE_PREFETCH code from acquire_sample_rows() (see the callback sketch after
this list).

- Changed 'while (BlockSampler_HasMore(&bs))' to 'while (nblocks)' because
the prefetch mechanism in the streaming read API will advance 'bs' before
returning buffers.

- Removed BlockNumber and BufferAccessStrategy from the declaration of
scan_analyze_next_block(), passing a pgsr (PgStreamingRead) instead.
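
For illustration, the sampling callback handed to the streaming read
machinery looks roughly like this (a sketch; the parameter names and the
PgStreamingRead type follow the in-flight API proposal and may differ
slightly in the attached patch):

static BlockNumber
block_sampling_streaming_read_next(PgStreamingRead *pgsr,
                                   void *pgsr_private,
                                   void *per_buffer_data)
{
    BlockSamplerData *bs = pgsr_private;
    BlockNumber *current_block = per_buffer_data;

    /* Hand the sampler's next block to the stream; InvalidBlockNumber ends it */
    if (BlockSampler_HasMore(bs))
        *current_block = BlockSampler_Next(bs);
    else
        *current_block = InvalidBlockNumber;

    return *current_block;
}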

I counted the syscalls made while analyzing a ~5GB table. The patched
version made ~1300 fewer read calls.

Patched:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 39.67    0.012128           0     29809           pwrite64
 36.96    0.011299           0     28594           pread64
 23.24    0.007104           0     27611           fadvise64

Master (21a71648d3):

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 38.94    0.016457           0     29816           pwrite64
 36.79    0.015549           0     29850           pread64
 23.91    0.010106           0     29848           fadvise64

Any kind of feedback would be appreciated.

[1]: /messages/by-id/CA+hUKGJkOiOCa+mag4BF+zHo7qo=o9CFheB8=g6uT5TUm2gkvA@mail.gmail.com

--
Regards,
Nazir Bilal Yavuz
Microsoft

Attachments:

v1-0001-Streaming-read-API-changes-that-are-not-committed.patch (text/x-diff, +953/-224)
v1-0002-Use-streaming-read-API-in-ANALYZE.patch (text/x-diff, +45/-80)
#2 Nazir Bilal Yavuz
byavuz81@gmail.com
In reply to: Nazir Bilal Yavuz (#1)
Re: Use streaming read API in ANALYZE

Hi,

On Mon, 19 Feb 2024 at 18:13, Nazir Bilal Yavuz <byavuz81@gmail.com> wrote:

I worked on using the currently proposed streaming read API [1] in ANALYZE. The patch is attached. 0001 is the not yet merged streaming read API code changes that can be applied to the master, 0002 is the actual code.

The blocks to analyze are obtained by using the streaming read API now.

- Since streaming read API is already doing prefetch, I removed the #ifdef USE_PREFETCH code from acquire_sample_rows().

- Changed 'while (BlockSampler_HasMore(&bs))' to 'while (nblocks)' because the prefetch mechanism in the streaming read API will advance 'bs' before returning buffers.

- Removed BlockNumber and BufferAccessStrategy from the declaration of scan_analyze_next_block(), passing pgsr (PgStreamingRead) instead of them.

I counted syscalls of analyzing ~5GB table. It can be seen that the patched version did ~1300 less read calls.

Patched:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 39.67    0.012128           0     29809           pwrite64
 36.96    0.011299           0     28594           pread64
 23.24    0.007104           0     27611           fadvise64

Master (21a71648d3):

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 38.94    0.016457           0     29816           pwrite64
 36.79    0.015549           0     29850           pread64
 23.91    0.010106           0     29848           fadvise64

Any kind of feedback would be appreciated.

[1]: /messages/by-id/CA+hUKGJkOiOCa+mag4BF+zHo7qo=o9CFheB8=g6uT5TUm2gkvA@mail.gmail.com

The new version of the streaming read API [1] is posted. I updated the
streaming read API changes patch (0001); the 'use streaming read API in
ANALYZE' patch (0002) remains the same. This should make it easier to
review, as it can be applied on top of master.

[1]: /messages/by-id/CA+hUKGJtLyxcAEvLhVUhgD4fMQkOu3PDaj8Qb9SR_UsmzgsBpQ@mail.gmail.com

--
Regards,
Nazir Bilal Yavuz
Microsoft

Attachments:

v2-0001-Streaming-read-API-changes-that-are-not-committed.patch (text/x-diff, +1202/-234)
v2-0002-Use-streaming-read-API-in-ANALYZE.patch (text/x-diff, +45/-80)
#3 Nazir Bilal Yavuz
byavuz81@gmail.com
In reply to: Nazir Bilal Yavuz (#2)
Re: Use streaming read API in ANALYZE

Hi,

On Wed, 28 Feb 2024 at 14:42, Nazir Bilal Yavuz <byavuz81@gmail.com> wrote:

The new version of the streaming read API [1] is posted. I updated the
streaming read API changes patch (0001), using the streaming read API
in ANALYZE patch (0002) remains the same. This should make it easier
to review as it can be applied on top of master

The new version of the streaming read API is posted [1]. I rebased the
patch on top of master and v9 of the streaming read API.

There is a minimal change in the 'use streaming read API in ANALYZE'
patch (0002): I changed STREAMING_READ_FULL to STREAMING_READ_MAINTENANCE
to copy exactly the same behavior as before. Also, some benchmarking
results:

I created a 22 GB table and set shared_buffers to 30GB; the rest is
default.

╔═══════════════════════════╦═════════════════════╦════════════╗
║                           ║  Avg Timings in ms  ║            ║
╠═══════════════════════════╬══════════╦══════════╬════════════╣
║                           ║  master  ║ patched  ║ percentage ║
╠═══════════════════════════╬══════════╬══════════╬════════════╣
║ Both OS cache and         ║          ║          ║            ║
║ shared buffers are clear  ║ 513.9247 ║ 463.1019 ║ 9.9%       ║
╠═══════════════════════════╬══════════╬══════════╬════════════╣
║ OS cache is loaded but    ║          ║          ║            ║
║ shared buffers are clear  ║ 423.1097 ║ 354.3277 ║ 16.3%      ║
╠═══════════════════════════╬══════════╬══════════╬════════════╣
║ Shared buffers are loaded ║          ║          ║            ║
║                           ║ 89.2846  ║ 84.6952  ║ 5.1%       ║
╚═══════════════════════════╩══════════╩══════════╩════════════╝

Any kind of feedback would be appreciated.

[1]: /messages/by-id/CA+hUKGL-ONQnnnp-SONCFfLJzqcpAheuzZ+-yTrD9WBM-GmAcg@mail.gmail.com

--
Regards,
Nazir Bilal Yavuz
Microsoft

Attachments:

v3-0001-Streaming-read-API-changes-that-are-not-committed.patch (text/x-patch, +1309/-238)
v3-0002-Use-streaming-read-API-in-ANALYZE.patch (text/x-patch, +45/-80)
#4 Melanie Plageman
melanieplageman@gmail.com
In reply to: Nazir Bilal Yavuz (#3)
Re: Use streaming read API in ANALYZE

On Tue, Mar 26, 2024 at 02:51:27PM +0300, Nazir Bilal Yavuz wrote:

Hi,

On Wed, 28 Feb 2024 at 14:42, Nazir Bilal Yavuz <byavuz81@gmail.com> wrote:

The new version of the streaming read API [1] is posted. I updated the
streaming read API changes patch (0001), using the streaming read API
in ANALYZE patch (0002) remains the same. This should make it easier
to review as it can be applied on top of master

The new version of the streaming read API is posted [1]. I rebased the
patch on top of master and v9 of the streaming read API.

There is a minimal change in the 'using the streaming read API in ANALYZE
patch (0002)', I changed STREAMING_READ_FULL to STREAMING_READ_MAINTENANCE
to copy exactly the same behavior as before. Also, some benchmarking
results:

I created a 22 GB table and set the size of shared buffers to 30GB, the
rest is default.

╔═══════════════════════════╦═════════════════════╦════════════╗
║                           ║  Avg Timings in ms  ║            ║
╠═══════════════════════════╬══════════╦══════════╬════════════╣
║                           ║  master  ║ patched  ║ percentage ║
╠═══════════════════════════╬══════════╬══════════╬════════════╣
║ Both OS cache and         ║          ║          ║            ║
║ shared buffers are clear  ║ 513.9247 ║ 463.1019 ║ 9.9%       ║
╠═══════════════════════════╬══════════╬══════════╬════════════╣
║ OS cache is loaded but    ║          ║          ║            ║
║ shared buffers are clear  ║ 423.1097 ║ 354.3277 ║ 16.3%      ║
╠═══════════════════════════╬══════════╬══════════╬════════════╣
║ Shared buffers are loaded ║          ║          ║            ║
║                           ║ 89.2846  ║ 84.6952  ║ 5.1%       ║
╚═══════════════════════════╩══════════╩══════════╩════════════╝

Any kind of feedback would be appreciated.

Thanks for working on this!

A general review comment: I noticed you still have the old streaming read
(pgsr) naming in a few places (including comments) -- so I would just
make sure to update those everywhere when you rebase onto Thomas' latest
version of the read stream API.

From c7500cc1b9068ff0b704181442999cd8bed58658 Mon Sep 17 00:00:00 2001
From: Nazir Bilal Yavuz <byavuz81@gmail.com>
Date: Mon, 19 Feb 2024 14:30:47 +0300
Subject: [PATCH v3 2/2] Use streaming read API in ANALYZE

--- a/src/backend/commands/analyze.c
+++ b/src/backend/commands/analyze.c
@@ -1102,6 +1102,26 @@ examine_attribute(Relation onerel, int attnum, Node *index_expr)
return stats;
}
+/*
+ * Prefetch callback function to get next block number while using
+ * BlockSampling algorithm
+ */
+static BlockNumber
+pg_block_sampling_streaming_read_next(StreamingRead *stream,
+									  void *user_data,
+									  void *per_buffer_data)

I don't think you need the pg_ prefix

+{
+	BlockSamplerData *bs = user_data;
+	BlockNumber *current_block = per_buffer_data;

Why can't you just do BufferGetBlockNumber() on the buffer returned from
the read stream API instead of allocating per_buffer_data for the block
number?

+
+	if (BlockSampler_HasMore(bs))
+		*current_block = BlockSampler_Next(bs);
+	else
+		*current_block = InvalidBlockNumber;
+
+	return *current_block;

I think we'd like to keep the read stream code in heapam-specific code.
Instead of doing streaming_read_buffer_begin() here, you could put this
in heap_beginscan() or initscan() guarded by
scan->rs_base.rs_flags & SO_TYPE_ANALYZE

same with streaming_read_buffer_end()/heap_endscan().

You'd also then need to save the reference to the read stream in the
HeapScanDescData.
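
i.e., something along these lines (sketch only; rs_read_stream is a
hypothetical new field name, and bstrategy/bs stand in for however the
strategy and sampler state get plumbed through):

/* Sketch: in initscan()/heap_beginscan(), guarded by the ANALYZE flag */
if (scan->rs_base.rs_flags & SO_TYPE_ANALYZE)
    scan->rs_read_stream =
        streaming_read_buffer_begin(STREAMING_READ_MAINTENANCE,
                                    bstrategy,
                                    BMR_REL(scan->rs_base.rs_rd),
                                    MAIN_FORKNUM,
                                    block_sampling_streaming_read_next,
                                    bs,
                                    sizeof(BlockSamplerData));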

+	stream = streaming_read_buffer_begin(STREAMING_READ_MAINTENANCE,
+										 vac_strategy,
+										 BMR_REL(scan->rs_rd),
+										 MAIN_FORKNUM,
+										 pg_block_sampling_streaming_read_next,
+										 &bs,
+										 sizeof(BlockSamplerData));

/* Outer loop over blocks to sample */

In fact, I think you could use this opportunity to get rid of the block
dependency in acquire_sample_rows() altogether.

Looking at the code now, it seems like you could just invoke
heapam_scan_analyze_next_block() (maybe rename it to
heapam_scan_analyze_next_buffer() or something) from
heapam_scan_analyze_next_tuple() and remove
table_scan_analyze_next_block() entirely.

Then table AMs can figure out how they want to return tuples from
table_scan_analyze_next_tuple().

If you do all this, note that you'll need to update the comments above
acquire_sample_rows() accordingly.

-	while (BlockSampler_HasMore(&bs))
+	while (nblocks)
{
bool		block_accepted;
-		BlockNumber targblock = BlockSampler_Next(&bs);
-#ifdef USE_PREFETCH
-		BlockNumber prefetch_targblock = InvalidBlockNumber;
-
-		/*
-		 * Make sure that every time the main BlockSampler is moved forward
-		 * that our prefetch BlockSampler also gets moved forward, so that we
-		 * always stay out ahead.
-		 */
-		if (prefetch_maximum && BlockSampler_HasMore(&prefetch_bs))
-			prefetch_targblock = BlockSampler_Next(&prefetch_bs);
-#endif

vacuum_delay_point();

-		block_accepted = table_scan_analyze_next_block(scan, targblock, vac_strategy);
+		block_accepted = table_scan_analyze_next_block(scan, stream);

- Melanie

#5 Nazir Bilal Yavuz
byavuz81@gmail.com
In reply to: Melanie Plageman (#4)
Re: Use streaming read API in ANALYZE

Hi,

Thanks for the review!

On Wed, 27 Mar 2024 at 23:15, Melanie Plageman
<melanieplageman@gmail.com> wrote:

On Tue, Mar 26, 2024 at 02:51:27PM +0300, Nazir Bilal Yavuz wrote:

Hi,

On Wed, 28 Feb 2024 at 14:42, Nazir Bilal Yavuz <byavuz81@gmail.com> wrote:

The new version of the streaming read API [1] is posted. I updated the
streaming read API changes patch (0001), using the streaming read API
in ANALYZE patch (0002) remains the same. This should make it easier
to review as it can be applied on top of master

The new version of the streaming read API is posted [1]. I rebased the
patch on top of master and v9 of the streaming read API.

There is a minimal change in the 'using the streaming read API in ANALYZE
patch (0002)', I changed STREAMING_READ_FULL to STREAMING_READ_MAINTENANCE
to copy exactly the same behavior as before. Also, some benchmarking
results:

I created a 22 GB table and set the size of shared buffers to 30GB, the
rest is default.

╔═══════════════════════════╦═════════════════════╦════════════╗
║                           ║  Avg Timings in ms  ║            ║
╠═══════════════════════════╬══════════╦══════════╬════════════╣
║                           ║  master  ║ patched  ║ percentage ║
╠═══════════════════════════╬══════════╬══════════╬════════════╣
║ Both OS cache and         ║          ║          ║            ║
║ shared buffers are clear  ║ 513.9247 ║ 463.1019 ║ 9.9%       ║
╠═══════════════════════════╬══════════╬══════════╬════════════╣
║ OS cache is loaded but    ║          ║          ║            ║
║ shared buffers are clear  ║ 423.1097 ║ 354.3277 ║ 16.3%      ║
╠═══════════════════════════╬══════════╬══════════╬════════════╣
║ Shared buffers are loaded ║          ║          ║            ║
║                           ║ 89.2846  ║ 84.6952  ║ 5.1%       ║
╚═══════════════════════════╩══════════╩══════════╩════════════╝

Any kind of feedback would be appreciated.

Thanks for working on this!

A general review comment: I noticed you have the old streaming read
(pgsr) naming still in a few places (including comments) -- so I would
just make sure and update everywhere when you rebase in Thomas' latest
version of the read stream API.

Done.

From c7500cc1b9068ff0b704181442999cd8bed58658 Mon Sep 17 00:00:00 2001
From: Nazir Bilal Yavuz <byavuz81@gmail.com>
Date: Mon, 19 Feb 2024 14:30:47 +0300
Subject: [PATCH v3 2/2] Use streaming read API in ANALYZE

--- a/src/backend/commands/analyze.c
+++ b/src/backend/commands/analyze.c
@@ -1102,6 +1102,26 @@ examine_attribute(Relation onerel, int attnum, Node *index_expr)
return stats;
}
+/*
+ * Prefetch callback function to get next block number while using
+ * BlockSampling algorithm
+ */
+static BlockNumber
+pg_block_sampling_streaming_read_next(StreamingRead *stream,
+                                                                       void *user_data,
+                                                                       void *per_buffer_data)

I don't think you need the pg_ prefix

Done.

+{
+     BlockSamplerData *bs = user_data;
+     BlockNumber *current_block = per_buffer_data;

Why can't you just do BufferGetBlockNumber() on the buffer returned from
the read stream API instead of allocating per_buffer_data for the block
number?

Done.

+
+     if (BlockSampler_HasMore(bs))
+             *current_block = BlockSampler_Next(bs);
+     else
+             *current_block = InvalidBlockNumber;
+
+     return *current_block;

I think we'd like to keep the read stream code in heapam-specific code.
Instead of doing streaming_read_buffer_begin() here, you could put this
in heap_beginscan() or initscan() guarded by
scan->rs_base.rs_flags & SO_TYPE_ANALYZE

In the recent changes [1], heapam_scan_analyze_next_[block|tuple]
were removed from tableam. They are directly called from
heapam-specific code now. So, IMO, there is no need to do this now.

v4 is rebased on top of v14 streaming read API changes.

[1]: 27bc1772fc814946918a5ac8ccb9b5c5ad0380aa

--
Regards,
Nazir Bilal Yavuz
Microsoft

Attachments:

v4-0001-v14-Streaming-Read-API.patch (text/x-patch, +1509/-259)
v4-0002-Use-streaming-read-API-in-ANALYZE.patch (text/x-patch, +36/-70)
#6 Jakub Wartak
jakub.wartak@enterprisedb.com
In reply to: Nazir Bilal Yavuz (#5)
Re: Use streaming read API in ANALYZE

On Tue, Apr 2, 2024 at 9:24 AM Nazir Bilal Yavuz <byavuz81@gmail.com> wrote:
[..]

v4 is rebased on top of v14 streaming read API changes.

Hi Nazir, so with the streaming API committed, I gave this patch a try.
With autovacuum=off and a 30GB table on NVMe (standard readahead of
256kb, ext4, Debian 12, kernel 6.1.0, shared_buffers = 128MB default),
created using: create table t as select repeat('a', 100) || i ||
repeat('b', 500) as filler from generate_series(1, 45000000) as i;

on master, the effect of maintenance_io_concurrency [default 10] looks
like this (resetting the fs cache after each ANALYZE):

m_io_c = 0:
Time: 3137.914 ms (00:03.138)
Time: 3094.540 ms (00:03.095)
Time: 3452.513 ms (00:03.453)

m_io_c = 1:
Time: 2972.751 ms (00:02.973)
Time: 2939.551 ms (00:02.940)
Time: 2904.428 ms (00:02.904)

m_io_c = 2:
Time: 1580.260 ms (00:01.580)
Time: 1572.132 ms (00:01.572)
Time: 1558.334 ms (00:01.558)

m_io_c = 4:
Time: 938.304 ms
Time: 931.772 ms
Time: 920.044 ms

m_io_c = 8:
Time: 666.025 ms
Time: 660.241 ms
Time: 648.848 ms

m_io_c = 16:
Time: 542.450 ms
Time: 561.155 ms
Time: 539.683 ms

m_io_c = 32:
Time: 538.487 ms
Time: 541.705 ms
Time: 538.101 ms

with patch applied:

m_io_c = 0:
Time: 3106.469 ms (00:03.106)
Time: 3140.343 ms (00:03.140)
Time: 3044.133 ms (00:03.044)

m_io_c = 1:
Time: 2959.817 ms (00:02.960)
Time: 2920.265 ms (00:02.920)
Time: 2911.745 ms (00:02.912)

m_io_c = 2:
Time: 1581.912 ms (00:01.582)
Time: 1561.444 ms (00:01.561)
Time: 1558.251 ms (00:01.558)

m_io_c = 4:
Time: 908.116 ms
Time: 901.245 ms
Time: 901.071 ms

m_io_c = 8:
Time: 619.870 ms
Time: 620.327 ms
Time: 614.266 ms

m_io_c = 16:
Time: 529.885 ms
Time: 526.958 ms
Time: 528.474 ms

m_io_c = 32:
Time: 521.185 ms
Time: 520.713 ms
Time: 517.729 ms

No difference to me, which seems to be good. I've double-checked, and
the patch used the new way:

acquire_sample_rows -> heapam_scan_analyze_next_block ->
ReadBufferExtended -> ReadBuffer_common (inlined) -> WaitReadBuffers
-> mdreadv -> FileReadV -> pg_preadv (inlined)
acquire_sample_rows -> heapam_scan_analyze_next_block ->
ReadBufferExtended -> ReadBuffer_common (inlined) -> StartReadBuffer
-> ...

I also gave io_combine_limit = 32 (the max, 256kB) a try and got these
slightly better results:

[..]
m_io_c = 16:
Time: 494.599 ms
Time: 496.345 ms
Time: 973.500 ms

m_io_c = 32:
Time: 461.031 ms
Time: 449.037 ms
Time: 443.375 ms

and that (the last one) apparently was able to push it to the ~50-60k
random IOPS range; the rareq-sz was still ~8 kB (9.9) as ANALYZE was
still reading randomly, so I assume no merging was done:

Device          r/s    rMB/s  rrqm/s  %rrqm  r_await  rareq-sz    w/s  wMB/s  wrqm/s  %wrqm  w_await  wareq-sz   d/s  dMB/s  drqm/s  %drqm  d_await  dareq-sz   f/s  f_await  aqu-sz  %util
nvme0n1    61212.00   591.82    0.00   0.00     0.10      9.90   2.00   0.02    0.00   0.00     0.00     12.00  0.00   0.00    0.00   0.00     0.00      0.00  0.00     0.00    6.28  85.20

So in short it looks good to me.

-Jakub Wartak.

#7 Nazir Bilal Yavuz
byavuz81@gmail.com
In reply to: Nazir Bilal Yavuz (#5)
Re: Use streaming read API in ANALYZE

Hi,

On Tue, 2 Apr 2024 at 10:23, Nazir Bilal Yavuz <byavuz81@gmail.com> wrote:

v4 is rebased on top of v14 streaming read API changes.

The streaming read API has been committed, but the committed version has
a minor change: the read_stream_begin_relation function takes a Relation
instead of a BufferManagerRelation now. So, here is a v5 which addresses
this change.
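
With that, the setup call in acquire_sample_rows() becomes roughly this
(a sketch of the v5 shape; the callback name may differ in the patch):

    stream = read_stream_begin_relation(READ_STREAM_MAINTENANCE,
                                        vac_strategy,
                                        scan->rs_rd,
                                        MAIN_FORKNUM,
                                        block_sampling_streaming_read_next,
                                        &bs,
                                        0);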

--
Regards,
Nazir Bilal Yavuz
Microsoft

Attachments:

v5-0001-Use-streaming-read-API-in-ANALYZE.patch (text/x-patch, +36/-70)
#8 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Nazir Bilal Yavuz (#7)
Re: Use streaming read API in ANALYZE

On 03/04/2024 13:31, Nazir Bilal Yavuz wrote:

Streaming API has been committed but the committed version has a minor
change, the read_stream_begin_relation function takes Relation instead
of BufferManagerRelation now. So, here is a v5 which addresses this
change.

I'm getting a repeatable segfault / assertion failure with this:

postgres=# CREATE TABLE tengiga (i int, filler text) with (fillfactor=10);
CREATE TABLE
postgres=# insert into tengiga select g, repeat('x', 900) from
generate_series(1, 1400000) g;
INSERT 0 1400000
postgres=# set default_statistics_target = 10; ANALYZE tengiga;
SET
ANALYZE
postgres=# set default_statistics_target = 100; ANALYZE tengiga;
SET
ANALYZE
postgres=# set default_statistics_target =1000; ANALYZE tengiga;
SET
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.

TRAP: failed Assert("BufferIsValid(hscan->rs_cbuf)"), File:
"heapam_handler.c", Line: 1079, PID: 262232
postgres: heikki postgres [local]
ANALYZE(ExceptionalCondition+0xa8)[0x56488a0de9d8]
postgres: heikki postgres [local]
ANALYZE(heapam_scan_analyze_next_block+0x63)[0x5648899ece34]
postgres: heikki postgres [local] ANALYZE(+0x2d3f34)[0x564889b6af34]
postgres: heikki postgres [local] ANALYZE(+0x2d2a3a)[0x564889b69a3a]
postgres: heikki postgres [local] ANALYZE(analyze_rel+0x33e)[0x564889b68fa9]
postgres: heikki postgres [local] ANALYZE(vacuum+0x4b3)[0x564889c2dcc0]
postgres: heikki postgres [local] ANALYZE(ExecVacuum+0xd6f)[0x564889c2d7fe]
postgres: heikki postgres [local]
ANALYZE(standard_ProcessUtility+0x901)[0x564889f0b8b9]
postgres: heikki postgres [local]
ANALYZE(ProcessUtility+0x136)[0x564889f0afb1]
postgres: heikki postgres [local] ANALYZE(+0x6728c8)[0x564889f098c8]
postgres: heikki postgres [local] ANALYZE(+0x672b3b)[0x564889f09b3b]
postgres: heikki postgres [local] ANALYZE(PortalRun+0x320)[0x564889f09015]
postgres: heikki postgres [local] ANALYZE(+0x66b2c6)[0x564889f022c6]
postgres: heikki postgres [local]
ANALYZE(PostgresMain+0x80c)[0x564889f06fd7]
postgres: heikki postgres [local] ANALYZE(+0x667876)[0x564889efe876]
postgres: heikki postgres [local]
ANALYZE(postmaster_child_launch+0xe6)[0x564889e1f4b3]
postgres: heikki postgres [local] ANALYZE(+0x58e68e)[0x564889e2568e]
postgres: heikki postgres [local] ANALYZE(+0x58b7f0)[0x564889e227f0]
postgres: heikki postgres [local]
ANALYZE(PostmasterMain+0x152b)[0x564889e2214d]
postgres: heikki postgres [local] ANALYZE(+0x4444b4)[0x564889cdb4b4]
/lib/x86_64-linux-gnu/libc.so.6(+0x2724a)[0x7f7d83b6724a]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x7f7d83b67305]
postgres: heikki postgres [local] ANALYZE(_start+0x21)[0x564889971a61]
2024-04-03 20:15:49.157 EEST [262101] LOG: server process (PID 262232)
was terminated by signal 6: Aborted

--
Heikki Linnakangas
Neon (https://neon.tech)

#9 Nazir Bilal Yavuz
byavuz81@gmail.com
In reply to: Jakub Wartak (#6)
Re: Use streaming read API in ANALYZE

Hi Jakub,

Thank you for looking into this and doing a performance analysis.

On Wed, 3 Apr 2024 at 11:42, Jakub Wartak <jakub.wartak@enterprisedb.com> wrote:

On Tue, Apr 2, 2024 at 9:24 AM Nazir Bilal Yavuz <byavuz81@gmail.com> wrote:
[..]

v4 is rebased on top of v14 streaming read API changes.

Hi Nazir, so with streaming API committed, I gave a try to this patch.
With autovacuum=off and 30GB table on NVMe (with standard readahead of
256kb and ext4, Debian 12, kernel 6.1.0, shared_buffers = 128MB
default) created using: create table t as select repeat('a', 100) || i
|| repeat('b', 500) as filler from generate_series(1, 45000000) as i;

on master, the effect of maintenance_io_concurrency [default 10] looks
like this (resetting the fs cache after each ANALYZE):

m_io_c = 0:
Time: 3137.914 ms (00:03.138)
Time: 3094.540 ms (00:03.095)
Time: 3452.513 ms (00:03.453)

m_io_c = 1:
Time: 2972.751 ms (00:02.973)
Time: 2939.551 ms (00:02.940)
Time: 2904.428 ms (00:02.904)

m_io_c = 2:
Time: 1580.260 ms (00:01.580)
Time: 1572.132 ms (00:01.572)
Time: 1558.334 ms (00:01.558)

m_io_c = 4:
Time: 938.304 ms
Time: 931.772 ms
Time: 920.044 ms

m_io_c = 8:
Time: 666.025 ms
Time: 660.241 ms
Time: 648.848 ms

m_io_c = 16:
Time: 542.450 ms
Time: 561.155 ms
Time: 539.683 ms

m_io_c = 32:
Time: 538.487 ms
Time: 541.705 ms
Time: 538.101 ms

with patch applied:

m_io_c = 0:
Time: 3106.469 ms (00:03.106)
Time: 3140.343 ms (00:03.140)
Time: 3044.133 ms (00:03.044)

m_io_c = 1:
Time: 2959.817 ms (00:02.960)
Time: 2920.265 ms (00:02.920)
Time: 2911.745 ms (00:02.912)

m_io_c = 2:
Time: 1581.912 ms (00:01.582)
Time: 1561.444 ms (00:01.561)
Time: 1558.251 ms (00:01.558)

m_io_c = 4:
Time: 908.116 ms
Time: 901.245 ms
Time: 901.071 ms

m_io_c = 8:
Time: 619.870 ms
Time: 620.327 ms
Time: 614.266 ms

m_io_c = 16:
Time: 529.885 ms
Time: 526.958 ms
Time: 528.474 ms

m_io_c = 32:
Time: 521.185 ms
Time: 520.713 ms
Time: 517.729 ms

No difference to me, which seems to be good. I've double checked and
patch used the new way

acquire_sample_rows -> heapam_scan_analyze_next_block ->
ReadBufferExtended -> ReadBuffer_common (inlined) -> WaitReadBuffers
-> mdreadv -> FileReadV -> pg_preadv (inlined)
acquire_sample_rows -> heapam_scan_analyze_next_block ->
ReadBufferExtended -> ReadBuffer_common (inlined) -> StartReadBuffer
-> ...

I gave also io_combine_limit to 32 (max, 256kB) a try and got those
slightly better results:

[..]
m_io_c = 16:
Time: 494.599 ms
Time: 496.345 ms
Time: 973.500 ms

m_io_c = 32:
Time: 461.031 ms
Time: 449.037 ms
Time: 443.375 ms

and that (the last one) apparently was able to push it to the ~50-60k
random IOPS range; the rareq-sz was still ~8 kB (9.9) as ANALYZE was
still reading randomly, so I assume no merging was done:

Device          r/s    rMB/s  rrqm/s  %rrqm  r_await  rareq-sz    w/s  wMB/s  wrqm/s  %wrqm  w_await  wareq-sz   d/s  dMB/s  drqm/s  %drqm  d_await  dareq-sz   f/s  f_await  aqu-sz  %util
nvme0n1    61212.00   591.82    0.00   0.00     0.10      9.90   2.00   0.02    0.00   0.00     0.00     12.00  0.00   0.00    0.00   0.00     0.00      0.00  0.00     0.00    6.28  85.20

So in short it looks good to me.

My results are similar to yours. Also, I realized a bug while working on
your benchmarking cases; I will share the cause and the fix soon.

--
Regards,
Nazir Bilal Yavuz
Microsoft

#10 Nazir Bilal Yavuz
byavuz81@gmail.com
In reply to: Heikki Linnakangas (#8)
Re: Use streaming read API in ANALYZE

Hi,

Thank you for looking into this!

On Wed, 3 Apr 2024 at 20:17, Heikki Linnakangas <hlinnaka@iki.fi> wrote:

On 03/04/2024 13:31, Nazir Bilal Yavuz wrote:

Streaming API has been committed but the committed version has a minor
change, the read_stream_begin_relation function takes Relation instead
of BufferManagerRelation now. So, here is a v5 which addresses this
change.

I'm getting a repeatable segfault / assertion failure with this:

postgres=# CREATE TABLE tengiga (i int, filler text) with (fillfactor=10);
CREATE TABLE
postgres=# insert into tengiga select g, repeat('x', 900) from
generate_series(1, 1400000) g;
INSERT 0 1400000
postgres=# set default_statistics_target = 10; ANALYZE tengiga;
SET
ANALYZE
postgres=# set default_statistics_target = 100; ANALYZE tengiga;
SET
ANALYZE
postgres=# set default_statistics_target =1000; ANALYZE tengiga;
SET
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.

TRAP: failed Assert("BufferIsValid(hscan->rs_cbuf)"), File:
"heapam_handler.c", Line: 1079, PID: 262232
postgres: heikki postgres [local]
ANALYZE(ExceptionalCondition+0xa8)[0x56488a0de9d8]
postgres: heikki postgres [local]
ANALYZE(heapam_scan_analyze_next_block+0x63)[0x5648899ece34]
postgres: heikki postgres [local] ANALYZE(+0x2d3f34)[0x564889b6af34]
postgres: heikki postgres [local] ANALYZE(+0x2d2a3a)[0x564889b69a3a]
postgres: heikki postgres [local] ANALYZE(analyze_rel+0x33e)[0x564889b68fa9]
postgres: heikki postgres [local] ANALYZE(vacuum+0x4b3)[0x564889c2dcc0]
postgres: heikki postgres [local] ANALYZE(ExecVacuum+0xd6f)[0x564889c2d7fe]
postgres: heikki postgres [local]
ANALYZE(standard_ProcessUtility+0x901)[0x564889f0b8b9]
postgres: heikki postgres [local]
ANALYZE(ProcessUtility+0x136)[0x564889f0afb1]
postgres: heikki postgres [local] ANALYZE(+0x6728c8)[0x564889f098c8]
postgres: heikki postgres [local] ANALYZE(+0x672b3b)[0x564889f09b3b]
postgres: heikki postgres [local] ANALYZE(PortalRun+0x320)[0x564889f09015]
postgres: heikki postgres [local] ANALYZE(+0x66b2c6)[0x564889f022c6]
postgres: heikki postgres [local]
ANALYZE(PostgresMain+0x80c)[0x564889f06fd7]
postgres: heikki postgres [local] ANALYZE(+0x667876)[0x564889efe876]
postgres: heikki postgres [local]
ANALYZE(postmaster_child_launch+0xe6)[0x564889e1f4b3]
postgres: heikki postgres [local] ANALYZE(+0x58e68e)[0x564889e2568e]
postgres: heikki postgres [local] ANALYZE(+0x58b7f0)[0x564889e227f0]
postgres: heikki postgres [local]
ANALYZE(PostmasterMain+0x152b)[0x564889e2214d]
postgres: heikki postgres [local] ANALYZE(+0x4444b4)[0x564889cdb4b4]
/lib/x86_64-linux-gnu/libc.so.6(+0x2724a)[0x7f7d83b6724a]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x7f7d83b67305]
postgres: heikki postgres [local] ANALYZE(_start+0x21)[0x564889971a61]
2024-04-03 20:15:49.157 EEST [262101] LOG: server process (PID 262232)
was terminated by signal 6: Aborted

I ran into the same error while working on Jakub's benchmarking results.

Cause: I was using the nblocks variable to check how many blocks would
be returned from the streaming API. But I realized that the number
returned by BlockSampler_Init() is sometimes not equal to the number of
blocks that BlockSampler_Next() will return, as the block sampling
algorithm decides on the fly, using random seeds, how many blocks to
return.

There are a couple of solutions I thought of:

1- Use BlockSampler_HasMore() instead of nblocks in the main loop in
acquire_sample_rows():

The streaming API uses this function to prefetch block numbers, so
BlockSampler_HasMore() will reach the end first; it will start returning
false while there are still buffers left to return from the streaming
API. That would cause some buffers at the end to not be processed.
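
To make the failure mode concrete, a loop of this shape would drop
buffers (hypothetical sketch, not code from the patch):

/* BROKEN sketch: bs is shared with the stream's callback, which runs
 * ahead of consumption; HasMore() turns false while the stream still
 * holds prefetched buffers, so those buffers are never processed. */
while (BlockSampler_HasMore(&bs))
{
    Buffer      buf = read_stream_next_buffer(stream, NULL);

    /* ... process buf ... */
}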

2- Expose something (a function, variable, etc.) from the streaming API
to indicate whether the read is finished and there are no more buffers
to return:

I think this would work, but I am not sure whether the streaming API
allows something like that.

3- Check every buffer returned from the streaming API, and stop the main
loop in acquire_sample_rows() when one is invalid:

This solves the problem, but there will be two if checks for each buffer
returned:
- in heapam_scan_analyze_next_block(), to check whether the returned
buffer is invalid
- in acquire_sample_rows(), to break the main loop when
heapam_scan_analyze_next_block() returns false
One of the if checks can be avoided by moving
heapam_scan_analyze_next_block()'s code into the main loop in
acquire_sample_rows().

I implemented the third solution, here is v6.
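
In short, heapam_scan_analyze_next_block() now does roughly the
following (a sketch; the real code is in the attached patch):

    hscan->rs_cbuf = read_stream_next_buffer(stream, NULL);
    if (hscan->rs_cbuf == InvalidBuffer)
        return false;           /* stream exhausted; caller breaks its loop */

    LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_SHARE);
    hscan->rs_cblock = BufferGetBlockNumber(hscan->rs_cbuf);
    hscan->rs_cindex = FirstOffsetNumber;
    return true;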

--
Regards,
Nazir Bilal Yavuz
Microsoft

Attachments:

v6-0001-Use-streaming-read-API-in-ANALYZE.patch (text/x-patch, +42/-72)
#11 Melanie Plageman
melanieplageman@gmail.com
In reply to: Nazir Bilal Yavuz (#10)
Re: Use streaming read API in ANALYZE

On Wed, Apr 03, 2024 at 10:25:01PM +0300, Nazir Bilal Yavuz wrote:

I realized the same error while working on Jakub's benchmarking results.

Cause: I was using the nblocks variable to check how many blocks will
be returned from the streaming API. But I realized that sometimes the
number returned from BlockSampler_Init() is not equal to the number of
blocks that BlockSampler_Next() will return as BlockSampling algorithm
decides how many blocks to return on the fly by using some random
seeds.

There are a couple of solutions I thought of:

1- Use BlockSampler_HasMore() instead of nblocks in the main loop in
the acquire_sample_rows():

Streaming API uses this function to prefetch block numbers.
BlockSampler_HasMore() will reach the end first as it is used while
prefetching, so it will start to return false while there are still
buffers to return from the streaming API. That will cause some buffers
at the end to not be processed.

2- Expose something (function, variable etc.) from the streaming API
to understand if the read is finished and there is no buffer to
return:

I think this works but I am not sure if the streaming API allows
something like that.

3- Check every buffer returned from the streaming API, if it is
invalid stop the main loop in the acquire_sample_rows():

This solves the problem but there will be two if checks for each
buffer returned,
- in heapam_scan_analyze_next_block() to check if the returned buffer is invalid
- to break main loop in acquire_sample_rows() if
heapam_scan_analyze_next_block() returns false
One of the if cases can be bypassed by moving
heapam_scan_analyze_next_block()'s code to the main loop in the
acquire_sample_rows().

I implemented the third solution, here is v6.

I've reviewed the patches inline below and attached a patch that has
some of my ideas on top of your patch.

From 8d396a42186325f920d5a05e7092d8e1b66f3cdf Mon Sep 17 00:00:00 2001
From: Nazir Bilal Yavuz <byavuz81@gmail.com>
Date: Wed, 3 Apr 2024 15:14:15 +0300
Subject: [PATCH v6] Use streaming read API in ANALYZE

ANALYZE command gets random tuples using BlockSampler algorithm. Use
streaming reads to get these tuples by using BlockSampler algorithm in
streaming read API prefetch logic.
---
src/include/access/heapam.h | 6 +-
src/backend/access/heap/heapam_handler.c | 22 +++---
src/backend/commands/analyze.c | 85 ++++++++----------------
3 files changed, 42 insertions(+), 71 deletions(-)

diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index a307fb5f245..633caee9d95 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -25,6 +25,7 @@
#include "storage/bufpage.h"
#include "storage/dsm.h"
#include "storage/lockdefs.h"
+#include "storage/read_stream.h"
#include "storage/shm_toc.h"
#include "utils/relcache.h"
#include "utils/snapshot.h"
@@ -388,9 +389,8 @@ extern bool HeapTupleIsSurelyDead(HeapTuple htup,
struct GlobalVisState *vistest);
/* in heap/heapam_handler.c*/
-extern void heapam_scan_analyze_next_block(TableScanDesc scan,
-										   BlockNumber blockno,
-										   BufferAccessStrategy bstrategy);
+extern bool heapam_scan_analyze_next_block(TableScanDesc scan,
+										   ReadStream *stream);
extern bool heapam_scan_analyze_next_tuple(TableScanDesc scan,
TransactionId OldestXmin,
double *liverows, double *deadrows,
diff --git a/src/backend/access/heap/heapam_handler.c b/src/backend/access/heap/heapam_handler.c
index 0952d4a98eb..d83fbbe6af3 100644
--- a/src/backend/access/heap/heapam_handler.c
+++ b/src/backend/access/heap/heapam_handler.c
@@ -1054,16 +1054,16 @@ heapam_relation_copy_for_cluster(Relation OldHeap, Relation NewHeap,
}
/*
- * Prepare to analyze block `blockno` of `scan`.  The scan has been started
- * with SO_TYPE_ANALYZE option.
+ * Prepare to analyze block returned from streaming object.  If the block returned
+ * from streaming object is valid, true is returned; otherwise false is returned.
+ * The scan has been started with SO_TYPE_ANALYZE option.
*
* This routine holds a buffer pin and lock on the heap page.  They are held
* until heapam_scan_analyze_next_tuple() returns false.  That is until all the
* items of the heap page are analyzed.
*/
-void
-heapam_scan_analyze_next_block(TableScanDesc scan, BlockNumber blockno,
-							   BufferAccessStrategy bstrategy)
+bool
+heapam_scan_analyze_next_block(TableScanDesc scan, ReadStream *stream)
{
HeapScanDesc hscan = (HeapScanDesc) scan;

@@ -1076,11 +1076,15 @@ heapam_scan_analyze_next_block(TableScanDesc scan, BlockNumber blockno,
* doing much work per tuple, the extra lock traffic is probably better
* avoided.

Personally I think heapam_scan_analyze_next_block() should be inlined.
It only has a few lines. I would find it clearer inline. At the least,
there is no reason for it (or heapam_scan_analyze_next_tuple()) to take
a TableScanDesc instead of a HeapScanDesc.

*/
-	hscan->rs_cblock = blockno;
-	hscan->rs_cindex = FirstOffsetNumber;
-	hscan->rs_cbuf = ReadBufferExtended(scan->rs_rd, MAIN_FORKNUM,
-										blockno, RBM_NORMAL, bstrategy);
+	hscan->rs_cbuf = read_stream_next_buffer(stream, NULL);
+	if (hscan->rs_cbuf == InvalidBuffer)
+		return false;
+
LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_SHARE);
+
+	hscan->rs_cblock = BufferGetBlockNumber(hscan->rs_cbuf);
+	hscan->rs_cindex = FirstOffsetNumber;
+	return true;
}
/*
diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index 2fb39f3ede1..764520d5aa2 100644
--- a/src/backend/commands/analyze.c
+++ b/src/backend/commands/analyze.c
@@ -1102,6 +1102,20 @@ examine_attribute(Relation onerel, int attnum, Node *index_expr)
return stats;
}
+/*
+ * Prefetch callback function to get next block number while using
+ * BlockSampling algorithm
+ */
+static BlockNumber
+block_sampling_streaming_read_next(ReadStream *stream,
+								   void *user_data,
+								   void *per_buffer_data)
+{
+	BlockSamplerData *bs = user_data;
+
+	return BlockSampler_HasMore(bs) ? BlockSampler_Next(bs) : InvalidBlockNumber;

I don't see the point of BlockSampler_HasMore() anymore. I removed it in
the attached and made BlockSampler_Next() return InvalidBlockNumber
under the same conditions. Is there a reason not to do this? There
aren't other callers. If the BlockSampler_Next() wasn't part of an API,
we could just make it the streaming read callback, but that might be
weird as it is now.

That and my other ideas are in the attached. Let me know what you think.

- Melanie

Attachments:

v7-0001-Use-streaming-read-API-in-ANALYZE.patch (text/x-diff, +42/-72)
v7-0002-some-ideas.patch (text/x-diff, +27/-66)
#12 Nazir Bilal Yavuz
byavuz81@gmail.com
In reply to: Melanie Plageman (#11)
Re: Use streaming read API in ANALYZE

Hi,

On Wed, 3 Apr 2024 at 23:44, Melanie Plageman <melanieplageman@gmail.com> wrote:

I've reviewed the patches inline below and attached a patch that has
some of my ideas on top of your patch.

Thank you!

From 8d396a42186325f920d5a05e7092d8e1b66f3cdf Mon Sep 17 00:00:00 2001
From: Nazir Bilal Yavuz <byavuz81@gmail.com>
Date: Wed, 3 Apr 2024 15:14:15 +0300
Subject: [PATCH v6] Use streaming read API in ANALYZE

ANALYZE command gets random tuples using BlockSampler algorithm. Use
streaming reads to get these tuples by using BlockSampler algorithm in
streaming read API prefetch logic.
---
src/include/access/heapam.h | 6 +-
src/backend/access/heap/heapam_handler.c | 22 +++---
src/backend/commands/analyze.c | 85 ++++++++----------------
3 files changed, 42 insertions(+), 71 deletions(-)

diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index a307fb5f245..633caee9d95 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -25,6 +25,7 @@
#include "storage/bufpage.h"
#include "storage/dsm.h"
#include "storage/lockdefs.h"
+#include "storage/read_stream.h"
#include "storage/shm_toc.h"
#include "utils/relcache.h"
#include "utils/snapshot.h"
@@ -388,9 +389,8 @@ extern bool HeapTupleIsSurelyDead(HeapTuple htup,
struct GlobalVisState *vistest);
/* in heap/heapam_handler.c*/
-extern void heapam_scan_analyze_next_block(TableScanDesc scan,
-                                                                                BlockNumber blockno,
-                                                                                BufferAccessStrategy bstrategy);
+extern bool heapam_scan_analyze_next_block(TableScanDesc scan,
+                                                                                ReadStream *stream);
extern bool heapam_scan_analyze_next_tuple(TableScanDesc scan,
TransactionId OldestXmin,
double *liverows, double *deadrows,
diff --git a/src/backend/access/heap/heapam_handler.c b/src/backend/access/heap/heapam_handler.c
index 0952d4a98eb..d83fbbe6af3 100644
--- a/src/backend/access/heap/heapam_handler.c
+++ b/src/backend/access/heap/heapam_handler.c
@@ -1054,16 +1054,16 @@ heapam_relation_copy_for_cluster(Relation OldHeap, Relation NewHeap,
}
/*
- * Prepare to analyze block `blockno` of `scan`.  The scan has been started
- * with SO_TYPE_ANALYZE option.
+ * Prepare to analyze block returned from streaming object.  If the block returned
+ * from streaming object is valid, true is returned; otherwise false is returned.
+ * The scan has been started with SO_TYPE_ANALYZE option.
*
* This routine holds a buffer pin and lock on the heap page.  They are held
* until heapam_scan_analyze_next_tuple() returns false.  That is until all the
* items of the heap page are analyzed.
*/
-void
-heapam_scan_analyze_next_block(TableScanDesc scan, BlockNumber blockno,
-                                                        BufferAccessStrategy bstrategy)
+bool
+heapam_scan_analyze_next_block(TableScanDesc scan, ReadStream *stream)
{
HeapScanDesc hscan = (HeapScanDesc) scan;

@@ -1076,11 +1076,15 @@ heapam_scan_analyze_next_block(TableScanDesc scan, BlockNumber blockno,
* doing much work per tuple, the extra lock traffic is probably better
* avoided.

Personally I think heapam_scan_analyze_next_block() should be inlined.
It only has a few lines. I would find it clearer inline. At the least,
there is no reason for it (or heapam_scan_analyze_next_tuple()) to take
a TableScanDesc instead of a HeapScanDesc.

I agree.

*/
-     hscan->rs_cblock = blockno;
-     hscan->rs_cindex = FirstOffsetNumber;
-     hscan->rs_cbuf = ReadBufferExtended(scan->rs_rd, MAIN_FORKNUM,
-                                                                             blockno, RBM_NORMAL, bstrategy);
+     hscan->rs_cbuf = read_stream_next_buffer(stream, NULL);
+     if (hscan->rs_cbuf == InvalidBuffer)
+             return false;
+
LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_SHARE);
+
+     hscan->rs_cblock = BufferGetBlockNumber(hscan->rs_cbuf);
+     hscan->rs_cindex = FirstOffsetNumber;
+     return true;
}
/*
diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index 2fb39f3ede1..764520d5aa2 100644
--- a/src/backend/commands/analyze.c
+++ b/src/backend/commands/analyze.c
@@ -1102,6 +1102,20 @@ examine_attribute(Relation onerel, int attnum, Node *index_expr)
return stats;
}
+/*
+ * Prefetch callback function to get next block number while using
+ * BlockSampling algorithm
+ */
+static BlockNumber
+block_sampling_streaming_read_next(ReadStream *stream,
+                                                                void *user_data,
+                                                                void *per_buffer_data)
+{
+     BlockSamplerData *bs = user_data;
+
+     return BlockSampler_HasMore(bs) ? BlockSampler_Next(bs) : InvalidBlockNumber;

I don't see the point of BlockSampler_HasMore() anymore. I removed it in
the attached and made BlockSampler_Next() return InvalidBlockNumber
under the same conditions. Is there a reason not to do this? There
aren't other callers. If the BlockSampler_Next() wasn't part of an API,
we could just make it the streaming read callback, but that might be
weird as it is now.

I agree. There is no reason to have BlockSampler_HasMore() after
streaming read API changes.

That and my other ideas in attached. Let me know what you think.

I agree with your changes, but I am not sure whether others will agree
with all of them. So, I didn't merge 0001 and your ideas yet; instead I
wrote a commit message, added some comments, changed
'if (bs->t >= bs->N || bs->m >= bs->n)' to 'if (K <= 0 || k <= 0)', and
attached it as 0002.
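
So the head of BlockSampler_Next() becomes roughly this (sketch):

    BlockNumber
    BlockSampler_Next(BlockSampler bs)
    {
        BlockNumber K = bs->N - bs->t;  /* remaining blocks */
        int         k = bs->n - bs->m;  /* blocks still to sample */

        /* Replaces the old Assert(BlockSampler_HasMore(bs)) */
        if (K <= 0 || k <= 0)
            return InvalidBlockNumber;

        /* ... the rest of Knuth's Algorithm S continues unchanged ... */
    }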

--
Regards,
Nazir Bilal Yavuz
Microsoft

Attachments:

v8-0001-Use-streaming-read-API-in-ANALYZE.patch (text/x-patch, +42/-72)
v8-0002-Refactorings-on-top-of-using-streaming-read-API-i.patch (text/x-patch, +44/-74)
#13 Melanie Plageman
melanieplageman@gmail.com
In reply to: Nazir Bilal Yavuz (#12)
Re: Use streaming read API in ANALYZE

On Thu, Apr 04, 2024 at 02:03:30PM +0300, Nazir Bilal Yavuz wrote:

On Wed, 3 Apr 2024 at 23:44, Melanie Plageman <melanieplageman@gmail.com> wrote:

I don't see the point of BlockSampler_HasMore() anymore. I removed it in
the attached and made BlockSampler_Next() return InvalidBlockNumber
under the same conditions. Is there a reason not to do this? There
aren't other callers. If the BlockSampler_Next() wasn't part of an API,
we could just make it the streaming read callback, but that might be
weird as it is now.

I agree. There is no reason to have BlockSampler_HasMore() after
streaming read API changes.

That and my other ideas in attached. Let me know what you think.

I agree with your changes but I am not sure if others agree with all
the changes you have proposed. So, I didn't merge 0001 and your ideas
yet, instead I wrote a commit message, added some comments, changed ->
'if (bs->t >= bs->N || bs->m >= bs->n)' to 'if (K <= 0 || k <= 0)' and
attached it as 0002.

I couldn't quite let go of those changes to acquire_sample_rows(), so
attached v9 0001 implements them as a preliminary patch before your
analyze streaming read user. I inlined heapam_scan_analyze_next_block()
entirely and made heapam_scan_analyze_next_tuple() a static function in
commands/analyze.c (and tweaked the name).

I made a few tweaks to your patch since it is on top of those changes
instead of preceding them. Then 0003 removes BlockSampler_HasMore(),
since it doesn't make sense to remove it before the streaming read user
is added.
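
With the inlining, the outer loop in acquire_sample_rows() ends up with
roughly this shape (a sketch, not the patch verbatim):

    /* Outer loop over blocks to sample */
    for (;;)
    {
        Buffer      buf = read_stream_next_buffer(stream, NULL);

        if (!BufferIsValid(buf))
            break;              /* sampler exhausted, stream done */

        LockBuffer(buf, BUFFER_LOCK_SHARE);
        /* ... gather sample tuples from the page ... */
        UnlockReleaseBuffer(buf);
    }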

- Melanie

Attachments:

v9-0001-Make-heapam_scan_analyze_next_-tuple-block-static.patch (text/x-diff, +174/-200)
v9-0002-Use-streaming-read-API-in-ANALYZE.patch (text/x-diff, +26/-64)
v9-0003-Obsolete-BlockSampler_HasMore.patch (text/x-diff, +5/-12)
#14 Melanie Plageman
melanieplageman@gmail.com
In reply to: Melanie Plageman (#13)
Re: Use streaming read API in ANALYZE

On Sun, Apr 7, 2024 at 3:57 PM Melanie Plageman
<melanieplageman@gmail.com> wrote:

On Thu, Apr 04, 2024 at 02:03:30PM +0300, Nazir Bilal Yavuz wrote:

On Wed, 3 Apr 2024 at 23:44, Melanie Plageman <melanieplageman@gmail.com> wrote:

I don't see the point of BlockSampler_HasMore() anymore. I removed it in
the attached and made BlockSampler_Next() return InvalidBlockNumber
under the same conditions. Is there a reason not to do this? There
aren't other callers. If the BlockSampler_Next() wasn't part of an API,
we could just make it the streaming read callback, but that might be
weird as it is now.

I agree. There is no reason to have BlockSampler_HasMore() after
streaming read API changes.

That and my other ideas in attached. Let me know what you think.

I agree with your changes but I am not sure if others agree with all
the changes you have proposed. So, I didn't merge 0001 and your ideas
yet, instead I wrote a commit message, added some comments, changed ->
'if (bs->t >= bs->N || bs->m >= bs->n)' to 'if (K <= 0 || k <= 0)' and
attached it as 0002.

I couldn't quite let go of those changes to acquire_sample_rows(), so
attached v9 0001 implements them as a preliminary patch before your
analyze streaming read user. I inlined heapam_scan_analyze_next_block()
entirely and made heapam_scan_analyze_next_tuple() a static function in
commands/analyze.c (and tweaked the name).

I made a few tweaks to your patch since it is on top of those changes
instead of preceding them. Then 0003 is removing BlockSampler_HasMore()
since it doesn't make sense to remove it before the streaming read user
was added.

I realized there were a few outdated comments. Fixed in attached v10.

- Melanie

Attachments:

v10-0001-Make-heapam_scan_analyze_next_-tuple-block-stati.patch (text/x-patch, +179/-205)
v10-0002-Use-streaming-read-API-in-ANALYZE.patch (text/x-patch, +26/-64)
v10-0003-Obsolete-BlockSampler_HasMore.patch (text/x-patch, +5/-12)
#15 Andres Freund
andres@anarazel.de
In reply to: Melanie Plageman (#14)
Re: Use streaming read API in ANALYZE

Hi,

On 2024-04-07 16:59:26 -0400, Melanie Plageman wrote:

From 1dc2343661f3edb3b1bc4307afb0e956397eb76c Mon Sep 17 00:00:00 2001
From: Melanie Plageman <melanieplageman@gmail.com>
Date: Sun, 7 Apr 2024 14:55:22 -0400
Subject: [PATCH v10 1/3] Make heapam_scan_analyze_next_[tuple|block] static.

27bc1772fc81 removed the table AM callbacks scan_analyze_next_block and
scan_analzye_next_tuple -- leaving their heap AM implementations only
called by acquire_sample_rows().

Ugh, I don't think 27bc1772fc81 makes much sense. But that's unrelated to this
thread. I did raise that separately
/messages/by-id/20240407214001.jgpg5q3yv33ve6y3@awork3.anarazel.de

Unless I seriously missed something, I see no alternative to reverting that
commit.

@@ -1206,11 +1357,13 @@ acquire_sample_rows(Relation onerel, int elevel,
break;

prefetch_block = BlockSampler_Next(&prefetch_bs);
-			PrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, prefetch_block);
+			PrefetchBuffer(scan->rs_base.rs_rd, MAIN_FORKNUM, prefetch_block);
}
}
#endif

+ scan->rs_cbuf = InvalidBuffer;
+
/* Outer loop over blocks to sample */
while (BlockSampler_HasMore(&bs))
{

I don't think it's good to move a lot of code *and* change how it is
structured in the same commit. Makes it much harder to actually see changes /
makes git blame harder to use / etc.

From 90d115c2401567be65bcf64393a6d3b39286779e Mon Sep 17 00:00:00 2001
From: Melanie Plageman <melanieplageman@gmail.com>
Date: Sun, 7 Apr 2024 15:28:32 -0400
Subject: [PATCH v10 2/3] Use streaming read API in ANALYZE

The ANALYZE command prefetches and reads sample blocks chosen by a
BlockSampler algorithm. Instead of calling Prefetch|ReadBuffer() for
each block, ANALYZE now uses the streaming API introduced in b5a9b18cd0.

Author: Nazir Bilal Yavuz
Reviewed-by: Melanie Plageman
Discussion: /messages/by-id/flat/CAN55FZ0UhXqk9v3y-zW_fp4-WCp43V8y0A72xPmLkOM+6M+mJg@mail.gmail.com
---
src/backend/commands/analyze.c | 89 ++++++++++------------------------
1 file changed, 26 insertions(+), 63 deletions(-)

That's a very nice demonstration of how this makes good prefetching easier...

From 862b7ac81cdafcda7b525e02721da14e46265509 Mon Sep 17 00:00:00 2001
From: Melanie Plageman <melanieplageman@gmail.com>
Date: Sun, 7 Apr 2024 15:38:41 -0400
Subject: [PATCH v10 3/3] Obsolete BlockSampler_HasMore()

A previous commit stopped using BlockSampler_HasMore() for flow control
in acquire_sample_rows(). There seems little use now for
BlockSampler_HasMore(). It should be sufficient to return
InvalidBlockNumber from BlockSampler_Next() when BlockSample_HasMore()
would have returned false. Remove BlockSampler_HasMore().

Author: Melanie Plageman, Nazir Bilal Yavuz
Discussion: /messages/by-id/flat/CAN55FZ0UhXqk9v3y-zW_fp4-WCp43V8y0A72xPmLkOM+6M+mJg@mail.gmail.com

The justification here seems somewhat odd. Sure, the previous commit stopped
using BlockSampler_HasMore in acquire_sample_rows - but only because it was
moved to block_sampling_streaming_read_next()?

Greetings,

Andres Freund

#16 Melanie Plageman
melanieplageman@gmail.com
In reply to: Andres Freund (#15)
Re: Use streaming read API in ANALYZE

On Sun, Apr 07, 2024 at 03:00:00PM -0700, Andres Freund wrote:

Hi,

On 2024-04-07 16:59:26 -0400, Melanie Plageman wrote:

From 1dc2343661f3edb3b1bc4307afb0e956397eb76c Mon Sep 17 00:00:00 2001
From: Melanie Plageman <melanieplageman@gmail.com>
Date: Sun, 7 Apr 2024 14:55:22 -0400
Subject: [PATCH v10 1/3] Make heapam_scan_analyze_next_[tuple|block] static.

27bc1772fc81 removed the table AM callbacks scan_analyze_next_block and
scan_analzye_next_tuple -- leaving their heap AM implementations only
called by acquire_sample_rows().

Ugh, I don't think 27bc1772fc81 makes much sense. But that's unrelated to this
thread. I did raise that separately
/messages/by-id/20240407214001.jgpg5q3yv33ve6y3@awork3.anarazel.de

Unless I seriously missed something, I see no alternative to reverting that
commit.

Noted. I'll give up on this refactor then. Lots of churn for no gain.
Attached v11 is just Bilal's v8 patch rebased to apply cleanly and with
a few tweaks (I changed one of the loop conditions. All other changes
are to comments and commit message).

@@ -1206,11 +1357,13 @@ acquire_sample_rows(Relation onerel, int elevel,
break;

prefetch_block = BlockSampler_Next(&prefetch_bs);
-			PrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, prefetch_block);
+			PrefetchBuffer(scan->rs_base.rs_rd, MAIN_FORKNUM, prefetch_block);
}
}
#endif

+ scan->rs_cbuf = InvalidBuffer;
+
/* Outer loop over blocks to sample */
while (BlockSampler_HasMore(&bs))
{

I don't think it's good to move a lot of code *and* change how it is
structured in the same commit. Makes it much harder to actually see changes /
makes git blame harder to use / etc.

Yep.

From 90d115c2401567be65bcf64393a6d3b39286779e Mon Sep 17 00:00:00 2001
From: Melanie Plageman <melanieplageman@gmail.com>
Date: Sun, 7 Apr 2024 15:28:32 -0400
Subject: [PATCH v10 2/3] Use streaming read API in ANALYZE

The ANALYZE command prefetches and reads sample blocks chosen by a
BlockSampler algorithm. Instead of calling Prefetch|ReadBuffer() for
each block, ANALYZE now uses the streaming API introduced in b5a9b18cd0.

Author: Nazir Bilal Yavuz
Reviewed-by: Melanie Plageman
Discussion: /messages/by-id/flat/CAN55FZ0UhXqk9v3y-zW_fp4-WCp43V8y0A72xPmLkOM+6M+mJg@mail.gmail.com
---
src/backend/commands/analyze.c | 89 ++++++++++------------------------
1 file changed, 26 insertions(+), 63 deletions(-)

That's a very nice demonstration of how this makes good prefetching easier...

Agreed. Yay streaming read API and Bilal!

From 862b7ac81cdafcda7b525e02721da14e46265509 Mon Sep 17 00:00:00 2001
From: Melanie Plageman <melanieplageman@gmail.com>
Date: Sun, 7 Apr 2024 15:38:41 -0400
Subject: [PATCH v10 3/3] Obsolete BlockSampler_HasMore()

A previous commit stopped using BlockSampler_HasMore() for flow control
in acquire_sample_rows(). There seems little use now for
BlockSampler_HasMore(). It should be sufficient to return
InvalidBlockNumber from BlockSampler_Next() when BlockSample_HasMore()
would have returned false. Remove BlockSampler_HasMore().

Author: Melanie Plageman, Nazir Bilal Yavuz
Discussion: /messages/by-id/flat/CAN55FZ0UhXqk9v3y-zW_fp4-WCp43V8y0A72xPmLkOM+6M+mJg@mail.gmail.com

The justification here seems somewhat odd. Sure, the previous commit stopped
using BlockSampler_HasMore in acquire_sample_rows - but only because it was
moved to block_sampling_streaming_read_next()?

It didn't stop using it. It stopped being useful. The reason it existed,
as far as I can tell, was to use it as the while() loop condition in
acquire_sample_rows(). I think it makes much more sense for
BlockSampler_Next() to return InvalidBlockNumber when there are no more
blocks -- not to assert you don't call it when there aren't any more
blocks.

I didn't want to change BlockSampler_Next() in the same commit as the
streaming read user and we can't remove BlockSampler_HasMore() without
changing BlockSampler_Next().

- Melanie

Attachments:

v11-0001-Use-streaming-read-API-in-ANALYZE.patch (text/x-diff, +39/-72)
v11-0002-Obsolete-BlockSampler_HasMore.patch (text/x-diff, +4/-12)
#17 Thomas Munro
thomas.munro@gmail.com
In reply to: Melanie Plageman (#16)
Re: Use streaming read API in ANALYZE

On Mon, Apr 8, 2024 at 10:26 AM Melanie Plageman
<melanieplageman@gmail.com> wrote:

On Sun, Apr 07, 2024 at 03:00:00PM -0700, Andres Freund wrote:

src/backend/commands/analyze.c | 89 ++++++++++------------------------
1 file changed, 26 insertions(+), 63 deletions(-)

That's a very nice demonstration of how this makes good prefetching easier...

Agreed. Yay streaming read API and Bilal!

+1

I found a few comments to tweak, just a couple of places that hadn't
got the memo after we renamed "read stream", and an obsolete mention
of pinning buffers. I adjusted those directly.

I ran some tests on a random basic Linux/ARM cloud box with a 7.6GB
table, and I got:

                                     cold    hot
master:                            9025ms  199ms
patched, io_combine_limit=1:       9025ms  191ms
patched, io_combine_limit=default: 8729ms  191ms

Despite being random, occasionally some I/Os must get merged, allowing
slightly better random throughput when accessing disk blocks through a
3000 IOPS drinking straw. Looking at strace, I see 29144 pread* calls
instead of 30071, which fits that theory. Let's see... if you roll a
fair 973452-sided die 30071 times, how many times do you expect to
roll consecutive numbers? Each time you roll there is a 1/973452
chance that you get the last number + 1, and we have 30071 tries
giving 30071/973452 = ~3%. 9025ms minus 3% is 8754ms. Seems about
right.
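
Spelling that estimate out as a scratch program (back-of-envelope only;
the constants are just this table's numbers):

#include <stdio.h>

int
main(void)
{
    double N = 973452.0;  /* blocks in the ~7.6GB table */
    double n = 30071.0;   /* sampled blocks (sorted, distinct) */

    /*
     * For each sampled block, the chance that the physically next block
     * was also sampled is about n/N, and each such pair merges into one
     * read, saving one syscall.
     */
    double expected_merges = n * (n / N);

    printf("expected merged reads: %.0f\n", expected_merges); /* ~929 */
    printf("observed: 30071 - 29144 = %d\n", 30071 - 29144);  /* 927 */
    return 0;
}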

I am not sure exactly why the hot number is faster. (Anecdotally, I
did notice that in the cases that beat master semi-unexpectedly like
this, my software memory prefetch patch doesn't help or hurt, while in
some cases and on some CPUs there is little difference, and then that
patch seems to get a speed-up like this, which might be a clue.
*Shrug*, investigation needed.)

Pushed. Thanks Bilal and reviewers!

#18Thomas Munro
thomas.munro@gmail.com
In reply to: Melanie Plageman (#16)
Re: Use streaming read API in ANALYZE

On Mon, Apr 8, 2024 at 10:26 AM Melanie Plageman
<melanieplageman@gmail.com> wrote:

On Sun, Apr 07, 2024 at 03:00:00PM -0700, Andres Freund wrote:

On 2024-04-07 16:59:26 -0400, Melanie Plageman wrote:

From 862b7ac81cdafcda7b525e02721da14e46265509 Mon Sep 17 00:00:00 2001
From: Melanie Plageman <melanieplageman@gmail.com>
Date: Sun, 7 Apr 2024 15:38:41 -0400
Subject: [PATCH v10 3/3] Obsolete BlockSampler_HasMore()

A previous commit stopped using BlockSampler_HasMore() for flow control
in acquire_sample_rows(). There seems little use now for
BlockSampler_HasMore(). It should be sufficient to return
InvalidBlockNumber from BlockSampler_Next() when BlockSampler_HasMore()
would have returned false. Remove BlockSampler_HasMore().

Author: Melanie Plageman, Nazir Bilal Yavuz
Discussion: /messages/by-id/flat/CAN55FZ0UhXqk9v3y-zW_fp4-WCp43V8y0A72xPmLkOM+6M+mJg@mail.gmail.com

The justification here seems somewhat odd. Sure, the previous commit stopped
using BlockSampler_HasMore in acquire_sample_rows - but only because it was
moved to block_sampling_streaming_read_next()?

It didn't stop using it. It stopped being useful. The reason it existed,
as far as I can tell, was to use it as the while() loop condition in
acquire_sample_rows(). I think it makes much more sense for
BlockSampler_Next() to return InvalidBlockNumber when there are no more
blocks -- not to assert you don't call it when there aren't any more
blocks.

I didn't want to change BlockSampler_Next() in the same commit as the
streaming read user and we can't remove BlockSampler_HasMore() without
changing BlockSampler_Next().

I agree that the code looks useless if one condition implies the
other, but isn't it good to keep that cross-check, perhaps
reformulated as an assertion? I didn't look too hard at the maths, I
just saw the words "It is not obvious that this code matches Knuth's
Algorithm S ..." and realised I'm not sure I have time to develop a
good opinion about this today. So I'll leave the 0002 change out for
now, as it's a tidy-up that can easily be applied in the next cycle.

Attachments:

v12-0001-Remove-obsolete-BlockSampler_HasMore.patch (+4 -12)
#19Nazir Bilal Yavuz
byavuz81@gmail.com
In reply to: Nazir Bilal Yavuz (#10)
Re: Use streaming read API in ANALYZE

Hi,

On Wed, 3 Apr 2024 at 22:25, Nazir Bilal Yavuz <byavuz81@gmail.com> wrote:

Hi,

Thank you for looking into this!

On Wed, 3 Apr 2024 at 20:17, Heikki Linnakangas <hlinnaka@iki.fi> wrote:

On 03/04/2024 13:31, Nazir Bilal Yavuz wrote:

The streaming API has been committed, but the committed version has a
minor change: read_stream_begin_relation() now takes a Relation instead
of a BufferManagerRelation. So here is a v5 which addresses this
change.

I'm getting a repeatable segfault / assertion failure with this:

postgres=# CREATE TABLE tengiga (i int, filler text) with (fillfactor=10);
CREATE TABLE
postgres=# insert into tengiga select g, repeat('x', 900) from
generate_series(1, 1400000) g;
INSERT 0 1400000
postgres=# set default_statistics_target = 10; ANALYZE tengiga;
SET
ANALYZE
postgres=# set default_statistics_target = 100; ANALYZE tengiga;
SET
ANALYZE
postgres=# set default_statistics_target =1000; ANALYZE tengiga;
SET
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.

TRAP: failed Assert("BufferIsValid(hscan->rs_cbuf)"), File:
"heapam_handler.c", Line: 1079, PID: 262232
postgres: heikki postgres [local]
ANALYZE(ExceptionalCondition+0xa8)[0x56488a0de9d8]
postgres: heikki postgres [local]
ANALYZE(heapam_scan_analyze_next_block+0x63)[0x5648899ece34]
postgres: heikki postgres [local] ANALYZE(+0x2d3f34)[0x564889b6af34]
postgres: heikki postgres [local] ANALYZE(+0x2d2a3a)[0x564889b69a3a]
postgres: heikki postgres [local] ANALYZE(analyze_rel+0x33e)[0x564889b68fa9]
postgres: heikki postgres [local] ANALYZE(vacuum+0x4b3)[0x564889c2dcc0]
postgres: heikki postgres [local] ANALYZE(ExecVacuum+0xd6f)[0x564889c2d7fe]
postgres: heikki postgres [local]
ANALYZE(standard_ProcessUtility+0x901)[0x564889f0b8b9]
postgres: heikki postgres [local]
ANALYZE(ProcessUtility+0x136)[0x564889f0afb1]
postgres: heikki postgres [local] ANALYZE(+0x6728c8)[0x564889f098c8]
postgres: heikki postgres [local] ANALYZE(+0x672b3b)[0x564889f09b3b]
postgres: heikki postgres [local] ANALYZE(PortalRun+0x320)[0x564889f09015]
postgres: heikki postgres [local] ANALYZE(+0x66b2c6)[0x564889f022c6]
postgres: heikki postgres [local]
ANALYZE(PostgresMain+0x80c)[0x564889f06fd7]
postgres: heikki postgres [local] ANALYZE(+0x667876)[0x564889efe876]
postgres: heikki postgres [local]
ANALYZE(postmaster_child_launch+0xe6)[0x564889e1f4b3]
postgres: heikki postgres [local] ANALYZE(+0x58e68e)[0x564889e2568e]
postgres: heikki postgres [local] ANALYZE(+0x58b7f0)[0x564889e227f0]
postgres: heikki postgres [local]
ANALYZE(PostmasterMain+0x152b)[0x564889e2214d]
postgres: heikki postgres [local] ANALYZE(+0x4444b4)[0x564889cdb4b4]
/lib/x86_64-linux-gnu/libc.so.6(+0x2724a)[0x7f7d83b6724a]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x7f7d83b67305]
postgres: heikki postgres [local] ANALYZE(_start+0x21)[0x564889971a61]
2024-04-03 20:15:49.157 EEST [262101] LOG: server process (PID 262232)
was terminated by signal 6: Aborted

I ran into the same error while working on Jakub's benchmarking results.

Cause: I was using the nblocks variable to check how many blocks would
be returned from the streaming API. But I noticed that sometimes the
number returned from BlockSampler_Init() is not equal to the number of
blocks that BlockSampler_Next() will return, as the block sampling
algorithm decides how many blocks to return on the fly using random
seeds.

I wanted to re-check this problem and realized that I was wrong. I
tried using nblocks again, and this time there was no failure. I looked
at the block sampling logic, and I am now fairly sure that
BlockSampler_Init() correctly returns the number of blocks that
BlockSampler_Next() will return. It seems 158f581923 fixed this issue
as well.
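
For reference, the stream callback in the committed patch makes the
block count moot anyway: it pulls block numbers straight from the
sampler, roughly like this (modulo exact naming after the "read stream"
rename):

#include "postgres.h"

#include "storage/read_stream.h"    /* ReadStream */
#include "utils/sampling.h"         /* BlockSamplerData, BlockSampler_* */

/*
 * The stream asks for block numbers one at a time, so nothing needs to
 * pre-agree on a count; exhaustion is signalled with InvalidBlockNumber.
 */
static BlockNumber
block_sampling_read_stream_next(ReadStream *stream,
                                void *callback_private_data,
                                void *per_buffer_data)
{
    BlockSamplerData *bs = callback_private_data;

    return BlockSampler_HasMore(bs) ? BlockSampler_Next(bs)
                                    : InvalidBlockNumber;
}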

--
Regards,
Nazir Bilal Yavuz
Microsoft

#20Nazir Bilal Yavuz
byavuz81@gmail.com
In reply to: Thomas Munro (#17)
Re: Use streaming read API in ANALYZE

Hi,

On Mon, 8 Apr 2024 at 04:21, Thomas Munro <thomas.munro@gmail.com> wrote:

Pushed. Thanks Bilal and reviewers!

I wanted to discuss what will happen to this patch now that
27bc1772fc8 is reverted. I am continuing in this thread, but I can
create another one if you prefer.

After the revert of 27bc1772fc8, acquire_sample_rows() became table-AM
agnostic again. So, the read stream changes might have to be pushed
down now, but there are a couple of roadblocks, as Melanie mentioned
[1] before.

Quote from Melanie [1]:

On Thu, 11 Apr 2024 at 19:19, Melanie Plageman
<melanieplageman@gmail.com> wrote:

I am working on pushing streaming ANALYZE into heap AM code, and I ran
into a few roadblocks.

If we want ANALYZE to make the ReadStream object in heap_beginscan()
(like the read stream implementations of heap sequential and TID range
scans do), I don't see any way around changing the scan_begin table AM
callback to take a BufferAccessStrategy at the least (and perhaps also
the BlockSamplerData).

read_stream_begin_relation() doesn't just save the
BufferAccessStrategy in the ReadStream, it uses it to set various
other things in the ReadStream object. callback_private_data (which in
ANALYZE's case is the BlockSamplerData) is simply saved in the
ReadStream, so it could be set later, but that doesn't sound very
clean to me.

As such, it seems like a cleaner alternative would be to add a table
AM callback for creating a read stream object that takes the
parameters of read_stream_begin_relation(). But, perhaps it is a bit
late for such additions.
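
As a concrete strawman (every name below is invented for illustration,
not part of TableAmRoutine), such a callback could simply mirror
read_stream_begin_relation()'s parameter list:

#include "postgres.h"

#include "storage/bufmgr.h"         /* BufferAccessStrategy */
#include "storage/read_stream.h"    /* ReadStream, ReadStreamBlockNumberCB */
#include "utils/rel.h"              /* Relation */

/*
 * Hypothetical table AM callback: lets the AM construct the ReadStream
 * itself, with the same knobs read_stream_begin_relation() exposes,
 * instead of analyze.c building the stream on the AM's behalf.
 */
typedef ReadStream *(*scan_analyze_begin_stream_function)
            (Relation rel,
             int flags,
             BufferAccessStrategy bstrategy,
             ReadStreamBlockNumberCB callback,
             void *callback_private_data,
             size_t per_buffer_data_size);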

If we do not want to add a new table AM callback like Melanie
mentioned, it is pretty much required to pass BufferAccessStrategy and
BlockSamplerData to initscan().

It also opens us up to the question of whether or not sequential scan
should use such a callback instead of making the read stream object in
heap_beginscan().

I am happy to write a patch that does any of the above. But, I want to
raise these questions, because perhaps I am simply missing an obvious
alternative solution.

I wonder the same; I could not think of an alternative solution to
this problem either.

Another quote from Melanie [2] in the same thread:

On Thu, 11 Apr 2024 at 20:48, Melanie Plageman
<melanieplageman@gmail.com> wrote:

I will also say that, had this been 6 months ago, I would probably
suggest we restructure ANALYZE's table AM interface to accommodate
read stream setup and to address a few other things I find odd about
the current code. For example, I think creating a scan descriptor for
the analyze scan in acquire_sample_rows() is quite odd. It seems like
it would be better done in the relation_analyze callback. The
relation_analyze callback saves some state like the callbacks for
acquire_sample_rows() and the Buffer Access Strategy. But at least in
the heap implementation, it just saves them in static variables in
analyze.c. It seems like it would be better to save them in a useful
data structure that could be accessed later. We have access to pretty
much everything we need at that point (in the relation_analyze
callback). I also think heap's implementation of
table_beginscan_analyze() doesn't need most of
heap_beginscan()/initscan(), so doing this instead of something
ANALYZE specific seems more confusing than helpful.

If we want to implement ANALYZE-specific counterparts of
heap_beginscan()/initscan(), we may think of passing
BufferAccessStrategy and BlockSamplerData to them.
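
Concretely, and purely as a sketch with an invented name and signature,
that might look like:

#include "postgres.h"

#include "access/relscan.h"      /* TableScanDesc */
#include "storage/bufmgr.h"      /* BufferAccessStrategy */
#include "utils/rel.h"           /* Relation */
#include "utils/sampling.h"      /* BlockSampler */

/*
 * Hypothetical ANALYZE-specific counterpart of heap_beginscan()/
 * initscan(): takes the strategy and sampler up front instead of
 * smuggling them in through static variables later.
 */
extern TableScanDesc heap_beginscan_analyze(Relation rel,
                                            BufferAccessStrategy bstrategy,
                                            BlockSampler bs);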

Also, there is an ongoing(?) discussion about a few problems and
improvements around acquire_sample_rows(), mentioned at the end of the
'Table AM Interface Enhancements' thread [3]. Should we wait for those
discussions to be resolved, or can we resume working on this patch?

Any kind of feedback would be appreciated.

[1]: /messages/by-id/CAAKRu_ZxU6hucckrT1SOJxKfyN7q-K4KU1y62GhDwLBZWG+ROg@mail.gmail.com
[2]: /messages/by-id/CAAKRu_YkphAPNbBR2jcLqnxGhDEWTKhYfLFY=0R_oG5LHBH7Gw@mail.gmail.com
[3]: /messages/by-id/CAPpHfdurb9ycV8udYqM=o0sPS66PJ4RCBM1g-bBpvzUfogY0EA@mail.gmail.com

--
Regards,
Nazir Bilal Yavuz
Microsoft
