Autovacuum and OldestXmin

Started by Simon Riggs about 18 years ago · 41 messages
#1Simon Riggs
simon@2ndquadrant.com

I notice that slony records the oldestxmin that was running when it last
ran a VACUUM on its tables. This allows slony to avoid running a VACUUM
when it would be clearly pointless to do so.

AFAICS autovacuum does not do this, or did I miss that?

It seems easy to add (another, groan) column onto pg_stat_user_tables to
record the oldestxmin when it was last vacuumed. (last_autovacuum_xmin)

That will avoid pointless VACUUMs for all users (in 8.4).
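
Roughly, autovacuum's per-table check could then short-circuit like this
(a sketch only: last_autovacuum_xmin is the proposed column, and the
surrounding names are assumed from 8.3-era code, so treat them as
illustrative):

    TransactionId oldestXmin = GetOldestXmin(false, true);

    if (TransactionIdIsValid(tabentry->last_autovacuum_xmin) &&
        TransactionIdEquals(oldestXmin, tabentry->last_autovacuum_xmin))
        continue;   /* OldestXmin hasn't advanced; vacuum is pointless */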

Strangely HOT does this at the page level to avoid useless work, yet
stranger still VACUUM doesn't evaluate PageIsPrunable() at all and
always scans each page regardless.

Why isn't VACUUM optimised the same way HOT is?
Why doesn't VACUUM continue onto the next block when !PageIsPrunable()?
Nothing is documented, though it seems "obvious" that it should.

Perhaps an integration oversight?
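
What I would have expected, as a sketch of the idea only (not of actual
code; argument list assumed from the 8.3 macro), is an early exit at the
top of lazy_scan_heap()'s per-block loop:

    page = BufferGetPage(buf);
    if (!PageIsPrunable(page, OldestXmin))
    {
        UnlockReleaseBuffer(buf);
        continue;       /* nothing on this page can be removed yet */
    }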

[Also there is a comment saying "this is a bug" in autovacuum.c
Are we thinking to go production with that phrase in the code?]

--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com

#2Alvaro Herrera
alvherre@alvh.no-ip.org
In reply to: Simon Riggs (#1)
Re: Autovacuum and OldestXmin

Simon Riggs wrote:

I notice that slony records the oldestxmin that was running when it last
ran a VACUUM on its tables. This allows slony to avoid running a VACUUM
when it would be clearly pointless to do so.

AFAICS autovacuum does not do this, or did I miss that?

Hmm, I think it's just because nobody suggested it and I didn't come up
with the idea.

Whether it's a useful thing to do is a different matter. Why store it
per table and not more widely? Perhaps per database would be just as
useful; and maybe it would allow us to skip running autovac workers
when there is no point in doing so.

Why isn't VACUUM optimised the same way HOT is?
Why doesn't VACUUM continue onto the next block when !PageIsPrunable()?
Nothing is documented, though it seems "obvious" that it should.

Perhaps an integration oversight?

Yeah.

[Also there is a comment saying "this is a bug" in autovacuum.c
Are we thinking to go production with that phrase in the code?]

Yeah, well, it's only a comment ;-) The problem is that a worker can
decide that a table needs to be vacuumed even if another worker has
finished vacuuming it within the last 500 ms. I proposed a mechanism to
close the hole but it was too much of a hassle.
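
The idea was roughly to recheck a fresh stats snapshot immediately before
starting work, as in this sketch (recheck_table_needs_vacuum and
do_autovacuum_table are hypothetical names, not real functions):

    tab = recheck_table_needs_vacuum(relid);    /* hypothetical */
    if (tab == NULL)
        continue;           /* another worker already did it */
    do_autovacuum_table(tab);                   /* hypothetical */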

Maybe we could remove the comment for the final release? :-)

--
Alvaro Herrera Valdivia, Chile ICBM: S 39° 49' 18.1", W 73° 13' 56.4"
Management by consensus: I have decided; you concede.
(Leonard Liu)

#3Tom Lane
tgl@sss.pgh.pa.us
In reply to: Simon Riggs (#1)
Re: Autovacuum and OldestXmin

Simon Riggs <simon@2ndquadrant.com> writes:

Why isn't VACUUM optimised the same way HOT is?

It doesn't do the same things HOT does.

regards, tom lane

#4Tom Lane
tgl@sss.pgh.pa.us
In reply to: Alvaro Herrera (#2)
Re: Autovacuum and OldestXmin

Alvaro Herrera <alvherre@alvh.no-ip.org> writes:

Simon Riggs wrote:

[Also there is a comment saying "this is a bug" in autovacuum.c
Are we thinking to go production with that phrase in the code?]

Yeah, well, it's only a comment ;-) The problem is that a worker can
decide that a table needs to be vacuumed even if another worker has
finished vacuuming it within the last 500 ms. I proposed a mechanism to
close the hole but it was too much of a hassle.

Maybe we could remove the comment for the final release? :-)

What, you think we should try to hide our shortcomings? There are
hundreds of XXX and FIXME comments in the sources.

regards, tom lane

#5Simon Riggs
simon@2ndquadrant.com
In reply to: Tom Lane (#3)
Re: Autovacuum and OldestXmin

On Thu, 2007-11-22 at 13:21 -0500, Tom Lane wrote:

Simon Riggs <simon@2ndquadrant.com> writes:

Why isn't VACUUM optimised the same way HOT is?

It doesn't do the same things HOT does.

Thanks for the enlightenment :-)

Clearly much of the code in heap_page_prune_opt() differs, yet the test
for if (!PageIsPrunable(...)) could be repeated inside the main block
scan loop in lazy_scan_heap().

My thought-experiment:

- a long-running transaction is in progress
- HOT cleans a block and then the block is not touched for a while; the
total of all uncleanable updates causes a VACUUM to be triggered, which
then scans the table, sees the block and scans the block again
because...

a) it could have checked !PageIsPrunable(), but didn't

b) it is important that it attempt to clean the block again for
reason...?

Seems like the thought experiment could occur frequently.

--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com

#6Simon Riggs
simon@2ndquadrant.com
In reply to: Alvaro Herrera (#2)
Re: Autovacuum and OldestXmin

On Thu, 2007-11-22 at 15:20 -0300, Alvaro Herrera wrote:

Simon Riggs wrote:

I notice that slony records the oldestxmin that was running when it last
ran a VACUUM on its tables. This allows slony to avoid running a VACUUM
when it would be clearly pointless to do so.

AFAICS autovacuum does not do this, or did I miss that?

Hmm, I think it's just because nobody suggested it and I didn't come up
with the idea.

OK, well, me neither :-(

...and I never thought to look at slony before now.

--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com

#7Heikki Linnakangas
heikki@enterprisedb.com
In reply to: Simon Riggs (#5)
Re: Autovacuum and OldestXmin

Simon Riggs wrote:

On Thu, 2007-11-22 at 13:21 -0500, Tom Lane wrote:

Simon Riggs <simon@2ndquadrant.com> writes:

Why isn't VACUUM optimised the same way HOT is?

It doesn't do the same things HOT does.

Thanks for the enlightenment :-)

Clearly much of the code in heap_page_prune_opt() differs, yet the test
for if (!PageIsPrunable(...)) could be repeated inside the main block
scan loop in lazy_scan_heap().

My thought-experiment:

- a long-running transaction is in progress
- HOT cleans a block and then the block is not touched for a while; the
total of all uncleanable updates causes a VACUUM to be triggered, which
then scans the table, sees the block and scans the block again
because...

a) it could have checked !PageIsPrunable(), but didn't

b) it is important that it attempt to clean the block again for
reason...?

There might be dead tuples left over by aborted INSERTs, for example,
which don't set the Prunable-flag.

Even if we could use PageIsPrunable, it would be a bad thing from a
robustness point of view. If we ever failed to set the Prunable-flag on
a page for some reason, VACUUM would never remove the dead tuples.

Besides, I don't remember anyone complaining about VACUUM's CPU usage,
so it doesn't really matter.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

#8Simon Riggs
simon@2ndquadrant.com
In reply to: Heikki Linnakangas (#7)
Re: Autovacuum and OldestXmin

On Thu, 2007-11-22 at 19:02 +0000, Heikki Linnakangas wrote:

Even if we could use PageIsPrunable, it would be a bad thing from a
robustness point of view. If we ever failed to set the Prunable-flag on
a page for some reason, VACUUM would never remove the dead tuples.

That's a killer reason, I suppose. I was really trying to uncover what
the thinking was, so we can document it. Having VACUUM ignore it
completely seems wrong.

Besides, I don't remember anyone complaining about VACUUM's CPU usage,
so it doesn't really matter.

Recall anybody saying how much they love it? ;-)

--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com

#9Christopher Browne
cbbrowne@acm.org
In reply to: Simon Riggs (#1)
Re: Autovacuum and OldestXmin

The world rejoiced as alvherre@alvh.no-ip.org (Alvaro Herrera) wrote:

Simon Riggs wrote:

I notice that slony records the oldestxmin that was running when it last
ran a VACUUM on its tables. This allows slony to avoid running a VACUUM
when it would be clearly pointless to do so.

AFAICS autovacuum does not do this, or did I miss that?

Hmm, I think it's just because nobody suggested it and I didn't come up
with the idea.

Whether it's a useful thing to do is a different matter. Why store it
per table and not more widely? Perhaps per database would be just as
useful; and maybe it would allow us to skip running autovac workers
when there is no point in doing so.

I think I need to take blame for that feature in Slony-I ;-).

I imagine it might be useful to add it to autovac, too. I thought it
was pretty neat that this could be successfully handled by comparison
with a single value (e.g. - eldest xmin), and I expect that using a
single quasi-global value should be good enough for autovac.

If there is some elderly, long-running transaction that isn't a
VACUUM, that will indeed inhibit VACUUM from doing any good, globally,
across the cluster, until such time as that transaction ends.

To, at that point, "inhibit" autovac from bothering to run VACUUM,
would seem like a good move. There is still value to running ANALYZE
on tables, so it doesn't warrant stopping autovac altogether, but this
scenario suggests a case for suppressing futile vacuuming, at least...
--
If this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me
http://linuxfinances.info/info/slony.html
It's hard to tell if someone is inconspicuous.

#10Tom Lane
tgl@sss.pgh.pa.us
In reply to: Simon Riggs (#8)
Re: Autovacuum and OldestXmin

Simon Riggs <simon@2ndquadrant.com> writes:

That's a killer reason, I suppose. I was really trying to uncover what
the thinking was, so we can document it. Having VACUUM ignore it
completely seems wrong.

What you seem to be forgetting is that VACUUM is charged with cleaning
out LP_DEAD tuples, which HOT cannot do. And the page header fields are
set (quite properly so) with HOT's interests in mind, not VACUUM's.

regards, tom lane

#11Simon Riggs
simon@2ndquadrant.com
In reply to: Tom Lane (#10)
Re: Autovacuum and OldestXmin

On Fri, 2007-11-23 at 01:14 -0500, Tom Lane wrote:

Simon Riggs <simon@2ndquadrant.com> writes:

That's a killer reason, I suppose. I was really trying to uncover what
the thinking was, so we can document it. Having VACUUM ignore it
completely seems wrong.

What you seem to be forgetting is that VACUUM is charged with cleaning
out LP_DEAD tuples, which HOT cannot do. And the page header fields are
set (quite properly so) with HOT's interests in mind, not VACUUM's.

OK, thanks.

Me getting confused about HOT might cause a few chuckles, and it does
with me also. You didn't sit through the months of detailed discussions
of all the many possible ways of doing it (granted all were flawed in
some respect), so I figure I will need to forget those before I
understand the one exact way of doing it that has been committed.
Anyway, thanks for keeping me on track and (again) kudos to Pavan and
team.

--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com

#12Simon Riggs
simon@2ndquadrant.com
In reply to: Christopher Browne (#9)
Re: Autovacuum and OldestXmin

On Thu, 2007-11-22 at 21:59 -0500, Christopher Browne wrote:

The world rejoiced as alvherre@alvh.no-ip.org (Alvaro Herrera) wrote:

Simon Riggs wrote:

I notice that slony records the oldestxmin that was running when it last
ran a VACUUM on its tables. This allows slony to avoid running a VACUUM
when it would be clearly pointless to do so.

AFAICS autovacuum does not do this, or did I miss that?

Hmm, I think it's just because nobody suggested it and I didn't come up
with the idea.

Whether it's a useful thing to do is a different matter. Why store it
per table and not more widely? Perhaps per database would be just as
useful; and maybe it would allow us to skip running autovac workers
when there is no point in doing so.

I think I need to take blame for that feature in Slony-I ;-).

Good thinking.

I imagine it might be useful to add it to autovac, too. I thought it
was pretty neat that this could be successfully handled by comparison
with a single value (e.g. - eldest xmin), and I expect that using a
single quasi-global value should be good enough for autovac.

I've just looked at that to see if it is that easy; I don't think it is.

That works for slony currently because we vacuum all of the slony tables
at once. Autovacuum does individual tables, so we'd need to store the
individual values; otherwise we might skip doing a VACUUM when it could
have done some useful work.
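
For example (hypothetical numbers): suppose OldestXmin was 1000 when table
A was last vacuumed, 2000 when table B was, and is still 2000 now. A fresh
VACUUM of A can do useful work even though one of B cannot, and a single
per-database value (2000) would wrongly skip A.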

--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com

#13Alvaro Herrera
alvherre@alvh.no-ip.org
In reply to: Simon Riggs (#12)
Re: Autovacuum and OldestXmin

Simon Riggs wrote:

On Thu, 2007-11-22 at 21:59 -0500, Christopher Browne wrote:

I imagine it might be useful to add it to autovac, too. I thought it
was pretty neat that this could be successfully handled by comparison
with a single value (e.g. - eldest xmin), and I expect that using a
single quasi-global value should be good enough for autovac.

I've just looked at that to see if it is that easy; I don't think it is.

That works for slony currently because we vacuum all of the slony tables
at once. Autovacuum does individual tables, so we'd need to store the
individual values; otherwise we might skip doing a VACUUM when it could
have done some useful work.

Yeah, that was my conclusion too.

--
Alvaro Herrera http://www.amazon.com/gp/registry/5ZYLFMCVHXC
Voy a acabar con todos los humanos / con los humanos yo acabaré
voy a acabar con todos / con todos los humanos acabaré (Bender)

#14Noname
mac_man2005@hotmail.it
In reply to: Simon Riggs (#1)
Replacement Selection

Hi to all.

I'm new. I'd like to integrate my code into PostgreSQL. It's the
implementation of some refinements of the Replacement Selection algorithm
used for External Sorting.
I have got some issues, and preferably I'd like to be supported by some
developers who have something to do with it.

Who can I talk to?

Thanks for your attention.
Good Luck!

Manolo.

#15Heikki Linnakangas
heikki@enterprisedb.com
In reply to: Noname (#14)
Re: Replacement Selection

mac_man2005@hotmail.it wrote:

I'm new. I'd like to integrate my code into PostgreSQL. It's the
implementation of some refinements of the Replacement Selection algorithm
used for External Sorting.
I have got some issues, and preferably I'd like to be supported by some
developers who have something to do with it.

Who can I talk to?

This mailing list is the right place to discuss that.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

#16Noname
mac_man2005@hotmail.it
In reply to: Simon Riggs (#1)
Re: Replacement Selection

Thanks for your support.

I downloaded the source code of the latest stable version of PostgreSQL.
Where can I find the part related to the External Sorting algorithm
(supposed to be Replacement Selection)?
I mean, which file is to be studied and/or modified and/or substituted?

Thanks for your attention.

--------------------------------------------------
From: "Heikki Linnakangas" <heikki@enterprisedb.com>
Sent: Monday, November 26, 2007 1:35 PM
To: <mac_man2005@hotmail.it>
Cc: <pgsql-hackers@postgresql.org>
Subject: Re: [HACKERS] Replacement Selection

mac_man2005@hotmail.it wrote:

I'm new. I'd like to integrate my code into PostgreSQL. It's the
implementation of some refinements of the Replacement Selection algorithm
used for External Sorting.
I have got some issues, and preferably I'd like to be supported by some
developers who have something to do with it.

Who can I talk to?

This mailing list is the right place to discuss that.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

#17Alvaro Herrera
alvherre@alvh.no-ip.org
In reply to: Noname (#16)
Re: Replacement Selection

mac_man2005@hotmail.it wrote:

Thanks for your support.

I downloaded the source code of the latest stable version of PostgreSQL.
Where can I find the part related to the External Sorting algorithm
(supposed to be Replacement Selection)?
I mean, which file is to be studied and/or modified and/or substituted?

src/backend/utils/sort/tuplesort.c

--
Alvaro Herrera Developer, http://www.PostgreSQL.org/
"I would rather have GNU than GNOT." (ccchips, lwn.net/Articles/37595/)

#18Heikki Linnakangas
heikki@enterprisedb.com
In reply to: Noname (#16)
Re: Replacement Selection

mac_man2005@hotmail.it wrote:

I downloaded the source code of the latest stable version of PostgreSQL.
Where can I find the part related to the External Sorting algorithm
(supposed to be Replacement Selection)?
I mean, which file is to be studied and/or modified and/or substituted?

In src/backend/utils/sort/tuplesort.c. The comments at the top of that
file are a good place to start.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

#19Noname
mac_man2005@hotmail.it
In reply to: Simon Riggs (#1)
Re: Replacement Selection

Ok guys!
Thanks for your help.

Unfortunately I'm lost in the code... could any good soul help me
understand which precise part should be modified?

Thanks for your time!

--------------------------------------------------
From: "Heikki Linnakangas" <heikki@enterprisedb.com>
Sent: Monday, November 26, 2007 2:34 PM
To: <mac_man2005@hotmail.it>
Cc: <pgsql-hackers@postgresql.org>
Subject: Re: [HACKERS] Replacement Selection

mac_man2005@hotmail.it wrote:

I downloaded the source code of the latest stable version of PostgreSQL.
Where can I find the part related to the External Sorting algorithm
(supposed to be Replacement Selection)?
I mean, which file is to be studied and/or modified and/or substituted?

In src/backend/utils/sort/tuplesort.c. The comments at the top of that
file are a good place to start.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

#20Alvaro Herrera
alvherre@alvh.no-ip.org
In reply to: Noname (#19)
Re: Replacement Selection

mac_man2005@hotmail.it wrote:

Ok guys!
Thanks for your help.

Unfortunately I'm lost in the code... could any good soul help me
understand which precise part should be modified?

I think you should print the file and read it several times until you
understand what's going on. Then you can start thinking where and how
to modify it.

--
Alvaro Herrera http://www.amazon.com/gp/registry/DXLWNGRJD34J
"Oh, great altar of passive entertainment, bestow upon me thy discordant images
at such speed as to render linear thought impossible" (Calvin a la TV)

#21Heikki Linnakangas
heikki@enterprisedb.com
In reply to: Noname (#19)
Re: Replacement Selection

mac_man2005@hotmail.it wrote:

Unfortunately I'm lost in the code... could any good soul help me
understand which precise part should be modified?

You haven't given any details on what you're trying to do. What are you
trying to do?

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

#22Tom Lane
tgl@sss.pgh.pa.us
In reply to: Alvaro Herrera (#20)
Re: Replacement Selection

Alvaro Herrera <alvherre@alvh.no-ip.org> writes:

mac_man2005@hotmail.it wrote:

Unfortunately I'm lost in the code... could any good soul help me
understand which precise part should be modified?

I think you should print the file and read it several times until you
understand what's going on. Then you can start thinking where and how
to modify it.

Also, go find a copy of Knuth volume 3, because a whole lot of the
comments assume you've read Knuth's discussion of external sorting.

regards, tom lane

#23Noname
mac_man2005@hotmail.it
In reply to: Simon Riggs (#1)
Re: Replacement Selection

Sorry.

I'm trying to integrate my code into PostgreSQL. At the moment I have got my
working code, with my own main() etc etc.
The code is supposed to perform run generation during external sorting.
That's all, my code won't do any mergesort. Just run generation.

I'm studying the code and I don't know where to put my code. Which parts
do I need to substitute, and which others are absolutely "untouchable"?
I admit I'm not an excellent programmer. I've always been writing my own
code, simple code. Now I have got some ideas that can possibly help
PostgreSQL to get better. And for the first time I'm integrating code into
others' code. I say this just to apologize in case some things that could
be obvious for someone else maybe are not for me.

Anyway... back to work.
My code has the following structure.

1) Generates a random input stream to sort.
As for this part, I just generate an integer input stream, not a stream of
db records. I talk about a stream because I'm in a general case in which
the input source can be unknown and we cannot even know how many elements
to sort.

2) Fill the available memory with the first M elements from the stream.
They will be arranged into a heap structure.

3) Start run generation. As for this phase, I see the PostgreSQL code (as
Knuth's algorithm does) marks elements belonging to runs in order to know
which run they belong to and to know when the current heap has finished
building the current run. I don't memorize this kind of info. I just output
from the heap to the run all of the elements going into the current run.
The elements supposed to go into the next run (I call them "dead records")
are still stored in main memory, but as leaves of the heap. This implies
reducing the heap size and so heapifying a smaller number of elements each
time I get a dead record (it's not necessary to sort dead records). When
the heap size is zero a new run is created by heapifying all the dead
records currently present in main memory.
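
To make this concrete, here is a toy, self-contained sketch of my scheme,
with plain ints standing in for tuples (my illustration only, not
PostgreSQL code):

    #include <stdio.h>
    #include <stdlib.h>

    #define M 8                         /* "memory" capacity */

    static void
    sift_down(int *heap, int n, int i)
    {
        for (;;)
        {
            int smallest = i, l = 2 * i + 1, r = 2 * i + 2;

            if (l < n && heap[l] < heap[smallest]) smallest = l;
            if (r < n && heap[r] < heap[smallest]) smallest = r;
            if (smallest == i)
                break;
            int tmp = heap[i]; heap[i] = heap[smallest]; heap[smallest] = tmp;
            i = smallest;
        }
    }

    static void
    heapify(int *heap, int n)
    {
        for (int i = n / 2 - 1; i >= 0; i--)
            sift_down(heap, n, i);
    }

    int
    main(void)
    {
        int mem[M], heapsize = M, run = 1;

        srand(42);
        for (int i = 0; i < M; i++)     /* step 2: fill memory */
            mem[i] = rand() % 100;
        heapify(mem, heapsize);

        for (int k = 0; k < 32; k++)    /* step 3: consume a stream */
        {
            int next = rand() % 100;
            int out = mem[0];           /* root goes to the current run */

            printf("run %d: %d\n", run, out);
            if (next >= out)
                mem[0] = next;          /* good record: replaces the root */
            else
            {
                mem[0] = mem[--heapsize];   /* shrink the heap ... */
                mem[heapsize] = next;       /* ... park the dead record */
            }
            sift_down(mem, heapsize, 0);

            if (heapsize == 0)          /* run over: re-heapify the dead
                                         * records to start the next run */
            {
                heapsize = M;
                heapify(mem, heapsize);
                run++;
            }
        }
        return 0;                       /* (final heap drain omitted) */
    }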

I haven't seen anything similar in tuplesort.c; apparently no heapify is
called, no new run is created, and so on.
Do you see any parallelism between the PostgreSQL code and what I said in
the previous points?
Thanks for your attention.

--------------------------------------------------
From: "Heikki Linnakangas" <heikki@enterprisedb.com>
Sent: Monday, November 26, 2007 5:42 PM
To: <mac_man2005@hotmail.it>
Cc: <pgsql-hackers@postgresql.org>
Subject: Re: [HACKERS] Replacement Selection

mac_man2005@hotmail.it wrote:

Unfortunately I'm lost in the code... could any good soul help me
understand which precise part should be modified?

You haven't given any details on what you're trying to do. What are you
trying to do?

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

#24Tom Lane
tgl@sss.pgh.pa.us
In reply to: Noname (#23)
Re: Replacement Selection

<mac_man2005@hotmail.it> writes:

3) Start run generation. As for this phase, I see the PostgreSQL code (as
Knuth's algorithm does) marks elements belonging to runs in order to know
which run they belong to and to know when the current heap has finished
building the current run. I don't memorize this kind of info. I just output
from the heap to the run all of the elements going into the current run.
The elements supposed to go into the next run (I call them "dead records")
are still stored in main memory, but as leaves of the heap. This implies
reducing the heap size and so heapifying a smaller number of elements each
time I get a dead record (it's not necessary to sort dead records). When
the heap size is zero a new run is created by heapifying all the dead
records currently present in main memory.

Why would this be an improvement over Knuth? AFAICS you can't generate
longer runs this way, and it's not saving any time --- in fact it's
costing time, because re-heapifying adds a lot of new comparisons.

regards, tom lane

#25Noname
mac_man2005@hotmail.it
In reply to: Simon Riggs (#1)
Re: Replacement Selection

I must point out that this is not the improvement itself. Other, more
complex algorithms correspond to the refinements, but at the moment I just
want to know which part of the PostgreSQL code does what. I also
implemented Replacement Selection (RS), so if I'm able to integrate my RS
I hope I will be able to integrate the others too.

Anyway, even in my RS implementation a longer run is created. The first M
initialization elements will surely form part of the current run. M is the
memory size, so a run of size at least M will be created. After
initialization, the elements are not output all at once; rather, an element
from the heap is output into the run as soon as I get an element from the
stream. In other words, for each element from the stream, the root element
of the heap is output, and the input element takes the root's place in the
heap. If that element is a "good record" I just heapify (since the element
will be placed at the now-free root position). If that input element is a
dead record I swap it with the last leaf and reduce the heap size.

--------------------------------------------------
From: "Tom Lane" <tgl@sss.pgh.pa.us>
Sent: Monday, November 26, 2007 7:31 PM
To: <mac_man2005@hotmail.it>
Cc: <pgsql-hackers@postgresql.org>
Subject: Re: [HACKERS] Replacement Selection

<mac_man2005@hotmail.it> writes:

3) Start run generation. As for this phase, I see the PostgreSQL code (as
Knuth's algorithm does) marks elements belonging to runs in order to know
which run they belong to and to know when the current heap has finished
building the current run. I don't memorize this kind of info. I just output
from the heap to the run all of the elements going into the current run.
The elements supposed to go into the next run (I call them "dead records")
are still stored in main memory, but as leaves of the heap. This implies
reducing the heap size and so heapifying a smaller number of elements each
time I get a dead record (it's not necessary to sort dead records). When
the heap size is zero a new run is created by heapifying all the dead
records currently present in main memory.

Why would this be an improvement over Knuth? AFAICS you can't generate
longer runs this way, and it's not saving any time --- in fact it's
costing time, because re-heapifying adds a lot of new comparisons.

regards, tom lane

#26Timothy J. Kordas
tkordas@greenplum.com
In reply to: Noname (#25)
Re: Replacement Selection

mac_man2005@hotmail.it wrote:

I also implemented
Replacement Selection (RS), so if I'm able to integrate my RS I hope I
will be able to integrate the others too.

The existing code implements RS. Tom asked you to describe what improvements
you hope to make; I'm confident that he already understands how to implement
RS. :-)

**

Why don't you compile with TRACE_SORT enabled and watch the log output?

The function in tuplesort.c that you should start with is puttuple_common().

In puttuple_common(), the transition from an internal to an external sort
is performed at the bottom of the TSS_INITIAL case in the main switch
statement. The function dumptuples() heapifies the in-core tuples (divides
the in-core tuples into initial runs and then advances the state to
TSS_BUILDRUNS). All subsequent tuples will hit the TSS_BUILDRUNS case and
will insert tuples into the heap, emitting tuples for the current run as
it goes.

I recommend you run the code in the debugger on an external-sorting query:
watch two or three tuples go into the heap and you'll get the idea.

The top of the heap is at state->memtuples[0]; the heap goes down from
there. New tuples are added there and the heap is adjusted (using the
tuplesort_heap_siftup() function).
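
Paraphrased and condensed, the flow looks roughly like this (approximate
8.3-era shape, not a verbatim excerpt; COMPARETUP is the file's comparison
macro):

    switch (state->status)
    {
        case TSS_INITIAL:
            /* accumulate tuples in memtuples[] while they fit ... */
            if (/* workMem exhausted */)
            {
                inittapes(state);           /* switch to external sort */
                dumptuples(state, false);   /* heapify, start the runs */
            }
            break;

        case TSS_BUILDRUNS:
            /* tag each tuple with a run number: the current run if it
             * can still go there, else the next one */
            if (COMPARETUP(state, tuple, &state->memtuples[0]) >= 0)
                tuplesort_heap_insert(state, tuple, state->currentRun, true);
            else
                tuplesort_heap_insert(state, tuple, state->currentRun + 1, true);
            dumptuples(state, false);       /* spill while over the limit */
            break;

        /* ... */
    }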

-Tim

#27Tom Lane
tgl@sss.pgh.pa.us
In reply to: Noname (#25)
Re: Replacement Selection

<mac_man2005@hotmail.it> writes:

Anyway, even in my RS implementation a longer run is created. The first M
initialization elements will surely form part of the current run. M is the
memory size, so a run of size at least M will be created. After
initialization, the elements are not output all at once; rather, an element
from the heap is output into the run as soon as I get an element from the
stream. In other words, for each element from the stream, the root element
of the heap is output, and the input element takes the root's place in the
heap. If that element is a "good record" I just heapify (since the element
will be placed at the now-free root position). If that input element is a
dead record I swap it with the last leaf and reduce the heap size.

AFAICS that produces runs that are *exactly* the same length as Knuth's
method --- you're just using a different technique for detecting when
the run is over, to wit "record is not in heap" vs "record is in heap
but with a higher run number". I guess you would save some comparisons
while the heap is shrinking, but it's not at all clear that you'd save
more than what it will cost you to re-heapify all the dead records once
the run is over.
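
For onlookers: in Knuth's scheme the run number participates in the heap
ordering, so "dead" records simply sink below current-run records. As a
sketch of that comparison (SortTuple and tupindex as in tuplesort.c, whose
HEAPCOMPARE macro is the real equivalent; compare_keys is a stand-in):

    static int
    run_compare(const SortTuple *a, const SortTuple *b)
    {
        if (a->tupindex != b->tupindex)     /* tupindex = run number */
            return a->tupindex - b->tupindex;
        return compare_keys(a, b);          /* stand-in comparator */
    }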

regards, tom lane

#28Gregory Stark
stark@enterprisedb.com
In reply to: Tom Lane (#27)
Re: Replacement Selection

"Tom Lane" <tgl@sss.pgh.pa.us> writes:

AFAICS that produces runs that are *exactly* the same length as Knuth's
method --- you're just using a different technique for detecting when
the run is over, to wit "record is not in heap" vs "record is in heap
but with a higher run number". I guess you would save some comparisons
while the heap is shrinking, but it's not at all clear that you'd save
more than what it will cost you to re-heapify all the dead records once
the run is over.

This sounded familiar... It sounds a lot like what this CVS log message is
describing as a mistaken idea:

revision 1.2
date: 1999-10-30 18:27:15 +0100; author: tgl; state: Exp; lines: +423 -191;

Further performance improvements in sorting: reduce number of comparisons
during initial run formation by keeping both current run and next-run tuples
in the same heap (yup, Knuth is smarter than I am). And, during merge
passes, make use of available sort memory to load multiple tuples from any
one input 'tape' at a time, thereby improving locality of access to the temp
file.

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's On-Demand Production Tuning

#29Tom Lane
tgl@sss.pgh.pa.us
In reply to: Gregory Stark (#28)
Re: Replacement Selection

Gregory Stark <stark@enterprisedb.com> writes:

"Tom Lane" <tgl@sss.pgh.pa.us> writes:

I guess you would save some comparisons
while the heap is shrinking, but it's not at all clear that you'd save
more than what it will cost you to re-heapify all the dead records once
the run is over.

This sounded familiar... It sounds a lot like what this CVS log message is
describing as a mistaken idea:

Wow, I had forgotten all about that; but yeah this sounds exactly like
my first-cut rewrite of PG's sorting back in 1999. I have some vague
memory of having dismissed Knuth's approach as being silly because of
the extra space and (small number of) cycles needed to compare run
numbers in the heap. I hadn't realized that there was an impact on the
total number of comparisons required :-(

The discussion from that time period in pgsql-hackers makes it sound
like you need a large test case to notice the problem, though.

regards, tom lane

#30Noname
mac_man2005@hotmail.it
In reply to: Simon Riggs (#1)
Re: Replacement Selection

Hi to all.

It seems a previous mail of mine with the following body hasn't been sent.
Sorry for possibly getting it twice.

Actually I have now modified that body, so it's worth reading once again.

Thanks for your attention.
Regards.

------------PREVIOUS MAIL--------------------------
Well, the refinements are the following:

Using 2 heaps instead of just one:
one heap creating a "descending" run and the
other one creating an "ascending" run.
Both are associated with the same "logical" run.

Suppose we want the input elements to be finally sorted in ascending
order. To do this we could QuickSort the first M initialization elements
in RAM and then divide them into 2 parts.
Suppose the first heap creates the following run:
10
9
8

And suppose the second heap creates the following run:
3
5
7

Those two runs can be seen as just one by mergesort... since they "could" be
physically merged into one single run: at first we could write the elements
3,5,7 and then the elements of the other run, read upside down.

Possible advantages:
Having two heaps of this kind lets RS adapt better to local variations of
the input trend.
This technique can be called Two Ways Replacement Selection (2WRS) just
because of those 2 heaps.
As an extreme example, an input already sorted in reverse order no longer
leads us to the worst case: with 2WRS it doesn't matter whether the input
is already sorted in ascending or descending order... in this case we'll
produce just one run instead of producing the maximum number of runs as in
the RS worst case (input in reverse order).
Moreover it lets us grow the current run in 2 directions: just imagine we
output runs into a regular file. With 2WRS this could be seen as starting
to output elements from the middle of such a file, the descending heap
writing elements from the middle upwards while the ascending one writes
from the middle downwards. This could imply getting a smaller number of
"dead records" (as I said in previous mails, a dead record is an element
that won't form part of the current run) and so having longer runs.

Other optimizations can be done, for example, with the "virtual
concatenation" technique: storing a cache of pairs
(first_element, last_element) for each created run. This could be useful
in case we can find 2 pairs (first_element_1, last_element_1) and
(first_element_2, last_element_2) with last_element_1 <= first_element_2.
In this case, those runs too can be seen as belonging to the same "logical
run" (actually they are 2 different RS physical runs, or even 4 in 2WRS,
but they can be seen as just one by mergesort). Of course, once those 2
(or 4) runs are logically merged into that single one, this one in turn
could be merged with other runs.
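
A toy sketch of that bookkeeping (my illustration only; RunInfo and the
greedy pass over adjacent runs are made up for the example):

    /* Runs are assumed sorted by their first element. */
    typedef struct { int first; int last; int next; } RunInfo;

    static void
    chain_runs(RunInfo *runs, int n)
    {
        for (int i = 0; i < n; i++)
            runs[i].next = -1;
        for (int i = 0; i + 1 < n; i++)
            if (runs[i].last <= runs[i + 1].first)
                runs[i].next = i + 1;   /* read run i, then run i+1,
                                         * as one logical run */
    }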

What does all that imply? Mergesort would actually consider a smaller
number of runs (since it would just work on logical runs). This means
fewer jumps between runs on disk.

Now... to test those refinements I should integrate my code into
PostgreSQL... but it's not that easy for me...

Thanks for your attention.
------------PREVIOUS MAIL--------------------------

#31Simon Riggs
simon@2ndquadrant.com
In reply to: Noname (#30)
Re: Replacement Selection

On Tue, 2007-11-27 at 09:25 +0100, mac_man2005@hotmail.it wrote:

Other optimizations can be done, for example, with the "virtual
concatenation" technique: storing a cache of pairs
(first_element, last_element) for each created run. This could be useful
in case we can find 2 pairs (first_element_1, last_element_1) and
(first_element_2, last_element_2) with last_element_1 <= first_element_2.
In this case, those runs too can be seen as belonging to the same "logical
run" (actually they are 2 different RS physical runs, or even 4 in 2WRS,
but they can be seen as just one by mergesort). Of course, once those 2
(or 4) runs are logically merged into that single one, this one in turn
could be merged with other runs.

What does all that imply? Mergesort would actually consider a smaller
number of runs (since it would just work on logical runs). This means
fewer jumps between runs on disk.

That's actually a refinement of an idea I've been working on for
optimizing sort. I'll post those separately.

--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com

#32Noname
mac_man2005@hotmail.it
In reply to: Simon Riggs (#1)
Re: Replacement Selection

Any comment about Two Ways Replacement Selection (two heaps instead of
just one)?

--------------------------------------------------
From: "Simon Riggs" <simon@2ndquadrant.com>
Sent: Tuesday, November 27, 2007 1:03 PM
To: <mac_man2005@hotmail.it>
Cc: <pgsql-hackers@postgresql.org>
Subject: Re: [HACKERS] Replacement Selection

On Tue, 2007-11-27 at 09:25 +0100, mac_man2005@hotmail.it wrote:

Other optimizations can be done, for example, with the "virtual
concatenation" technique: storing a cache of pairs
(first_element, last_element) for each created run. This could be useful
in case we can find 2 pairs (first_element_1, last_element_1) and
(first_element_2, last_element_2) with last_element_1 <= first_element_2.
In this case, those runs too can be seen as belonging to the same "logical
run" (actually they are 2 different RS physical runs, or even 4 in 2WRS,
but they can be seen as just one by mergesort). Of course, once those 2
(or 4) runs are logically merged into that single one, this one in turn
could be merged with other runs.

What does all that imply? Mergesort would actually consider a smaller
number of runs (since it would just work on logical runs). This means
fewer jumps between runs on disk.

That's actually a refinement of an idea I've been working on for
optimizing sort. I'll post those separately.

--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com

#33Simon Riggs
simon@2ndquadrant.com
In reply to: Noname (#32)
Re: Replacement Selection

On Tue, 2007-11-27 at 17:49 +0100, mac_man2005@hotmail.it wrote:

Any comment about Two Ways Replacement Selection (two heaps instead of just
one) ?

It might allow dynamic heap size management more easily than with a
single heap.

If you really think it will be better, try it. You'll learn loads, right
or wrong. It's difficult to forecast ahead of time what's a good idea
and what's a bad idea. The real truth of these things is that you need
to pop the hood and start tinkering, and it's quite hard to make a plan
for that. If you have a bad idea, just move on to the next one; they're
just ideas.

--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com

#34Noname
mac_man2005@hotmail.it
In reply to: Simon Riggs (#1)
Re: Replacement Selection

In puttuple_common(), the transition from an internal to an external sort
is performed at the bottom of the TSS_INITIAL case in the main switch
statement.

The transition? Do we do the internal sort somewhere else and then the
external sort here in tuplesort.c?

The function dumptuples() heapifies the in-core tuples (divides the
in-core tuples into initial runs and then advances the state to
TSS_BUILDRUNS).

I cannot see where dumptuples() "advances the state to TSS_BUILDRUNS".
I expected something like
state->status = TSS_BUILDRUNS;
to be executed inside dumptuples().

I recommend you run the code in the debugger on an external-sorting query:
watch two or three tuples go into the heap and you'll get the idea.

The top of the heap is at state->memtuples[0]; the heap goes down from
there. New tuples are added there and the heap is adjusted (using the
tuplesort_heap_siftup() function).

-Tim

#35Gregory Stark
stark@enterprisedb.com
In reply to: Noname (#34)
Re: Replacement Selection

<mac_man2005@hotmail.it> writes:

The function dumptuples() heapifies the in-core tuples (divides the in-core
tuples into initial runs and then advances the state to TSS_BUILDRUNS).

I cannot see where dumptuples() "advances the state to TSS_BUILDRUNS".
I expected something like
state->status = TSS_BUILDRUNS;
to be executed inside dumptuples().

There's only one "state->status = TSS_BUILDRUNS" in the whole file. It's
set by inittapes, which is called in one place, just before dumptuples.
Seriously, please try a bit harder before giving up.

The code in this file is quite interdependent, which means you'll have to
read through the whole file (except perhaps the last section, which just
contains the interface functions to feed in different types of datums or
tuples) to understand any of it.

But it's quite self-contained, which makes it one of the easier modules in
the system to get a functional grasp of. The hard part is understanding
the algorithm itself and working out the details of the array management.

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!

#36Manolo _
mac_man2005@hotmail.it
In reply to: Timothy J. Kordas (#26)
Compiling PG on linux

I'm trying to compile PG on Ubuntu in order to hack the tuplesort.c code.
I just downloaded and unpacked the source code and read the README and
INSTALL files.

I'm going to

./configure --enable-debug --enable-cassert --enable-depend

then I would

make
make install

Can I improve something by adding some missing option/command to the
above steps?
Where and how do I apply the TRACE_SORT option?
Any other useful options?

Sorry, I'm not so expert with Linux/PostgreSQL/gcc/make etc etc.

Thanks for your time.

----------------------------------------

Date: Mon, 26 Nov 2007 11:09:54 -0800
From: tkordas@greenplum.com
To: mac_man2005@hotmail.it
CC: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] Replacement Selection

mac_man2005@hotmail.it wrote:

I also implemented
Replacement Selection (RS), so if I'm able to integrate my RS I hope I
will be able to integrate the others too.

The existing code implements RS. Tom asked you to describe what improvements
you hope to make; I'm confident that he already understands how to implement
RS. :-)

**

Why don't you compile with TRACE_SORT enabled and watch the log output?

The function in tuplesort.c that you should start with is puttuple_common().

In puttuple_common(), the transition from an internal to an external sort
is performed at the bottom of the TSS_INITIAL case in the main switch
statement. The function dumptuples() heapifies the in-core tuples (divides
the in-core tuples into initial runs and then advances the state to
TSS_BUILDRUNS). All subsequent tuples will hit the TSS_BUILDRUNS case and
will insert tuples into the heap, emitting tuples for the current run as
it goes.

I recommend you run the code in the debugger on an external-sorting query:
watch two or three tuples go into the heap and you'll get the idea.

The top of the heap is at state->memtuples[0]; the heap goes down from
there. New tuples are added there and the heap is adjusted (using the
tuplesort_heap_siftup() function).

-Tim

#37Joshua D. Drake
jd@commandprompt.com
In reply to: Manolo _ (#36)
Re: Compiling PG on linux

Manolo _ wrote:

I'm trying to compile PG on Ubuntu in order to hack the tuplesort.c code.
I just downloaded and unpacked the source code and read the README and
INSTALL files.

I'm going to

./configure --enable-debug --enable-cassert --enable-depend

then I would

make
make install

Can I improve something by adding some missing option/command to the
above steps?
Where and how do I apply the TRACE_SORT option?
Any other useful options?

You don't want --enable-cassert on a production machine; it is a
performance hit.

Joshua D. Drake

#38Alvaro Herrera
alvherre@alvh.no-ip.org
In reply to: Manolo _ (#36)
Re: Compiling PG on linux

Manolo _ wrote:

I'm trying to compile PG on Ubuntu in order to hack the tuplesort.c code.
I just downloaded and unpacked the source code and read the README and
INSTALL files.

I'm going to

./configure --enable-debug --enable-cassert --enable-depend

then I would

make
make install

Can I improve something by adding some missing option/command to the
above steps?

Maybe you would want to change -O2 to -O0 in CFLAGS so that debugging is
easier (you will eventually need it).

Where and how do I apply the TRACE_SORT option?

Use pg_config_manual.h.
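
That is, roughly (a sketch; IIRC recent trees already define it there by
default):

    /* in src/include/pg_config_manual.h */
    #define TRACE_SORT 1

After rebuilding, the trace_sort GUC then switches the log output on and
off at run time.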

--
Alvaro Herrera http://www.amazon.com/gp/registry/DXLWNGRJD34J
"Si quieres ser creativo, aprende el arte de perder el tiempo"

#39Andrew Dunstan
andrew@dunslane.net
In reply to: Joshua D. Drake (#37)
Re: Compiling PG on linux

Joshua D. Drake wrote:

Manolo _ wrote:

./configure --enable-debug --enable-cassert --enable-depend

You don't want --enable-cassert on a production machine it is a
performance hit.

He's clearly not setting up for production, but for development, where
cassert is quite appropriate.

cheers

andrew

#40Bruce Momjian
bruce@momjian.us
In reply to: Christopher Browne (#9)
Re: Autovacuum and OldestXmin

Added to TODO:

        o Prevent autovacuum from running if an old transaction is still
          running from the last vacuum

http://archives.postgresql.org/pgsql-hackers/2007-11/msg00899.php

---------------------------------------------------------------------------

Christopher Browne wrote:

The world rejoiced as alvherre@alvh.no-ip.org (Alvaro Herrera) wrote:

Simon Riggs wrote:

I notice that slony records the oldestxmin that was running when it last
ran a VACUUM on its tables. This allows slony to avoid running a VACUUM
when it would be clearly pointless to do so.

AFAICS autovacuum does not do this, or did I miss that?

Hmm, I think it's just because nobody suggested it and I didn't come up
with the idea.

Whether it's a useful thing to do is a different matter. Why store it
per table and not more widely? Perhaps per database would be just as
useful; and maybe it would allow us to skip running autovac workers
when there is no point in doing so.

I think I need to take blame for that feature in Slony-I ;-).

I imagine it might be useful to add it to autovac, too. I thought it
was pretty neat that this could be successfully handled by comparison
with a single value (e.g. - eldest xmin), and I expect that using a
single quasi-global value should be good enough for autovac.

If there is some elderly, long-running transaction that isn't a
VACUUM, that will indeed inhibit VACUUM from doing any good, globally,
across the cluster, until such time as that transaction ends.

To, at that point, "inhibit" autovac from bothering to run VACUUM,
would seem like a good move. There is still value to running ANALYZE
on tables, so it doesn't warrant stopping autovac altogether, but this
scenario suggests a case for suppressing futile vacuuming, at least...
--
If this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me
http://linuxfinances.info/info/slony.html
It's hard to tell if someone is inconspicuous.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://postgres.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +

#41Alvaro Herrera
alvherre@commandprompt.com
In reply to: Alvaro Herrera (#2)
Re: Autovacuum and OldestXmin

Alvaro Herrera wrote:

Simon Riggs wrote:

[Also there is a comment saying "this is a bug" in autovacuum.c
Are we thinking to go production with that phrase in the code?]

Yeah, well, it's only a comment ;-) The problem is that a worker can
decide that a table needs to be vacuumed even if another worker has
finished vacuuming it within the last 500 ms. I proposed a mechanism to
close the hole but it was too much of a hassle.

I just committed a patch that should fix this problem.

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support