Second attempt, roll your own autovacuum

Started by Glen Parker over 19 years ago · 132 messages · hackers, general
#1 Glen Parker
glenebob@nwlink.com
hackers, general

Hi all,

I am still trying to roll my own auto vacuum thingy. The goal is to
vacuum on demand in one step just like the old days, but not hit the
tables that never change (we have a lot). The idea now is to use a
combination of SQL and shell scripts to duplicate some of what auto
vacuum does. It actually doesn't seem that difficult. I have some SQL
that produces a list of tables that need vacuuming based on statistics
found in pg_stat_user_tables, and reltuples from pg_class, using the
same basic rules as auto vacuum per the documentation. So far so good.
The SQL actually produces an SQL script containing VACUUM commands,
which I can then feed back into psql. The result is a HUGE savings in
vacuum time at night.
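
The query described here might look something like the following sketch. The threshold constants (1000, 0.4) are illustrative only, and n_tup_upd + n_tup_del stands in for a dead-tuple count, since this release's pg_stat_user_tables has no dead-tuple column:

```sql
-- Sketch: emit a VACUUM command for each table whose update/delete
-- activity since the last stats reset exceeds an autovacuum-style
-- threshold.  The constants here are illustrative, not server defaults.
SELECT 'VACUUM ANALYZE '
       || quote_ident(s.schemaname) || '.' || quote_ident(s.relname) || ';'
FROM pg_stat_user_tables s
JOIN pg_class c ON c.oid = s.relid
WHERE s.n_tup_upd + s.n_tup_del > 1000 + 0.4 * c.reltuples;
```

Piping the output back through psql then vacuums only the tables that need it.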

The trouble now is, I don't see how to reset the statistics. My
assumption was that vacuum did it, but that appears to be false. How
does autovacuum do it? Can I do it with SQL?

-Glen

#2 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Glen Parker (#1)
hackers, general
Re: Second attempt, roll your own autovacuum

Glen Parker wrote:

The trouble now is, I don't see how to reset the statistics. My
assumption was that vacuum did it, but that appears to be false. How
does autovacuum do it? Can I do it with SQL?

Huh, reset what statistics? Autovacuum does not reset anything. What
statistics are you using? The number of dead tuples _should_ show as
zero on the stat system after a vacuum, certainly.

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

#3 Glen Parker
glenebob@nwlink.com
In reply to: Alvaro Herrera (#2)
hackers, general
Re: Second attempt, roll your own autovacuum

Alvaro Herrera wrote:

Glen Parker wrote:

The trouble now is, I don't see how to reset the statistics. My
assumption was that vacuum did it, but that appears to be false. How
does autovacuum do it? Can I do it with SQL?

Huh, reset what statistics? Autovacuum does not reset anything. What
statistics are you using? The number of dead tuples _should_ show as
zero on the stat system after a vacuum, certainly.

pg_stat_user_tables.[n_tup_ins|n_tup_upd|n_tup_del]. Hmm maybe I'm
doing this all wrong then. Is there a way to find the estimated dead
tuples from SQL, the same number autovacuum looks at?

-Glen

#4 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Glen Parker (#1)
hackers, general
Re: Second attempt, roll your own autovacuum

Glen Parker <glenebob@nwlink.com> writes:

I am still trying to roll my own auto vacuum thingy.

Um, is this purely for hack value? What is it that you find inadequate
about regular autovacuum? It is configurable through the pg_autovacuum
catalog --- which I'd be the first to agree is a sucky user interface,
but we're not going to set the user interface in concrete until we are
pretty confident it's feature-complete. So: what do you see missing?

regards, tom lane

#5 Csaba Nagy
nagy@ecircle-ag.com
In reply to: Tom Lane (#4)
hackers, general
Re: Second attempt, roll your own autovacuum

On Tue, 2006-12-19 at 07:28, Tom Lane wrote:

Glen Parker <glenebob@nwlink.com> writes:

I am still trying to roll my own auto vacuum thingy.

Um, is this purely for hack value? What is it that you find inadequate
about regular autovacuum? It is configurable through the pg_autovacuum
catalog --- which I'd be the first to agree is a sucky user interface,
but we're not going to set the user interface in concrete until we are
pretty confident it's feature-complete. So: what do you see missing?

I'm not sure what the OP had in mind, but the thing that is missing for
us is a time-window restriction. What I mean is a way to make sure a
vacuum will never kick in during main business hours, but only at night
at pre-specified hours, and only if the vacuum threshold was met for
the delete/update counts.

It would be nice if there could be a flexible time window specification,
like specifying only some days, or only weekends, or each night some
specific hours... but just one time window would be a big improvement
already.

Cheers,
Csaba.

#6 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Glen Parker (#3)
hackers, general
Re: Second attempt, roll your own autovacuum

Glen Parker wrote:

Alvaro Herrera wrote:

Glen Parker wrote:

The trouble now is, I don't see how to reset the statistics. My
assumption was that vacuum did it, but that appears to be false. How
does autovacuum do it? Can I do it with SQL?

Huh, reset what statistics? Autovacuum does not reset anything. What
statistics are you using? The number of dead tuples _should_ show as
zero on the stat system after a vacuum, certainly.

pg_stat_user_tables.[n_tup_ins|n_tup_upd|n_tup_del]. Hmm maybe I'm
doing this all wrong then. Is there a way to find the estimated dead
tuples from SQL, the same number autovacuum looks at?

Hmm, I thought the number of dead tuples was being exposed in
pg_stat_user_tables but evidently not. I think this is an oversight
which we could "fix" in 8.3. (For a current release I guess you could
install your own function; it shouldn't be too difficult to code.)
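
(Rather than writing a function from scratch, one existing option is the contrib pgstattuple module, which reports a dead-tuple count directly; note that it scans the whole table, so it is far more expensive than a stats lookup. The table name below is just an example.)

```sql
-- Requires contrib/pgstattuple; scans the whole table to count dead rows.
SELECT dead_tuple_count, dead_tuple_percent
FROM pgstattuple('public.some_table');
```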

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

#7 Chris Browne
cbbrowne@acm.org
In reply to: Glen Parker (#1)
hackers, general
Re: Second attempt, roll your own autovacuum

In an attempt to throw the authorities off his trail, tgl@sss.pgh.pa.us (Tom Lane) transmitted:

Glen Parker <glenebob@nwlink.com> writes:

I am still trying to roll my own auto vacuum thingy.

Um, is this purely for hack value? What is it that you find inadequate
about regular autovacuum? It is configurable through the pg_autovacuum
catalog --- which I'd be the first to agree is a sucky user interface,
but we're not going to set the user interface in concrete until we are
pretty confident it's feature-complete. So: what do you see missing?

I think that about a year ago I proposed a more sophisticated approach
to autovacuum; one part of it was to set up a "request queue," a table
where vacuum requests would get added.

There's some "producer" side stuff:

- There could be tables you want to vacuum exceedingly frequently;
those could get added periodically via something shaped like cron.

- One could ask for all the tables in a given database to be added to
the queue, so as to mean that all tables would get vacuumed every so
often.

- You might even inject requests 'quasi-manually', asking for the
queue to do work on particular tables.

There's some "policy side" stuff:

- Rules might be put in place to eliminate certain tables from the
queue, providing some intelligence as to what oughtn't get vacuumed

Then there's the "consumer":

- The obvious "dumb" approach is simply to have one connection that
runs through the queue, pulling the eldest entry, vacuuming, and
marking it done.

- The obvious extension is that if a table is listed multiple times in
the queue, it only need be processed once.

- There might be time-based exclusions to the effect that large tables
oughtn't be processed during certain periods (backup time?)

- One might have *two* consumers, one that will only process small
tables, so that those little, frequently updated tables can get
handled quickly, and another consumer that does larger tables.
Or perhaps that knows that it's fine, between 04:00 and 09:00 UTC,
to have 6 consumers, and blow through a lot of larger tables
simultaneously.

After all, changes in 8.2 mean that concurrent vacuums don't block
one another from cleaning out dead content.

I went as far as scripting up the simplest form of this, with
"injector" and queue and the "dumb consumer." Gave up because it
wasn't that much better than what we already had.
--
output = reverse("moc.liamg" "@" "enworbbc")
http://linuxfinances.info/info/
Minds, like parachutes, only function when they are open.

#8 Csaba Nagy
nagy@ecircle-ag.com
In reply to: Chris Browne (#7)
hackers, general
Re: Second attempt, roll your own autovacuum

From my POV, autovacuum is doing a very good job, with the exception of:

- There might be time-based exclusions to the effect that large tables
oughtn't be processed during certain periods (backup time?)

Either exception- or permission-based control (per table!) of when a
table can be vacuumed is needed, to avoid vacuuming big tables during
peak business periods. While this can be alleviated by setting lower
vacuum cost settings, and in 8.2 such a vacuum no longer blocks other
vacuums, you would still need the multiple-vacuum machinery so that
small tables keep getting processed:

- One might have *two* consumers, one that will only process small
tables, so that those little, frequently updated tables can get
handled quickly, and another consumer that does larger tables.
Or perhaps that knows that it's fine, between 04:00 and 09:00 UTC,
to have 6 consumers, and blow through a lot of larger tables
simultaneously.

So one of the two might be enough. I guess time-based
exclusions/permissions are not that easy to implement, and also not
easy to set up properly... so what could work well is:

- allow a "priority" setting per table in pg_autovacuum;
- create a vacuum thread for each priority;
- each thread checks its own tables to be processed, based on the
priority setting from pg_autovacuum;
- there has to be a default priority for tables not explicitly set up
in pg_autovacuum;
- possibly set a per-priority default vacuum cost and delay;

In 8.2 the different vacuum threads for the different priorities won't
step on each other's toes, and the default settings for the priorities
can be used to create some easily manageable settings for vacuuming
table categories with different update/delete patterns.

There could be some preset priorities, but creating new ones would be
useful so the user can create one per table update/delete pattern.

Maybe "priority" is not the best word for this, but I can't think of a
better one right now...

Cheers,
Csaba.

#9 Richard Huxton
dev@archonet.com
In reply to: Csaba Nagy (#8)
hackers, general
Re: Second attempt, roll your own autovacuum

Csaba Nagy wrote:

- One might have *two* consumers, one that will only process small
tables, so that those little, frequently updated tables can get
handled quickly, and another consumer that does larger tables.
Or perhaps that knows that it's fine, between 04:00 and 09:00 UTC,
to have 6 consumers, and blow through a lot of larger tables
simultaneously.

So one of the 2 might be enough. I guess time-based
exclusion/permissions are not that easy to implement, and also not easy
to set up properly... so what could work well is:

Alternatively, perhaps a threshold so that a table is only considered
for vacuum if:
(table-size * overall-activity-in-last-hour) < threshold
Ideally you'd define your units appropriately so that you could just
define threshold in postgresql.conf as 30% (of peak activity in last 100
hours say).

--
Richard Huxton
Archonet Ltd

#10 Csaba Nagy
nagy@ecircle-ag.com
In reply to: Richard Huxton (#9)
hackers, general
Re: Second attempt, roll your own autovacuum

Alternatively, perhaps a threshold so that a table is only considered
for vacuum if:
(table-size * overall-activity-in-last-hour) < threshold
Ideally you'd define your units appropriately so that you could just
define threshold in postgresql.conf as 30% (of peak activity in last 100
hours say).

No, this is definitely not enough. The problem scenario is when
autovacuum starts vacuuming a huge table and that keeps it busy 10 hours
and in the meantime the small but frequently updated tables get awfully
bloated...

The only solution to that is to have multiple vacuums running in
parallel, and it would be really nice if those multiple vacuums would be
coordinated by autovacuum too...

Cheers,
Csaba.

#11 Richard Huxton
dev@archonet.com
In reply to: Csaba Nagy (#10)
hackers, general
Re: Second attempt, roll your own autovacuum

Csaba Nagy wrote:

Alternatively, perhaps a threshold so that a table is only considered
for vacuum if:
(table-size * overall-activity-in-last-hour) < threshold
Ideally you'd define your units appropriately so that you could just
define threshold in postgresql.conf as 30% (of peak activity in last 100
hours say).

No, this is definitely not enough. The problem scenario is when
autovacuum starts vacuuming a huge table and that keeps it busy 10 hours
and in the meantime the small but frequently updated tables get awfully
bloated...

Ah (lightbulb goes on)! I see what you mean now.

--
Richard Huxton
Archonet Ltd

#12 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Csaba Nagy (#10)
hackers, general
Re: Second attempt, roll your own autovacuum

Csaba Nagy wrote:

Alternatively, perhaps a threshold so that a table is only considered
for vacuum if:
(table-size * overall-activity-in-last-hour) < threshold
Ideally you'd define your units appropriately so that you could just
define threshold in postgresql.conf as 30% (of peak activity in last 100
hours say).

No, this is definitely not enough. The problem scenario is when
autovacuum starts vacuuming a huge table and that keeps it busy 10 hours
and in the meantime the small but frequently updated tables get awfully
bloated...

The only solution to that is to have multiple vacuums running in
parallel, and it would be really nice if those multiple vacuums would be
coordinated by autovacuum too...

Yes, I agree, having multiple "autovacuum workers" would be useful.

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

#13 Matthew T. O'Connor
matthew@zeut.net
In reply to: Alvaro Herrera (#12)
hackers, general
Autovacuum Improvements (was: Second attempt, roll your own autovacuum)

Alvaro Herrera wrote:

Csaba Nagy wrote:

Alternatively, perhaps a threshold so that a table is only considered
for vacuum if:
(table-size * overall-activity-in-last-hour) < threshold
Ideally you'd define your units appropriately so that you could just
define threshold in postgresql.conf as 30% (of peak activity in last 100
hours say).

No, this is definitely not enough. The problem scenario is when
autovacuum starts vacuuming a huge table and that keeps it busy 10 hours
and in the meantime the small but frequently updated tables get awfully
bloated...

The only solution to that is to have multiple vacuums running in
parallel, and it would be really nice if those multiple vacuums would be
coordinated by autovacuum too...

Yes, I agree, having multiple "autovacuum workers" would be useful.

Bruce, I think there are a couple of items here that might be worth
adding to the TODO list.

1) Allow multiple "autovacuum workers": Currently autovacuum is only
capable of issuing one vacuum command at a time; for most workloads
this is sufficient, but it falls down when a hot (very actively
updated) table goes unvacuumed for a long period of time because a
large table is currently being worked on.

2) Once we can have multiple autovacuum workers: Create the concept of
hot tables that require more attention and should never be ignored for
more than X minutes; perhaps have one "autovacuum worker" per hot
table? (What do people think of this?)

3) Create "maintenance windows" for autovacuum: Currently autovacuum
makes all of its decisions based on a single per-table threshold value;
maintenance windows would allow setting a per-window, per-table
threshold. This makes it possible, for example, to forbid (or strongly
discourage) autovacuum from doing maintenance work during normal
business hours, either for the entire system or for specific tables.

None of those three items is on the TODO list; however, I think there
is general consensus that they (at least 1 and 3) are good ideas.

#14 Bruce Momjian
bruce@momjian.us
In reply to: Matthew T. O'Connor (#13)
hackers, general
Re: Autovacuum Improvements (was: Second attempt,

Yes, I think there are these TODO items. I was waiting to see what
additional replies there are before adding them.

---------------------------------------------------------------------------

Matthew O'Connor wrote:

Alvaro Herrera wrote:

Csaba Nagy wrote:

Alternatively, perhaps a threshold so that a table is only considered
for vacuum if:
(table-size * overall-activity-in-last-hour) < threshold
Ideally you'd define your units appropriately so that you could just
define threshold in postgresql.conf as 30% (of peak activity in last 100
hours say).

No, this is definitely not enough. The problem scenario is when
autovacuum starts vacuuming a huge table and that keeps it busy 10 hours
and in the meantime the small but frequently updated tables get awfully
bloated...

The only solution to that is to have multiple vacuums running in
parallel, and it would be really nice if those multiple vacuums would be
coordinated by autovacuum too...

Yes, I agree, having multiple "autovacuum workers" would be useful.

Bruce, I think there are a couple of items here that might be worth
adding to the TODO list.

1) Allow multiple "autovacuum workers": Currently autovacuum is only
capable of issuing one vacuum command at a time; for most workloads
this is sufficient, but it falls down when a hot (very actively
updated) table goes unvacuumed for a long period of time because a
large table is currently being worked on.

2) Once we can have multiple autovacuum workers: Create the concept of
hot tables that require more attention and should never be ignored for
more than X minutes; perhaps have one "autovacuum worker" per hot
table? (What do people think of this?)

3) Create "maintenance windows" for autovacuum: Currently autovacuum
makes all of its decisions based on a single per-table threshold value;
maintenance windows would allow setting a per-window, per-table
threshold. This makes it possible, for example, to forbid (or strongly
discourage) autovacuum from doing maintenance work during normal
business hours, either for the entire system or for specific tables.

None of those three items is on the TODO list; however, I think there
is general consensus that they (at least 1 and 3) are good ideas.

---------------------------(end of broadcast)---------------------------
TIP 4: Have you searched our list archives?

http://archives.postgresql.org/

--
Bruce Momjian bruce@momjian.us
EnterpriseDB http://www.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +

#15 Matthew T. O'Connor
matthew@zeut.net
In reply to: Bruce Momjian (#14)
hackers, general
Re: Autovacuum Improvements

Bruce Momjian wrote:

Yes, I think there are these TODO items. I was waiting to see what
additional replies there are before adding them.

Speaking of which, I was just looking at the TODO at:

http://www.postgresql.org/docs/faqs.TODO.html

and I think this item:

* Improve xid wraparound detection by recording per-table rather than
per-database

is done and working in 8.2 no?

#16 Chris Browne
cbbrowne@acm.org
In reply to: Glen Parker (#1)
hackers, general
Re: Autovacuum Improvements

matthew@zeut.net ("Matthew O'Connor") writes:

2) Once we can have multiple autovacuum workers: Create the concept of
hot tables that require more attention and should never be ignored for
more than X minutes; perhaps have one "autovacuum worker" per hot
table? (What do people think of this?)

One worker per "hot table" seems like overkill to me; you could chew
up a lot of connections that way, which could be a DOS.

That you have a "foot gun" is guaranteed; I'd rather it come in the
form of a badly chosen "hot list" hurting the rate of vacuuming than in
the potential to chew up a number of connections (which are a
relatively non-renewable resource).
--
(format nil "~S@~S" "cbbrowne" "cbbrowne.com")
http://linuxdatabases.info/info/
There are no "civil aviation for dummies" books out there and most of
you would probably be scared and spend a lot of your time looking up
if there was one. :-) -- Jordan Hubbard in c.u.b.f.m

#17 Chris Browne
cbbrowne@acm.org
In reply to: Bruce Momjian (#14)
hackers, general
Re: Autovacuum Improvements

matthew@zeut.net ("Matthew O'Connor") writes:

Bruce Momjian wrote:

Yes, I think there are these TODO items. I was waiting to see what
additional replies there are before adding them.

Speaking of which, I was just looking at the TODO at:

http://www.postgresql.org/docs/faqs.TODO.html

and I think this item:

* Improve xid wraparound detection by recording per-table rather than
per-database

is done and working in 8.2 no?

That's in the 8.2 release notes:

- Track maximum XID age within individual tables, instead of whole
databases (Alvaro)

This reduces the overhead involved in preventing transaction ID
wraparound, by avoiding unnecessary VACUUMs.
--
output = reverse("ofni.sesabatadxunil" "@" "enworbbc")
http://www3.sympatico.ca/cbbrowne/linux.html
The human race will decree from time to time: "There is something at
which it is absolutely forbidden to laugh."
-- Nietzche on Common Lisp

#18 Glen Parker
glenebob@nwlink.com
In reply to: Tom Lane (#4)
hackers, general
Re: Second attempt, roll your own autovacuum

Tom Lane wrote:

Glen Parker <glenebob@nwlink.com> writes:

I am still trying to roll my own auto vacuum thingy.

Um, is this purely for hack value?

Don't be silly ;-)
Honestly I sort of thought the problem was fairly obvious.

What is it that you find inadequate
about regular autovacuum? It is configurable through the pg_autovacuum
catalog --- which I'd be the first to agree is a sucky user interface,
but we're not going to set the user interface in concrete until we are
pretty confident it's feature-complete. So: what do you see missing?

Traditional vacuum does every table in the DB, which is absolutely The
Wrong Thing for us. Vacuum can be fired against individual tables, but
then how do I know which tables need it? Autovacuum is smart about
which tables it hits, but exceedingly stupid about *when* it hits them.

What I want is a way to do all needed vacuuming, in as short a time span
as possible, when I decide it should be done. For us, that's between ~2
AM and ~3 AM each morning. If a vacuum runs past 3 AM, so be it, but
it's better to hit it hard and try to be done by 3 AM than to lollygag
about it until 5 AM.

The obvious answer for me is to vacuum all the tables that autovacuum
would hit, but only on demand. Something like "VACUUM CONDITIONAL WHERE
autovacuum_says_so()" :-)

-Glen

#19 Matthew T. O'Connor
matthew@zeut.net
In reply to: Glen Parker (#18)
hackers, general
Re: Second attempt, roll your own autovacuum

Glen Parker wrote:

Tom Lane wrote:

What is it that you find inadequate
about regular autovacuum? It is configurable through the pg_autovacuum
catalog --- which I'd be the first to agree is a sucky user interface,
but we're not going to set the user interface in concrete until we are
pretty confident it's feature-complete. So: what do you see missing?

Traditional vacuum does every table in the DB, which is absolutely The
Wrong Thing for us. Vacuum can be fired against individual tables, but
then how do I know which tables need it? Autovacuum is smart about
which tables it hits, but exceedingly stupid about *when* it hits them.

What I want is a way to do all needed vacuuming, in as short a time span
as possible, when I decide it should be done. For us, that's between ~2
AM and ~3 AM each morning. If a vacuum runs past 3 AM, so be it, but
it's better to hit it hard and try to be done by 3 AM than to lollygag
about it until 5 AM.

The obvious answer for me is to vacuum all the tables that autovacuum
would hit, but only on demand. Something like "VACUUM CONDITIONAL WHERE
autovacuum_says_so()" :-)

I believe the correct answer to this problem is to write a cron script
that enables autovacuum at 2 AM and disables it at 3 AM. I think there
is some talk of this in the archives somewhere.

If you need to hit specific tables more often than that, you can have
another cron script that vacuums a table every hour, or something along
those lines.

#20 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Chris Browne (#17)
hackers, general
Re: Autovacuum Improvements

Chris Browne wrote:

matthew@zeut.net ("Matthew O'Connor") writes:

Bruce Momjian wrote:

Yes, I think there are these TODO items. I was waiting to see what
additional replies there are before adding them.

Speaking of which, I was just looking at the TODO at:

http://www.postgresql.org/docs/faqs.TODO.html

and I think this item:

* Improve xid wraparound detection by recording per-table rather than
per-database

is done and working in 8.2 no?

That's in the 8.2 release notes:

- Track maximum XID age within individual tables, instead of whole
databases (Alvaro)

This reduces the overhead involved in preventing transaction ID
wraparound, by avoiding unnecessary VACUUMs.

Yeah, this is what the TODO item was about, so it certainly is done.

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
