Re: CPU-intensive autovacuuming

Started by Phil Endecott · June 2005 · 111 messages · Lists: pgsql-hackers, pgsql-general
#1 Phil Endecott
spam_from_postgresql_general@chezphil.org
Lists: pgsql-hackers, pgsql-general

Following up on my own post from last night:

Could it be that there is some code in autovacuum that is O(n^2) in
the number of tables?

Browsing the code using webcvs, I have found this:

for (j = 0; j < PQntuples(res); j++)
{
    tbl_elem = DLGetHead(dbs->table_list);
    while (tbl_elem != NULL)
    {

I haven't really tried to understand what is going on in here, but it
does look like it is getting the result of the "pg_class join stats"
query and then matching it up against its internal list of tables using
nested loops, which is undoubtedly O(n^2) in the number of tables.

Have I correctly understood what is going on here?

--Phil.

#2 Matthew T. O'Connor
matthew@zeut.net
In reply to: Phil Endecott (#1)

Phil Endecott wrote:

Following up on my own post from last night:

Could it be that there is some code in autovacuum that is O(n^2) in
the number of tables?

Browsing the code using webcvs, I have found this:

for (j = 0; j < PQntuples(res); j++)
{
    tbl_elem = DLGetHead(dbs->table_list);
    while (tbl_elem != NULL)
    {

I haven't really tried to understand what is going on in here, but it
does look like it is getting the result of the "pg_class join stats"
query and then matching it up against its internal list of tables
using nested loops, which is undoubtedly O(n^2) in the number of tables.

Have I correctly understood what is going on here?

Indeed you have. I have heard a few similar reports, but perhaps none as
bad as yours. One person put in a small sleep value so that it doesn't
spin so tight. You could also just up the sleep delay so that it
doesn't do this work quite so often. No other quick suggestions.

#3 Phil Endecott
spam_from_postgresql_general@chezphil.org
In reply to: Matthew T. O'Connor (#2)

Matthew T. O'Connor wrote:

Phil Endecott wrote:

Could it be that there is some code in autovacuum that is O(n^2) in
the number of tables?

Browsing the code using webcvs, I have found this:

for (j = 0; j < PQntuples(res); j++)
{
    tbl_elem = DLGetHead(dbs->table_list);
    while (tbl_elem != NULL)
    {
Have I correctly understood what is going on here?

Indeed you have. I have heard a few similar reports, but perhaps none as
bad as yours. One person put in a small sleep value so that it doesn't
spin so tight. You could also just up the sleep delay so that it
doesn't do this work quite so often. No other quick suggestions.

I do wonder why autovacuum is keeping its table list in memory rather
than in the database.

But given that it is keeping it in memory, I think the real fix is to
sort that list (or keep it ordered when building or updating it). It is
trivial to also get the query results ordered, and they can then be
compared in O(n) time.

I notice various other places where there seem to be nested loops, e.g.
in the update_table_list function. I'm not sure if they can be fixed by
similar means.

--Phil.

#4 Matthew T. O'Connor
matthew@zeut.net
In reply to: Phil Endecott (#3)

Phil Endecott wrote:

Matthew T. O'Connor wrote:

Indeed you have. I have heard a few similar reports, but perhaps none
as bad as yours. One person put in a small sleep value so that it
doesn't spin so tight. You could also just up the sleep delay so
that it doesn't do this work quite so often. No other quick
suggestions.

I do wonder why autovacuum is keeping its table list in memory rather
than in the database.

For better or worse, this was a conscious design decision that the
contrib version of autovacuum be non-invasive to your database.

But given that it is keeping it in memory, I think the real fix is to
sort that list (or keep it ordered when building or updating it). It
is trivial to also get the query results ordered, and they can then be
compared in O(n) time.

I'm quite sure there is a better way, please submit a patch if you can.
This was never a real concern for most people since the number of tables
is typically small enough not to be a problem. The integrated version
of autovacuum that didn't make the cut before 8.0 avoids this problem
since the autovacuum data is stored in the database.

I notice various other places where there seem to be nested loops,
e.g. in the update_table_list function. I'm not sure if they can be
fixed by similar means.

I would think so, they all basically do the same type of loop.

#5 Phil Endecott
spam_from_postgresql_general@chezphil.org
In reply to: Matthew T. O'Connor (#4)

Matthew T. O'Connor wrote:

The integrated version
of autovacuum that didn't make the cut before 8.0 avoids this problem
since the autovacuum data is stored in the database.

What is the status of this? Is it something that will be included in
8.1 or 8.0.n? I might be able to patch the current code but that
doesn't seem like a useful thing to do if a better solution will arrive
eventually. I am currently running vacuums from a cron job and I think
I will be happy with that for the time being.

(Incidentally, I have also found that the indexes on my pg_attribute
table were taking up over half a gigabyte, which came down to less than
40 megs after reindexing them. Is there a case for having autovacuum
also call reindex?)

--Phil.

#6 Matthew T. O'Connor
matthew@zeut.net
In reply to: Phil Endecott (#5)

Phil Endecott wrote:

Matthew T. O'Connor wrote:

The integrated version of autovacuum that didn't make the cut before
8.0 avoids this problem since the autovacuum data is stored in the
database.

What is the status of this? Is it something that will be included in
8.1 or 8.0.n? I might be able to patch the current code but that
doesn't seem like a useful thing to do if a better solution will
arrive eventually. I am currently running vacuums from a cron job and
I think I will be happy with that for the time being.

This is a good question :-) I have been so busy with work lately that I
have not been able to work on it. I am currently trying to resurrect
the patch I sent in for 8.0 and update it so that it applies against
HEAD. Once that is done, I will need help from someone with the
portions of the work that I'm not comfortable with or capable of. The main
issue with the version I created during the 8.0 devel cycle is that it used
libpq to connect, query, and issue commands against the databases. This
was deemed bad, and I need help setting up the infrastructure to make
this happen without libpq. I hope to have my patch applying against
HEAD sometime this week, but it probably won't happen till next week.

So the summary of the autovacuum integration status is that we are fast
running out of time (feature freeze July 1), and I have very little time
to devote to this task. So you might want to submit your O(n) patch,
because unfortunately it looks like integrated autovacuum might slip
another release unless someone else steps up to work on it.

(Incidentally, I have also found that the indexes on my pg_attribute
table were taking up over half a gigabyte, which came down to less
than 40 megs after reindexing them. Is there a case for having
autovacuum also call reindex?)

Yes, there is certainly some merit to having autovacuum or something
similar perform other system maintenance tasks such as reindexing. I
just haven't taken it there yet. It does seem strange that your
pg_attribute table got that big; anyone have any insight here? You did
say you are using 7.4.2; I forget if that release has the index space
reclaiming code in vacuum, and there are also some autovacuum bugs in the
early 7.4.x releases. You might try upgrading to either 8.0.x or a later
7.4.x release.

Matthew O'Connor

#7 Bruce Momjian
bruce@momjian.us
In reply to: Phil Endecott (#5)

Phil Endecott wrote:

Matthew T. O'Connor wrote:

The integrated version
of autovacuum that didn't make the cut before 8.0 avoids this problem
since the autovacuum data is stored in the database.

What is the status of this? Is it something that will be included in
8.1 or 8.0.n? I might be able to patch the current code but that
doesn't seem like a useful thing to do if a better solution will arrive
eventually. I am currently running vacuums from a cron job and I think
I will be happy with that for the time being.

I will post about integrating pg_autovacuum into the backend for 8.1 in
a few minutes.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
#8 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Phil Endecott (#5)

Phil Endecott <spam_from_postgresql_general@chezphil.org> writes:

(Incidentally, I have also found that the indexes on my pg_attribute
table were taking up over half a gigabyte, which came down to less than
40 megs after reindexing them. Is there a case for having autovacuum
also call reindex?)

Lots of temp tables I suppose? If so that's not autovacuum's fault;
it wasn't getting told about the activity in pg_attribute until this
patch:

2005-03-31 18:20 tgl

* src/backend/postmaster/: pgstat.c (REL7_4_STABLE), pgstat.c
(REL8_0_STABLE), pgstat.c: Flush any remaining statistics counts
out to the collector at process exit. Without this, operations
triggered during backend exit (such as temp table deletions) won't
be counted ... which given heavy usage of temp tables can lead to
pg_autovacuum falling way behind on the need to vacuum pg_class and
pg_attribute. Per reports from Steve Crawford and others.

Unless the bloat occurred after you updated to 8.0.2, there's no issue.

regards, tom lane

#9 Thomas F. O'Connell
tfo@sitening.com
In reply to: Phil Endecott (#3)

Phil,

If you complete this patch, I'm very interested to see it.

I think I'm the person Matthew is talking about who inserted a sleep
value. Because of the sheer number of tables involved, even small
values of sleep caused pg_autovacuum to iterate too slowly over its
table lists to be of use in a production environment (where I still
find its behavior to be preferable to a complicated list of manual
vacuums performed in cron).

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

On Jun 7, 2005, at 6:16 AM, Phil Endecott wrote:


Matthew T. O'Connor wrote:

Phil Endecott wrote:

Could it be that there is some code in autovacuum that is O(n^2) in
the number of tables?

Browsing the code using webcvs, I have found this:

for (j = 0; j < PQntuples(res); j++)
{
    tbl_elem = DLGetHead(dbs->table_list);
    while (tbl_elem != NULL)
    {

Have I correctly understood what is going on here?

Indeed you have. I have heard a few similar reports but perhaps
none as bad as yours. One person put in a small sleep value so that
it doesn't spin so tight. You could also just up the sleep delay
so that it doesn't do this work quite so often. No other quick
suggestions.

I do wonder why autovacuum is keeping its table list in memory
rather than in the database.

But given that it is keeping it in memory, I think the real fix is
to sort that list (or keep it ordered when building or updating
it). It is trivial to also get the query results ordered, and they
can then be compared in O(n) time.

I notice various other places where there seem to be nested loops,
e.g. in the update_table_list function. I'm not sure if they can
be fixed by similar means.

--Phil.

#10Shelby Cain
alyandon@yahoo.com
In reply to: Thomas F.O'Connell (#9)
hackersgeneral
--- "Thomas F. O'Connell" <tfo@sitening.com> wrote:

Phil,

If you complete this patch, I'm very interested to see it.

I think I'm the person Matthew is talking about who inserted a sleep

value. Because of the sheer number of tables involved, even small
values of sleep caused pg_autovacuum to iterate too slowly over its
table lists to be of use in a production environment (where I still
find its behavior to be preferable to a complicated list of manual
vacuums performed in cron).


Were you sleeping every time through the loop? How about something
like:

if (j % 500 == 1) usleep(100000);

Regards,

Shelby Cain


#11 Thomas F. O'Connell
tfo@sitening.com
In reply to: Shelby Cain (#10)

I was usleeping in tiny increments in each iteration of the loop. I
didn't try breaking it into iterative groups like this.

Honestly, I'd prefer to see pg_autovacuum improved to do O(n) rather
than O(n^2) table activity. At this point, though, I'm probably not
too likely to have much time to hack pg_autovacuum before 8.1 is
released, although if it doesn't become integrated by beta feature
freeze, I might give it a shot.

But I hope if anyone completes the linear improvement, they'll post
to the lists.


On Jun 10, 2005, at 9:12 AM, Shelby Cain wrote:

--- "Thomas F. O'Connell" <tfo@sitening.com> wrote:

Were you sleeping every time through the loop? How about something
like:

if (j % 500 == 1) usleep(100000);

Regards,

Shelby Cain

#12 Hugo
htakada@gmail.com
In reply to: Thomas F. O'Connell (#11)
how to return a result set from a stored procedure

Hi everybody

I am trying to write a stored procedure that returns a result set, but it is
not working.
This is the function:
///
CREATE OR REPLACE FUNCTION
remisiones.fn_get_total_remitidoxprovision1("numeric")

RETURNS SETOF record AS
$BODY$
begin
select rm.provision as provision,
drm.producto as producto,
sum(drm.cantidad) as cantidad
FROM remisiones.remisiones rm, remisiones.detalles_remision drm
WHERE rm.remision = drm.remision and rm.provision = $1
GROUP BY rm.provision, drm.producto
ORDER BY rm.provision, drm.producto;
end;$BODY$

///
If I call this function from the interactive SQL window of pgAdmin III I get
this result:

select * from fn_gert_total_remitidosxprovision(1)

 row | refcursor
-----+--------------------
   1 | <unnamed portal 1>

Is there a way to display the values of the rows returned? I need it because
I need to use it in a DataWindow definition in a PowerBuilder app.

thanks in advance

Hugo

#13 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Thomas F. O'Connell (#11)

"Thomas F. O'Connell" <tfo@sitening.com> writes:

Honestly, I'd prefer to see pg_autovacuum improved to do O(n) rather
than O(n^2) table activity. At this point, though, I'm probably not
too likely to have much time to hack pg_autovacuum before 8.1 is
released, although if it doesn't become integrated by beta feature
freeze, I might give it a shot.

This would be vastly easier to fix if the code were integrated into the
backend first. In the backend environment you could just keep the info
in a dynahash.c hashtable instead of in a linear list. On the client
side, you have to roll your own hashing (or adapt dynahash to life
outside the backend environment).

regards, tom lane

#14 Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#13)
Autovacuum in the backend

One goal for 8.1 is to move /contrib/pg_autovacuum in to the backend. I
think it has to be done in four stages:

o move it into the backend and have it start/stop automatically
o move the autovacuum configuration parameters into postgresql.conf
o modify the code to use the backend API for error recovery
o modify the code to use the backend API utilities, like hashes

Who would like to get started on this? It seems pretty straight-forward.

---------------------------------------------------------------------------

Tom Lane wrote:

"Thomas F. O'Connell" <tfo@sitening.com> writes:

Honestly, I'd prefer to see pg_autovacuum improved to do O(n) rather
than O(n^2) table activity. At this point, though, I'm probably not
too likely to have much time to hack pg_autovacuum before 8.1 is
released, although if it doesn't become integrated by beta feature
freeze, I might give it a shot.

This would be vastly easier to fix if the code were integrated into the
backend first. In the backend environment you could just keep the info
in a dynahash.c hashtable instead of in a linear list. On the client
side, you have to roll your own hashing (or adapt dynahash to life
outside the backend environment).

regards, tom lane

#15 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#14)
Re: Autovacuum in the backend

Bruce Momjian <pgman@candle.pha.pa.us> writes:

One goal for 8.1 is to move /contrib/pg_autovacuum in to the backend. I
think it has to be done in four stages:

o move it into the backend and have it start/stop automatically
o move the autovacuum configuration parameters into postgresql.conf
o modify the code to use the backend API for error recovery
o modify the code to use the backend API utilities, like hashes

Who would like to get started on this? It seems pretty straight-forward.

A small problem here is that until you get at least to step 3
(backend-standard error handling), none of it is going to be acceptable
to commit. So I don't entirely buy Bruce's notion of bite-size pieces
of work. You can certainly work on it in that fashion, but it's not
going into 8.1 unless most of the above is done by the end of the month.

regards, tom lane

#16 Qingqing Zhou
zhouqq@cs.toronto.edu
In reply to: Tom Lane (#13)
Re: Autovacuum in the backend

"Bruce Momjian" <pgman@candle.pha.pa.us> writes

One goal for 8.1 is to move /contrib/pg_autovacuum in to the backend. I
think it has to be done in four stages:

o move it into the backend and have it start/stop automatically

The start/stop routine is quite like the bgwriter's. It requires pgstats to be
turned on. If it aborts unexpectedly, hopefully we can restart it. Shall
we have a RequestVacuum() to pass control to this process, so as to avoid
possible redundant vacuums from the user side?

o move the autovacuum configuration parameters into postgresql.conf

There are some GUC parameter prerequisites for incorporating it:
* stats_start_collector = true
* stats_row_level = true

We could add a parameter to let the user pass in the configuration parameters:
* autovacuum_command = "-s 100 -S 1 ..."

So if autovacuum_command is given, we will automatically set the above two
parameters to true?
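As a postgresql.conf fragment, the prerequisites above would look like this (note that autovacuum_command is only a proposal in this thread, not an actual 8.0 parameter):

```
# required for pg_autovacuum's statistics-based decisions (8.0-era names)
stats_start_collector = true
stats_row_level = true

# proposed in this thread only -- not an existing parameter
#autovacuum_command = '-s 100 -S 1 ...'
```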

o modify the code to use the backend API for error recovery
o modify the code to use the backend API utilities, like hashes

Change "connect/disconnect server" to "start/stop autovacuum process";
Change "execute query" to "backend APIs";
Change "list" to "hash";
Need to think more about how to handle various error conditions ...

Who would like to get started on this? It seems pretty straight-forward.

I'd like to give it a try.

Regards,
Qingqing

#17 Qingqing Zhou
zhouqq@cs.toronto.edu
In reply to: Bruce Momjian (#14)
Re: Autovacuum in the backend

"Tom Lane" <tgl@sss.pgh.pa.us> writes

A small problem here is that until you get at least to step 3
(backend-standard error handling), none of it is going to be acceptable
to commit. So I don't entirely buy Bruce's notion of bite-size pieces
of work. You can certainly work on it in that fashion, but it's not
going into 8.1 unless most of the above is done by the end of the month.

Scared ...

Regards,
Qingqing

#18 Neil Conway
neilc@samurai.com
In reply to: Qingqing Zhou (#16)
Re: Autovacuum in the backend

Qingqing Zhou wrote:

The start/stop routine is quite like Bgwriter. It requires PgStats to be
turned on.

Wasn't the plan to rewrite pg_autovacuum to use the FSM rather than the
stats collector?

-Neil

#19 Qingqing Zhou
zhouqq@cs.toronto.edu
In reply to: Tom Lane (#13)
Re: Autovacuum in the backend

"Neil Conway" <neilc@samurai.com> writes

Wasn't the plan to rewrite pg_autovacuum to use the FSM rather than the
stats collector?

I don't understand. Currently the basic logic of pg_autovacuum is to use the
pg_stat_all_tables numbers like n_tup_upd and n_tup_del to determine whether
a relation needs to be vacuumed. How would we use the FSM to get this
information?

Regards,
Qingqing

#20 Hannu Krosing
hannu@tm.ee
In reply to: Bruce Momjian (#14)
Re: Autovacuum in the backend

On Tue, 2005-06-14 at 21:23 -0400, Bruce Momjian wrote:

One goal for 8.1 is to move /contrib/pg_autovacuum in to the backend. I
think it has to be done in four stages:

o move it into the backend and have it start/stop automatically
o move the autovacuum configuration parameters into postgresql.conf
o modify the code to use the backend API for error recovery
o modify the code to use the backend API utilities, like hashes

Who would like to get started on this? It seems pretty straight-forward.

Can autovacuum yet be configured _not_ to run vacuum during some hours
or above some load?

Even better: to stop or pause a long-running vacuum if the load goes above
some limit.

If it goes into the backend before the above is done, it should at least be
possible to switch it off completely.

--
Hannu Krosing <hannu@skype.net>

#21 Hannu Krosing <hannu@tm.ee>, in reply to Qingqing Zhou (#19)
#22 Qingqing Zhou <zhouqq@cs.toronto.edu>, in reply to Tom Lane (#13)
#23 Roman Neuhauser <neuhauser@sigpipe.cz>, in reply to Hugo (#12)
#24 Bruce Momjian <bruce@momjian.us>, in reply to Hannu Krosing (#21)
#25 Bruce Momjian <bruce@momjian.us>, in reply to Qingqing Zhou (#22)
#26 Alvaro Herrera <alvherre@surnet.cl>, in reply to Qingqing Zhou (#16)
#27 Bruce Momjian <bruce@momjian.us>, in reply to Alvaro Herrera (#26)
#28 Josh Berkus <josh@agliodbs.com>, in reply to Qingqing Zhou (#16)
#29 Matthew T. O'Connor <matthew@zeut.net>, in reply to Bruce Momjian (#24)
#30 Matthew T. O'Connor <matthew@zeut.net>, in reply to Bruce Momjian (#25)
#31 Matthew T. O'Connor <matthew@zeut.net>, in reply to Josh Berkus (#28)
#32 Matthew T. O'Connor <matthew@zeut.net>, in reply to Hannu Krosing (#20)
#33 Bruce Momjian <bruce@momjian.us>, in reply to Matthew T. O'Connor (#32)
#34 Gavin Sherry <swm@linuxworld.com.au>, in reply to Bruce Momjian (#25)
#35 Christopher Kings-Lynne <chriskl@familyhealth.com.au>, in reply to Matthew T. O'Connor (#31)
#36 Bruce Momjian <bruce@momjian.us>, in reply to Gavin Sherry (#34)
#37 Alvaro Herrera <alvherre@surnet.cl>, in reply to Matthew T. O'Connor (#31)
#38 Alvaro Herrera <alvherre@surnet.cl>, in reply to Gavin Sherry (#34)
#39 Matthew T. O'Connor <matthew@zeut.net>, in reply to Alvaro Herrera (#37)
#40 Russell Smith <mr-russ@pws.com.au>, in reply to Alvaro Herrera (#38)
#41 Josh Berkus <josh@agliodbs.com>, in reply to Gavin Sherry (#34)
#42 Alvaro Herrera <alvherre@surnet.cl>, in reply to Matthew T. O'Connor (#39)
#43 Gavin Sherry <swm@linuxworld.com.au>, in reply to Alvaro Herrera (#38)
#44 Joshua D. Drake <jd@commandprompt.com>, in reply to Russell Smith (#40)
#45 Josh Berkus <josh@agliodbs.com>, in reply to Alvaro Herrera (#42)
#46 Josh Berkus <josh@agliodbs.com>, in reply to Joshua D. Drake (#44)
#47 Neil Conway <neilc@samurai.com>, in reply to Josh Berkus (#41)
#48 Neil Conway <neilc@samurai.com>, in reply to Alvaro Herrera (#42)
#49 Christopher Kings-Lynne <chriskl@familyhealth.com.au>, in reply to Josh Berkus (#46)
#50 Alvaro Herrera <alvherre@surnet.cl>, in reply to Neil Conway (#48)
#51 Matthew T. O'Connor <matthew@zeut.net>, in reply to Joshua D. Drake (#44)
#52 Matthew T. O'Connor <matthew@zeut.net>, in reply to Gavin Sherry (#43)
#53 Matthew T. O'Connor <matthew@zeut.net>, in reply to Neil Conway (#47)
#54 Matthew T. O'Connor <matthew@zeut.net>, in reply to Alvaro Herrera (#50)
#55 Matthew T. O'Connor <matthew@zeut.net>, in reply to Neil Conway (#48)
#56 Rod Taylor <rbt@rbt.ca>, in reply to Matthew T. O'Connor (#51)
#57 Tom Lane <tgl@sss.pgh.pa.us>, in reply to Alvaro Herrera (#37)
#58 Gavin Sherry <swm@linuxworld.com.au>, in reply to Matthew T. O'Connor (#52)
#59 Hans-Jürgen Schönig <postgres@cybertec.at>, in reply to Gavin Sherry (#58)
#60 Magnus Hagander <magnus@hagander.net>, in reply to Hans-Jürgen Schönig (#59) (pgsql-hackers only)
#61 Gavin Sherry <swm@linuxworld.com.au>, in reply to Hans-Jürgen Schönig (#59)
#62 Hans-Jürgen Schönig <postgres@cybertec.at>, in reply to Gavin Sherry (#61)
#63 Dave Page <dpage@pgadmin.org>, in reply to Hans-Jürgen Schönig (#62) (pgsql-hackers only)
#64 Andrew Dunstan <andrew@dunslane.net>, in reply to Gavin Sherry (#61)
#65 Chris Browne <cbbrowne@acm.org>, in reply to Bruce Momjian (#25)
#66 Matthew T. O'Connor <matthew@zeut.net>, in reply to Andrew Dunstan (#64)
#67 Matthew T. O'Connor <matthew@zeut.net>, in reply to Gavin Sherry (#58)
#68 Matthew T. O'Connor <matthew@zeut.net>, in reply to Hans-Jürgen Schönig (#59)
#69 Steve Atkins <steve@blighty.com>, in reply to Josh Berkus (#46)
#70 Andreas Pflug <pgadmin@pse-consulting.de>, in reply to Matthew T. O'Connor (#66)
#71 Alvaro Herrera <alvherre@surnet.cl>, in reply to Tom Lane (#57)
#72 Alvaro Herrera <alvherre@surnet.cl>, in reply to Gavin Sherry (#58)
#73 Hans-Jürgen Schönig <postgres@cybertec.at>, in reply to Alvaro Herrera (#72)
#74 Tom Lane <tgl@sss.pgh.pa.us>, in reply to Alvaro Herrera (#71)
#75 Dave Page <dpage@pgadmin.org>, in reply to Tom Lane (#74) (pgsql-hackers only)
#76 Josh Berkus <josh@agliodbs.com>, in reply to Alvaro Herrera (#71)
#77 Josh Berkus <josh@agliodbs.com>, in reply to Josh Berkus (#76)
#78 Josh Berkus <josh@agliodbs.com>, in reply to Dave Page (#75) (pgsql-hackers only)
#79 Doug McNaught <doug@mcnaught.org>, in reply to Josh Berkus (#76)
#80 Matthew T. O'Connor <matthew@zeut.net>, in reply to Tom Lane (#74)
#81 Tom Lane <tgl@sss.pgh.pa.us>, in reply to Matthew T. O'Connor (#80)
#82 Matthew T. O'Connor <matthew@zeut.net>, in reply to Tom Lane (#81)
#83 Hannu Krosing <hannu@tm.ee>, in reply to Tom Lane (#74)
#84 Andreas Pflug <pgadmin@pse-consulting.de>, in reply to Josh Berkus (#78) (pgsql-hackers only)
#85 Andrew Dunstan <andrew@dunslane.net>, in reply to Andreas Pflug (#84) (pgsql-hackers only)
#86 Gavin Sherry <swm@linuxworld.com.au>, in reply to Alvaro Herrera (#72)
#87 Andreas Pflug <pgadmin@pse-consulting.de>, in reply to Andrew Dunstan (#85) (pgsql-hackers only)
#88 Tim Allen <tim@proximity.com.au>, in reply to Josh Berkus (#76)
#89 Qingqing Zhou <zhouqq@cs.toronto.edu>, in reply to Tom Lane (#13)
#90 Bruce Momjian <bruce@momjian.us>, in reply to Gavin Sherry (#34)
#91 Thomas F. O'Connell <tfo@sitening.com>, in reply to Gavin Sherry (#86)
#92 Joshua D. Drake <jd@commandprompt.com>, in reply to Josh Berkus (#46)
#93 Joshua D. Drake <jd@commandprompt.com>, in reply to Matthew T. O'Connor (#51)
#94 Hans-Jürgen Schönig <postgres@cybertec.at>, in reply to Joshua D. Drake (#93)
#95 Andreas Pflug <pgadmin@pse-consulting.de>, in reply to Qingqing Zhou (#89)
#96 Russell Smith <mr-russ@pws.com.au>, in reply to Joshua D. Drake (#93)
#97 Russell Smith <mr-russ@pws.com.au>, in reply to Bruce Momjian (#90)
#98 Russell Smith <mr-russ@pws.com.au>, in reply to Matthew T. O'Connor (#67)
#99 Russell Smith <mr-russ@pws.com.au>, in reply to Andreas Pflug (#95)
#100 Gavin Sherry <swm@linuxworld.com.au>, in reply to Russell Smith (#97)
#101 Gavin Sherry <swm@linuxworld.com.au>, in reply to Russell Smith (#98)
#102 Matthew T. O'Connor <matthew@zeut.net>, in reply to Joshua D. Drake (#92)
#103 Matthew T. O'Connor <matthew@zeut.net>, in reply to Joshua D. Drake (#93)
#104 Matthew T. O'Connor <matthew@zeut.net>, in reply to Russell Smith (#96)
#105 Matthew T. O'Connor <matthew@zeut.net>, in reply to Russell Smith (#99)
#106 Tom Lane <tgl@sss.pgh.pa.us>, in reply to Matthew T. O'Connor (#105)
#107 Rod Taylor <rbt@rbt.ca>, in reply to Joshua D. Drake (#93)
#108 Josh Berkus <josh@agliodbs.com>, in reply to Joshua D. Drake (#93)
#109 Matthew T. O'Connor <matthew@zeut.net>, in reply to Chris Browne (#65)
#110 Jim Nasby <Jim.Nasby@BlueTreble.com>, in reply to Matthew T. O'Connor (#109)
#111 Bruce Momjian <bruce@momjian.us>, in reply to Gavin Sherry (#34)