Re: [pgsql-hackers] Daily digest v1.9418 (15 messages)

Started by Jeff Janes · over 16 years ago · 5 messages
#1 Jeff Janes
jeff.janes@gmail.com

---------- Forwarded message ----------
From: "Kevin Grittner" <Kevin.Grittner@wicourts.gov>
To: "Robert Haas" <robertmhaas@gmail.com>, "Bruce Momjian" <bruce@momjian.us>
Date: Thu, 27 Aug 2009 09:07:05 -0500
Subject: Re: 8.5 release timetable, again
Robert Haas <robertmhaas@gmail.com> wrote:

> Maybe we should be looking at an expanded test suite that runs on a
> time scale of hours rather than seconds.
>
> If we could say that we had a regression test suite which covered X%
> of our code, and it passed on all Y platforms tested, that would
> certainly be a confidence booster, especially for large values of X.
>
> Part of the question, of course, is how to build up such a
> regression test suite.

Aren't there code coverage monitoring tools that could be run during
regression tests? Sure, it would take some time to review the results
and fashion tests to exercise chunks of code that were missed, but at
least we could quantify X and try to make incremental progress on
increasing it...
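
As a sketch of how that measurement could be wired up (assuming a
gcov-instrumented build; the commands, flags, and paths below are
illustrative, not an existing harness):

```python
# Rough sketch: build with coverage instrumentation, run the
# regression tests, and report which backend code never executed.
# The lcov/genhtml steps and paths are assumptions about a
# gcc/gcov toolchain.
import subprocess

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["./configure", "--enable-coverage"])  # instrument with gcov counters
run(["make", "-j4"])
run(["make", "check"])                     # backends write .gcda counter files

# Aggregate the counters and render an HTML report; lines that were
# never hit are the candidates for new targeted tests.
run(["lcov", "--capture", "--directory", ".",
     "--output-file", "regress.info"])
run(["genhtml", "regress.info", "--output-directory", "coverage-html"])
```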

But the fact that a piece of code was executed doesn't mean
it did the right thing. If it does something subtly wrong,
will we notice?

Jeff

#2 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Jeff Janes (#1)

Jeff Janes <jeff.janes@gmail.com> wrote:

> But the fact that a piece of code was executed doesn't mean
> it did the right thing. If it does something subtly wrong,
> will we notice?

That's why it takes some time to fashion a decent test.

On the other hand, if code is not being exercised at all during the
beta testing phase, it could do something dramatically wrong and we
wouldn't notice.

-Kevin

#3 Jeff Janes
jeff.janes@gmail.com
In reply to: Kevin Grittner (#2)

---------- Forwarded message ----------
From: Tom Lane <tgl@sss.pgh.pa.us>
To: Robert Haas <robertmhaas@gmail.com>
Date: Thu, 27 Aug 2009 10:11:24 -0400
Subject: Re: 8.5 release timetable, again

> What I'd like to see is some sort of test mechanism for WAL recovery.
> What I've done sometimes in the past (and recently had to fix the tests
> to re-enable) is to kill -9 a backend immediately after running the
> regression tests, let the system replay the WAL for the tests, and then
> take a pg_dump and compare that to the dump gotten after a conventional
> run. However this is quite haphazard since (a) the regression tests
> aren't especially designed to exercise all of the WAL logic, and (b)
> pg_dump might not show the effects of some problems, particularly not
> corruption in non-system indexes. It would be worth the trouble to
> create a more specific test methodology.
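
Scripted out, that procedure might look roughly like this (a sketch:
pg_ctl's immediate-mode stop stands in for the kill -9, and the data
directory, make targets, and database name are assumptions):

```python
# Sketch of the crash-then-compare procedure: run the tests, crash
# the server so the next start has to replay WAL, then diff a
# post-recovery pg_dump against one taken after a clean run.
import subprocess

PGDATA = "/tmp/pgdata"   # assumed scratch data directory
DB = "regression"

def dump(path):
    with open(path, "w") as f:
        subprocess.run(["pg_dump", DB], stdout=f, check=True)

def pg_ctl(*args):
    subprocess.run(["pg_ctl", *args, "-D", PGDATA], check=True)

# Reference: regression tests followed by a clean dump.
subprocess.run(["make", "installcheck"], check=True)
dump("clean.sql")

# Crash case: rerun the tests, then stop in immediate mode, which
# aborts without a shutdown checkpoint (much like kill -9), so the
# next start must reconstruct state from WAL.
subprocess.run(["make", "installcheck"], check=True)
pg_ctl("stop", "-m", "immediate")
pg_ctl("start", "-w")            # crash recovery replays the WAL

dump("recovered.sql")
# Any difference means recovery rebuilt different state than the
# conventional run produced.
subprocess.run(["diff", "-u", "clean.sql", "recovered.sql"])
```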

I hacked mdwrite so that it had a static int counter. When the counter hit
400 and if the guc_of_death was set, it would write out a partial block (to
simulate a partial page write) and then PANIC. I have some Perl code that
runs against the database doing a bunch of updates until the database dies.
Then, when it can reconnect again, it makes sure the data reflects what Perl
thinks it should. This is how I (belatedly) found and tracked down the bug
in the visibility bit. (What I was trying to do was determine if my toying
around with XLogInsert was breaking anything. Since the regression suite
wouldn't show me a problem if one existed, I came up with this. Then I
found things were broken even before I started toying with it...)

I don't know how lucky I was to hit upon a test that found an already
existing bug. I have to assume I was somewhat lucky, simply because it took
a run of many hours or overnight (with a simulated crash every 2 minutes or
so) to reliably detect the problem. But how do you turn something like this
into a regression test? Scattering the code with intentional crash-inducing
code that is there to exercise the error recovery paths seems like it would
be quite a mess.
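
In Python rather than Perl, the shape of such a driver might be
something like this (a sketch: psycopg2, the table, and all names are
assumptions; the crash itself comes from the hacked mdwrite):

```python
# Sketch of the driver loop: keep a client-side model of the table,
# update until the injected PANIC kills the server, then reconnect
# after recovery and check that committed state matches the model.
import random, time
import psycopg2

def connect():
    while True:
        try:
            conn = psycopg2.connect(dbname="crashtest")
            conn.autocommit = True
            return conn
        except psycopg2.OperationalError:
            time.sleep(1)        # server is down or still in recovery

expected = {i: 0 for i in range(1000)}   # model: id -> counter

cur = connect().cursor()
cur.execute("DROP TABLE IF EXISTS t")
cur.execute("CREATE TABLE t (id int PRIMARY KEY, n int)")
cur.execute("INSERT INTO t SELECT i, 0 FROM generate_series(0, 999) i")

while True:
    i = random.randrange(1000)
    try:
        cur.execute("UPDATE t SET n = n + 1 WHERE id = %s", (i,))
    except psycopg2.OperationalError:
        break                    # the simulated crash fired
    expected[i] += 1             # count only acknowledged commits

# Caveat: one in-flight update may have committed without being
# acknowledged; a real harness must allow for that ambiguity.
cur = connect().cursor()
cur.execute("SELECT id, n FROM t")
assert dict(cur.fetchall()) == expected, "recovery disagrees with the model"
```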

> In short: merely making the tests bigger doesn't impress me in the
> least. Focused testing on areas we aren't covering at all could be
> worth the trouble.

Do you have suggestions on what other areas need it?

Jeff

#4 Robert Haas
robertmhaas@gmail.com
In reply to: Jeff Janes (#3)

On Thu, Aug 27, 2009 at 12:47 PM, Jeff Janes <jeff.janes@gmail.com> wrote:

> ---------- Forwarded message ----------
> From: Tom Lane <tgl@sss.pgh.pa.us>
> To: Robert Haas <robertmhaas@gmail.com>
> Date: Thu, 27 Aug 2009 10:11:24 -0400
> Subject: Re: 8.5 release timetable, again
>
>> What I'd like to see is some sort of test mechanism for WAL recovery.
>> What I've done sometimes in the past (and recently had to fix the tests
>> to re-enable) is to kill -9 a backend immediately after running the
>> regression tests, let the system replay the WAL for the tests, and then
>> take a pg_dump and compare that to the dump gotten after a conventional
>> run. However this is quite haphazard since (a) the regression tests
>> aren't especially designed to exercise all of the WAL logic, and (b)
>> pg_dump might not show the effects of some problems, particularly not
>> corruption in non-system indexes. It would be worth the trouble to
>> create a more specific test methodology.
>
> I hacked mdwrite so that it had a static int counter. When the counter hit
> 400 and if the guc_of_death was set, it would write out a partial block (to
> simulate a partial page write) and then PANIC. I have some Perl code that
> runs against the database doing a bunch of updates until the database dies.
> Then, when it can reconnect again, it makes sure the data reflects what Perl
> thinks it should. This is how I (belatedly) found and tracked down the bug
> in the visibility bit. (What I was trying to do was determine if my toying
> around with XLogInsert was breaking anything. Since the regression suite
> wouldn't show me a problem if one existed, I came up with this. Then I
> found things were broken even before I started toying with it...)
>
> I don't know how lucky I was to hit upon a test that found an already
> existing bug. I have to assume I was somewhat lucky, simply because it took
> a run of many hours or overnight (with a simulated crash every 2 minutes or
> so) to reliably detect the problem. But how do you turn something like this
> into a regression test? Scattering the code with intentional crash-inducing
> code that is there to exercise the error recovery paths seems like it would
> be quite a mess.

This is pretty cool, IMO. Admittedly, it does seem hard to bottle it,
but you managed it, so it's not completely impossible. What you could
do for this kind of thing is a series of patches and driver scripts, so
you build PostgreSQL with the patch, then run the driver script
against it. Probably we'd want to standardize some kind of framework
for the driver scripts, once we had a list of ideas for testing and
some idea of what it should look like.
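
For instance (a sketch only; every path and name here is invented for
illustration, not an existing framework):

```python
# Invented sketch of the patch-plus-driver convention: each entry
# pairs a fault-injection patch for the backend with a script that
# drives the rebuilt server.
import subprocess

TESTS = [
    ("patches/mdwrite-partial-write.patch", "drivers/crash_recovery.py"),
    ("patches/xlog-random-panic.patch",     "drivers/crash_recovery.py"),
]

for patch, driver in TESTS:
    subprocess.run(["patch", "-p1", "-i", patch], check=True)   # apply
    subprocess.run(["make", "-j4", "install"], check=True)      # rebuild
    try:
        subprocess.run(["python", driver], check=True)          # exercise it
    finally:
        subprocess.run(["patch", "-p1", "-R", "-i", patch], check=True)  # revert
```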

...Robert

P.S. The subject line of this thread is not ideal.

#5 Martijn van Oosterhout
kleptog@svana.org
In reply to: Robert Haas (#4)

On Thu, Aug 27, 2009 at 01:12:20PM -0400, Robert Haas wrote:

> This is pretty cool, IMO. Admittedly, it does seem hard to bottle it,
> but you managed it, so it's not completely impossible. What you could
> do for this kind of thing is a series of patches and driver scripts, so
> you build PostgreSQL with the patch, then run the driver script
> against it. Probably we'd want to standardize some kind of framework
> for the driver scripts, once we had a list of ideas for testing and
> some idea of what it should look like.

Another similar idea I've had in the back of my head for a while is to
set up postgres so it is the only process in a VM. Subsequently, after
every single write() syscall, snapshot the filesystem and then run the
recovery process over each snapshot.

It would likely take an unbelievably long time to run, and maybe
there's some trick to speed it up, but together with code coverage
results it could give you a good picture of the reliability of the
recovery process.

Probably more a research project than anything else, though.

Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/


Please line up in a tree and maintain the heap invariant while
boarding. Thank you for flying nlogn airlines.