Going for "all green" buildfarm results
I've been making another pass over getting rid of buildfarm failures.
The remaining ones I see at the moment are:
firefly HEAD: intermittent failures in the stats test. We seem to have
fixed every other platform back in January, but not this one.
kudu HEAD: one-time failure 6/1/06 in statement_timeout test, never seen
before. Is it possible system was under enough load that the 1-second
timeout fired before control reached the exception block?
tapir HEAD: pilot error, insufficient SysV shmem settings
carp various: carp seems to have *serious* hardware problems, as it
has been failing randomly in all branches for a long time. I suggest
putting that poor machine out to pasture.
penguin 8.0: fails in tsearch2. Previous investigation says that the
failure is unfixable without initdb, which we are not going to force
for 8.0 branch. I suggest retiring penguin from checking 8.0, as
there's not much point in continuing to see a failure there. Or is
it worth improving buildfarm to be able to skip specific tests?
penguin 7.4: fails in initdb, with what seems to be a variant of the
alignment issue that kills tsearch2 in 8.0. We won't fix this either,
so again might as well stop tracking this branch on this machine.
cobra, stoat, sponge 7.4: pilot error. Either install Tk or configure
--without-tk.
firefly 7.4: dblink test fails, with what looks like an rpath problem.
Another one that we fixed awhile ago, and the fix worked on every
platform but this one.
firefly 7.3: trivial regression diffs; we could install variant
comparison files if anyone cared.
cobra, stoat, caribou 7.3: same Tk configuration error as in 7.4 branch
Firefly is obviously the outlier here. I dunno if anyone cares enough
about SCO to spend time investigating it (I don't). Most of the others
just need a little bit of attention from the machine owner.
regards, tom lane
Tom Lane wrote:
I've been making another pass over getting rid of buildfarm failures.
The remaining ones I see at the moment are:
firefly HEAD: intermittent failures in the stats test. We seem to have
fixed every other platform back in January, but not this one.
kudu HEAD: one-time failure 6/1/06 in statement_timeout test, never seen
before. Is it possible system was under enough load that the 1-second
timeout fired before control reached the exception block?
[...]
FWIW: lionfish had a weird make check error 3 weeks ago which I
(unsuccessfully) tried to reproduce multiple times after that:
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=lionfish&dt=2006-05-12%2005:30:14
[...]
cobra, stoat, sponge 7.4: pilot error. Either install Tk or configure
--without-tk.
Sorry about that, but the issue with sponge on 7.4 was fixed nearly a week
ago; there have just been no changes until today to trigger a new build ;-)
Stefan
Tom Lane wrote:
I've been making another pass over getting rid of buildfarm failures.
The remaining ones I see at the moment are:
firefly HEAD: intermittent failures in the stats test. We seem to
have fixed every other platform back in January, but not this one.
firefly 7.4: dblink test fails, with what looks like an rpath problem.
Another one that we fixed awhile ago, and the fix worked on every
platform but this one.
firefly 7.3: trivial regression diffs; we could install variant
comparison files if anyone cared.
Firefly is obviously the outlier here. I dunno if anyone cares
enough about SCO to spend time investigating it (I don't). Most of
the others just need a little bit of attention from the machine
owner.
If I generate fixes for firefly (I'm the owner), would they have a prayer
of being applied?
LER
regards, tom lane
--
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 512-248-2683 E-Mail: ler@lerctr.org
US Mail: 430 Valona Loop, Round Rock, TX 78681-3683 US
Larry Rosenman said:
Tom Lane wrote:
I've been making another pass over getting rid of buildfarm failures.
The remaining ones I see at the moment are:
firefly HEAD: intermittent failures in the stats test. We seem to
have fixed every other platform back in January, but not this one.
firefly 7.4: dblink test fails, with what looks like an rpath problem.
Another one that we fixed awhile ago, and the fix worked on every
platform but this one.
firefly 7.3: trivial regression diffs; we could install variant
comparison files if anyone cared.
Firefly is obviously the outlier here. I dunno if anyone cares
enough about SCO to spend time investigating it (I don't). Most of
the others just need a little bit of attention from the machine
owner.
If I generate fixes for firefly (I'm the owner), would they have a
prayer of being applied?
Sure, although I wouldn't bother with 7.3 - just take 7.3 out of firefly's
build schedule. That's not carte blanche on fixes, of course - we'd have to
see them.
cheers
andrew
Tom Lane wrote:
Or is
it worth improving buildfarm to be able to skip specific tests?
There is a session on buildfarm improvements scheduled for the Toronto
conference. This is certainly one possibility.
cheers
andrew
Stefan Kaltenbrunner <stefan@kaltenbrunner.cc> writes:
FWIW: lionfish had a weird make check error 3 weeks ago which I
(unsuccessfully) tried to reproduce multiple times after that:
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=lionfish&dt=2006-05-12%2005:30:14
Weird.
SELECT ''::text AS eleven, unique1, unique2, stringu1
FROM onek WHERE unique1 < 50
ORDER BY unique1 DESC LIMIT 20 OFFSET 39;
! ERROR: could not open relation with OID 27035
AFAICS, the only way to get that error in HEAD is if ScanPgRelation
can't find a pg_class row with the mentioned OID. Presumably 27035
belongs to "onek" or one of its indexes. The very next command also
refers to "onek", and doesn't fail, so what we seem to have here is
a transient lookup failure. We've found a btree bug like that once
before ... wonder if there's still one left?
regards, tom lane
"Andrew Dunstan" <andrew@dunslane.net> writes:
Larry Rosenman said:
If I generate fixes for firefly (I'm the owner), would they have a
prayer of being applied?
Sure, although I wouldn't bother with 7.3 - just take 7.3 out of firefly's
build schedule. That's not carte blanche on fixes, of course - we'd have to
see them.
What he said ... it'd depend entirely on how ugly the fixes are ;-)
regards, tom lane
Tom Lane wrote:
"Andrew Dunstan" <andrew@dunslane.net> writes:
Larry Rosenman said:
If I generate fixes for firefly (I'm the owner), would they have a
prayer of being applied?
Sure, although I wouldn't bother with 7.3 - just take 7.3 out of
firefly's build schedule. That's not carte blanche on fixes, of
course - we'd have to see them.
What he said ... it'd depend entirely on how ugly the fixes are ;-)
Ok, 7.3 is out of firefly's crontab.
I'll look into 7.4.
LER
--
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 512-248-2683 E-Mail: ler@lerctr.org
US Mail: 430 Valona Loop, Round Rock, TX 78681-3893
Tom Lane wrote:
Stefan Kaltenbrunner <stefan@kaltenbrunner.cc> writes:
FWIW: lionfish had a weird make check error 3 weeks ago which I
(unsuccessfully) tried to reproduce multiple times after that:
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=lionfish&dt=2006-05-12%2005:30:14
Weird.
SELECT ''::text AS eleven, unique1, unique2, stringu1
FROM onek WHERE unique1 < 50
ORDER BY unique1 DESC LIMIT 20 OFFSET 39;
! ERROR: could not open relation with OID 27035
AFAICS, the only way to get that error in HEAD is if ScanPgRelation
can't find a pg_class row with the mentioned OID. Presumably 27035
belongs to "onek" or one of its indexes. The very next command also
refers to "onek", and doesn't fail, so what we seem to have here is
a transient lookup failure. We've found a btree bug like that once
before ... wonder if there's still one left?
If there is still one left, it must be quite hard to trigger (using the
regression tests). Like I said before, I tried quite hard to reproduce
the issue back then - without any success.
Stefan
Larry Rosenman wrote:
Tom Lane wrote:
"Andrew Dunstan" <andrew@dunslane.net> writes:
Larry Rosenman said:
If I generate fixes for firefly (I'm the owner), would they have a
prayer of being applied?
Sure, although I wouldn't bother with 7.3 - just take 7.3 out of
firefly's build schedule. That's not carte blanche on fixes, of
course - we'd have to see them.
What he said ... it'd depend entirely on how ugly the fixes are ;-)
Ok, 7.3 is out of firefly's crontab.
I'll look into 7.4.
LER
I've taken the cheater's way out for 7.4 and turned off the Perl stuff for
now.
As to HEAD, I've played with the system send/recv space parameters; let's
see if that helps the stats stuff.
LER
--
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 512-248-2683 E-Mail: ler@lerctr.org
US Mail: 430 Valona Loop, Round Rock, TX 78681-3893
Larry Rosenman wrote:
Larry Rosenman wrote:
Tom Lane wrote:
"Andrew Dunstan" <andrew@dunslane.net> writes:
Larry Rosenman said:
If I generate fixes for firefly (I'm the owner), would they have a
prayer of being applied?
Sure, although I wouldn't bother with 7.3 - just take 7.3 out of
firefly's build schedule. That's not carte blanche on fixes, of
course - we'd have to see them.
What he said ... it'd depend entirely on how ugly the fixes are ;-)
Ok, 7.3 is out of firefly's crontab.
I'll look into 7.4.
LER
I've taken the cheater's way out for 7.4 and turned off the Perl stuff for
now.
As to HEAD, I've played with the system send/recv space parameters; let's
see if that helps the stats stuff.
LER
well, the changes didn't help.
I've pulled ALL the cronjobs from firefly.
consider it dead.
Since it is an outlier, it's not useful.
LER
--
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 512-248-2683 E-Mail: ler@lerctr.org
US Mail: 430 Valona Loop, Round Rock, TX 78681-3893
-------- Original Message --------
From: Tom Lane <tgl@sss.pgh.pa.us>
kudu HEAD: one-time failure 6/1/06 in statement_timeout test, never seen
before. Is it possible system was under enough load that the 1-second
timeout fired before control reached the exception block?
The load here was no different than any other day. As to whether it's a
real issue or not I have no idea. It is a virtual machine that is subject
to the load on other VMs, but none of them were scheduled to do
anything at the time.
Kris Jurka
Larry Rosenman wrote:
well, the changes didn't help.
I've pulled ALL the cronjobs from firefly.
consider it dead.
Since it is an outlier, it's not useful.
OK, I am marking firefly as retired. That means we have no coverage for
Unixware.
cheers
andrew
I can take other if that helps.
Larry, could you help me with the setup?
Regards,
On Thu, 8 Jun 2006, Andrew Dunstan wrote:
Date: Thu, 08 Jun 2006 10:54:09 -0400
From: Andrew Dunstan <andrew@dunslane.net>
Newsgroups: pgsql.hackers
Subject: Re: Going for 'all green' buildfarm results
Larry Rosenman wrote:
well, the changes didn't help.
I've pulled ALL the cronjobs from firefly.
consider it dead.
Since it is an outlier, it's not useful.
OK, I am marking firefly as retired. That means we have no coverage for
Unixware.
cheers
andrew
--
Olivier PRENANT Tel: +33-5-61-50-97-00 (Work)
15, Chemin des Monges +33-5-61-50-97-01 (Fax)
31190 AUTERIVE +33-6-07-63-80-64 (GSM)
FRANCE Email: ohp@pyrenet.fr
------------------------------------------------------------------------------
Make your life a dream, make your dream a reality. (St Exupery)
On Fri, 9 Jun 2006 ohp@pyrenet.fr wrote:
Date: Fri, 9 Jun 2006 11:12:07 +0200
From: ohp@pyrenet.fr
To: Andrew Dunstan <andrew@dunslane.net>, Larry Rosenman <ler@lerctr.org>
Newsgroups: pgsql.hackers
Subject: Re: Going for 'all green' buildfarm results
I can take other if that helps.
Ooops... takeover :)
Larry, could you help me with the setup?
Regards,
On Thu, 8 Jun 2006, Andrew Dunstan wrote:
Date: Thu, 08 Jun 2006 10:54:09 -0400
From: Andrew Dunstan <andrew@dunslane.net>
Newsgroups: pgsql.hackers
Subject: Re: Going for 'all green' buildfarm results
Larry Rosenman wrote:
well, the changes didn't help.
I've pulled ALL the cronjobs from firefly.
consider it dead.
Since it is an outlier, it's not useful.
OK, I am marking firefly as retired. That means we have no coverage for
Unixware.
cheers
andrew
--
Olivier PRENANT Tel: +33-5-61-50-97-00 (Work)
15, Chemin des Monges +33-5-61-50-97-01 (Fax)
31190 AUTERIVE +33-6-07-63-80-64 (GSM)
FRANCE Email: ohp@pyrenet.fr
------------------------------------------------------------------------------
Make your life a dream, make your dream a reality. (St Exupery)
Tom Lane wrote:
Stefan Kaltenbrunner <stefan@kaltenbrunner.cc> writes:
FWIW: lionfish had a weird make check error 3 weeks ago which I
(unsuccessfully) tried to reproduce multiple times after that:
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=lionfish&dt=2006-05-12%2005:30:14
Weird.
SELECT ''::text AS eleven, unique1, unique2, stringu1
FROM onek WHERE unique1 < 50
ORDER BY unique1 DESC LIMIT 20 OFFSET 39;
! ERROR: could not open relation with OID 27035
AFAICS, the only way to get that error in HEAD is if ScanPgRelation
can't find a pg_class row with the mentioned OID. Presumably 27035
belongs to "onek" or one of its indexes. The very next command also
refers to "onek", and doesn't fail, so what we seem to have here is
a transient lookup failure. We've found a btree bug like that once
before ... wonder if there's still one left?
FYI: lionfish just managed to hit that problem again:
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=lionfish&dt=2006-07-29%2023:30:06
Stefan
Stefan Kaltenbrunner wrote:
Tom Lane wrote:
Stefan Kaltenbrunner <stefan@kaltenbrunner.cc> writes:
FWIW: lionfish had a weird make check error 3 weeks ago which I
(unsuccessfully) tried to reproduce multiple times after that:
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=lionfish&dt=2006-05-12%2005:30:14
Weird.
SELECT ''::text AS eleven, unique1, unique2, stringu1
FROM onek WHERE unique1 < 50
ORDER BY unique1 DESC LIMIT 20 OFFSET 39;
! ERROR: could not open relation with OID 27035
AFAICS, the only way to get that error in HEAD is if ScanPgRelation
can't find a pg_class row with the mentioned OID. Presumably 27035
belongs to "onek" or one of its indexes. The very next command also
refers to "onek", and doesn't fail, so what we seem to have here is
a transient lookup failure. We've found a btree bug like that once
before ... wonder if there's still one left?
FYI: lionfish just managed to hit that problem again:
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=lionfish&dt=2006-07-29%2023:30:06
The error message this time is
! ERROR: could not open relation with OID 27006
It's worth mentioning that the portals_p2 test, which happens in the
parallel group previous to where this test is run, also accesses the
onek table successfully. It may be interesting to see exactly what
relation is 27006.
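For the record, a catalog query along these lines (run against the failed
cluster before it gets recycled) would answer that; the OID literals are
just the values reported in the two failed runs:

SELECT oid, relname, relkind
  FROM pg_class
 WHERE oid IN (27006, 27035);

-- or, working backwards from the table the failing query touched:
SELECT c.oid, c.relname
  FROM pg_class c
 WHERE c.oid = 'onek'::regclass
    OR c.oid IN (SELECT indexrelid FROM pg_index
                  WHERE indrelid = 'onek'::regclass);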
The test alter_table, which is on the same parallel group as limit (the
failing test), contains these lines:
ALTER INDEX onek_unique1 RENAME TO tmp_onek_unique1;
ALTER INDEX tmp_onek_unique1 RENAME TO onek_unique1;
Maybe this is related.
--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Alvaro Herrera wrote:
Stefan Kaltenbrunner wrote:
Tom Lane wrote:
Stefan Kaltenbrunner <stefan@kaltenbrunner.cc> writes:
FWIW: lionfish had a weird make check error 3 weeks ago which I
(unsuccessfully) tried to reproduce multiple times after that:
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=lionfish&dt=2006-05-12%2005:30:14
Weird.
SELECT ''::text AS eleven, unique1, unique2, stringu1
FROM onek WHERE unique1 < 50
ORDER BY unique1 DESC LIMIT 20 OFFSET 39;
! ERROR: could not open relation with OID 27035
AFAICS, the only way to get that error in HEAD is if ScanPgRelation
can't find a pg_class row with the mentioned OID. Presumably 27035
belongs to "onek" or one of its indexes. The very next command also
refers to "onek", and doesn't fail, so what we seem to have here is
a transient lookup failure. We've found a btree bug like that once
before ... wonder if there's still one left?
FYI: lionfish just managed to hit that problem again:
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=lionfish&dt=2006-07-29%2023:30:06
The error message this time is
! ERROR: could not open relation with OID 27006
yeah and before it was:
! ERROR: could not open relation with OID 27035
which looks quite related :-)
It's worth mentioning that the portals_p2 test, which happens in the
parallel group previous to where this test is run, also accesses the
onek table successfully. It may be interesting to see exactly what
relation is 27006.
Sorry, but I don't have access to the cluster in question any more
(lionfish is quite resource-starved, and I only enabled keeping failed
builds on HEAD after the last incident ...)
The test alter_table, which is on the same parallel group as limit (the
failing test), contains these lines:
ALTER INDEX onek_unique1 RENAME TO tmp_onek_unique1;
ALTER INDEX tmp_onek_unique1 RENAME TO onek_unique1;
Hmm, interesting - lionfish is a slow box (250MHz MIPS) and particularly
low on memory (48MB + 140MB swap), so it is quite likely that the parallel
regression tests are driving it into swap - maybe some sort of subtle
timing issue?
Stefan
Alvaro Herrera <alvherre@commandprompt.com> writes:
Stefan Kaltenbrunner wrote:
FYI: lionfish just managed to hit that problem again:
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=lionfish&dt=2006-07-29%2023:30:06
The test alter_table, which is on the same parallel group as limit (the
failing test), contains these lines:
ALTER INDEX onek_unique1 RENAME TO tmp_onek_unique1;
ALTER INDEX tmp_onek_unique1 RENAME TO onek_unique1;
I bet Alvaro's spotted the problem. ALTER INDEX RENAME doesn't seem to
take any lock on the index's parent table, only on the index itself.
That means that a query on "onek" could be trying to read the pg_class
entries for onek's indexes concurrently with someone trying to commit
a pg_class update to rename an index. If the query manages to visit
the new and old versions of the row in that order, and the commit
happens between, *neither* of the versions would look valid. MVCC
doesn't save us because this is all SnapshotNow.
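Concretely, the suspected interleaving is something like this (a sketch
only; the object names are from the regression schema, and in practice the
window between B's two catalog fetches is tiny):

-- session A (from the alter_table test):
ALTER INDEX onek_unique1 RENAME TO tmp_onek_unique1;
-- A's pg_class update commits in between session B's two catalog fetches

-- session B (the limit test), running concurrently:
SELECT ''::text AS eleven, unique1, unique2, stringu1
  FROM onek WHERE unique1 < 50
  ORDER BY unique1 DESC LIMIT 20 OFFSET 39;
-- B visits the not-yet-committed new row version first and the old version
-- after the commit, so under SnapshotNow neither looks valid:
-- ERROR: could not open relation with OID <OID of onek or one of its indexes>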
Not sure what to do about this. Trying to lock the parent table could
easily be a cure-worse-than-the-disease, because it would create
deadlock risks (we've already locked the index before we could look up
and lock the parent). Thoughts?
The path of least resistance might just be to not run these tests in
parallel. The chance of this issue causing problems in the real world
seems small.
regards, tom lane
On Sun, Jul 30, 2006 at 11:44:44AM -0400, Tom Lane wrote:
Alvaro Herrera <alvherre@commandprompt.com> writes:
Stefan Kaltenbrunner wrote:
FYI: lionfish just managed to hit that problem again:
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=lionfish&dt=2006-07-29%2023:30:06
The test alter_table, which is on the same parallel group as limit (the
failing test), contains these lines:
ALTER INDEX onek_unique1 RENAME TO tmp_onek_unique1;
ALTER INDEX tmp_onek_unique1 RENAME TO onek_unique1;
I bet Alvaro's spotted the problem. ALTER INDEX RENAME doesn't seem to
take any lock on the index's parent table, only on the index itself.
That means that a query on "onek" could be trying to read the pg_class
entries for onek's indexes concurrently with someone trying to commit
a pg_class update to rename an index. If the query manages to visit
the new and old versions of the row in that order, and the commit
happens between, *neither* of the versions would look valid. MVCC
doesn't save us because this is all SnapshotNow.
Not sure what to do about this. Trying to lock the parent table could
easily be a cure-worse-than-the-disease, because it would create
deadlock risks (we've already locked the index before we could look up
and lock the parent). Thoughts?
The path of least resistance might just be to not run these tests in
parallel. The chance of this issue causing problems in the real world
seems small.
It doesn't seem that unusual to want to rename an index on a running
system, and it certainly doesn't seem like the kind of operation that
should pose a problem. So at the very least, we'd need a big fat warning
in the docs about how renaming an index could cause other queries in the
system to fail, and the error message needs to be improved.
--
Jim C. Nasby, Sr. Engineering Consultant jnasby@pervasive.com
Pervasive Software http://pervasive.com work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461
Jim C. Nasby wrote:
On Sun, Jul 30, 2006 at 11:44:44AM -0400, Tom Lane wrote:
Alvaro Herrera <alvherre@commandprompt.com> writes:
Stefan Kaltenbrunner wrote:
FYI: lionfish just managed to hit that problem again:
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=lionfish&dt=2006-07-29%2023:30:06
The test alter_table, which is on the same parallel group as limit (the
failing test), contains these lines:
ALTER INDEX onek_unique1 RENAME TO tmp_onek_unique1;
ALTER INDEX tmp_onek_unique1 RENAME TO onek_unique1;
I bet Alvaro's spotted the problem. ALTER INDEX RENAME doesn't seem to
take any lock on the index's parent table, only on the index itself.
That means that a query on "onek" could be trying to read the pg_class
entries for onek's indexes concurrently with someone trying to commit
a pg_class update to rename an index. If the query manages to visit
the new and old versions of the row in that order, and the commit
happens between, *neither* of the versions would look valid. MVCC
doesn't save us because this is all SnapshotNow.
Not sure what to do about this. Trying to lock the parent table could
easily be a cure-worse-than-the-disease, because it would create
deadlock risks (we've already locked the index before we could look up
and lock the parent). Thoughts?
The path of least resistance might just be to not run these tests in
parallel. The chance of this issue causing problems in the real world
seems small.
It doesn't seem that unusual to want to rename an index on a running
system, and it certainly doesn't seem like the kind of operation that
should pose a problem. So at the very least, we'd need a big fat warning
in the docs about how renaming an index could cause other queries in the
system to fail, and the error message needs to be improved.
It is my understanding that Tom is already tackling the underlying issue
on a much more general basis ...
Stefan
Stefan Kaltenbrunner <stefan@kaltenbrunner.cc> writes:
Jim C. Nasby wrote:
On Sun, Jul 30, 2006 at 11:44:44AM -0400, Tom Lane wrote:
The path of least resistance might just be to not run these tests in
parallel. The chance of this issue causing problems in the real world
seems small.
It doesn't seem that unusual to want to rename an index on a running
system, and it certainly doesn't seem like the kind of operation that
should pose a problem. So at the very least, we'd need a big fat warning
in the docs about how renaming an index could cause other queries in the
system to fail, and the error message needs to be improved.
It is my understanding that Tom is already tackling the underlying issue
on a much more general basis ...
Done in HEAD, but we might still wish to think about changing the
regression tests in the back branches, else we'll probably continue to
see this failure once in a while ...
regards, tom lane
Tom Lane wrote:
Stefan Kaltenbrunner <stefan@kaltenbrunner.cc> writes:
Jim C. Nasby wrote:
On Sun, Jul 30, 2006 at 11:44:44AM -0400, Tom Lane wrote:
The path of least resistance might just be to not run these tests in
parallel. The chance of this issue causing problems in the real world
seems small.
It doesn't seem that unusual to want to rename an index on a running
system, and it certainly doesn't seem like the kind of operation that
should pose a problem. So at the very least, we'd need a big fat warning
in the docs about how renaming an index could cause other queries in the
system to fail, and the error message needs to be improved.
It is my understanding that Tom is already tackling the underlying issue
on a much more general basis ...
Done in HEAD, but we might still wish to think about changing the
regression tests in the back branches, else we'll probably continue to
see this failure once in a while ...
How sure are we that this is the cause of the problem? The feeling I got
was "this is a good guess". If so, do we want to prevent ourselves
getting any further clues in case we're wrong? It's also an interesting
case of a (low likelihood) bug which is not fixable on any stable branch.
cheers
andrew
Andrew Dunstan wrote:
Tom Lane wrote:
Stefan Kaltenbrunner <stefan@kaltenbrunner.cc> writes:
Jim C. Nasby wrote:
On Sun, Jul 30, 2006 at 11:44:44AM -0400, Tom Lane wrote:
The path of least resistance might just be to not run these tests in
parallel. The chance of this issue causing problems in the real world
seems small.
It doesn't seem that unusual to want to rename an index on a running
system, and it certainly doesn't seem like the kind of operation that
should pose a problem. So at the very least, we'd need a big fat warning
in the docs about how renaming an index could cause other queries in the
system to fail, and the error message needs to be improved.
It is my understanding that Tom is already tackling the underlying issue
on a much more general basis ...
Done in HEAD, but we might still wish to think about changing the
regression tests in the back branches, else we'll probably continue to
see this failure once in a while ...
How sure are we that this is the cause of the problem? The feeling I got
was "this is a good guess". If so, do we want to prevent ourselves
getting any further clues in case we're wrong? It's also an interesting
case of a (low likelihood) bug which is not fixable on any stable branch.
Well, I have a lot of trust in Tom - though the main issue is that this
problem seems to be quite hard to trigger.
AFAIK only one box (lionfish) ever managed to hit it, and even there only
2 times out of several hundred builds - I don't suppose we can come up
with a testcase that shows the issue more reliably?
Stefan
Stefan Kaltenbrunner wrote:
Andrew Dunstan wrote:
How sure are we that this is the cause of the problem? The feeling I got
was "this is a good guess". If so, do we want to prevent ourselves
getting any further clues in case we're wrong? It's also an interesting
case of a (low likelihood) bug which is not fixable on any stable branch.
Well, I have a lot of trust in Tom - though the main issue is that this
problem seems to be quite hard to trigger.
AFAIK only one box (lionfish) ever managed to hit it, and even there only
2 times out of several hundred builds - I don't suppose we can come up
with a testcase that shows the issue more reliably?
Maybe we could write a suitable test case using Martijn's concurrent
testing framework. Or with a pair of custom SQL scripts running under
pgbench, and a separate process sending random SIGSTOP/SIGCONT to
backends.
--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Alvaro Herrera <alvherre@commandprompt.com> writes:
Maybe we could write a suitable test case using Martijn's concurrent
testing framework.
The trick is to get process A to commit between the times that process B
looks at the new and old versions of the pg_class row (and it has to
happen to do so in that order ... although that's not a bad bet given
the way btree handles equal keys).
I think the reason we've not tracked this down before is that that's a
pretty small window. You could force the problem by stopping process B
with a debugger breakpoint and then letting A do its thing, but short of
something like that you'll never reproduce it with high probability.
As far as Andrew's question goes: I have no doubt that this race
condition is (or now, was) real and could explain Stefan's failure.
It's not impossible that there's some other problem in there, though.
If so we will still see the problem from time to time on HEAD, and
know that we have more work to do. But I don't think that continuing
to see it on the back branches will teach us anything.
regards, tom lane
Tom Lane wrote:
As far as Andrew's question goes: I have no doubt that this race
condition is (or now, was) real and could explain Stefan's failure.
It's not impossible that there's some other problem in there, though.
If so we will still see the problem from time to time on HEAD, and
know that we have more work to do. But I don't think that continuing
to see it on the back branches will teach us anything.
Fair enough.
cheers
andrew
Tom Lane wrote:
Alvaro Herrera <alvherre@commandprompt.com> writes:
Maybe we could write a suitable test case using Martijn's concurrent
testing framework.
The trick is to get process A to commit between the times that process B
looks at the new and old versions of the pg_class row (and it has to
happen to do so in that order ... although that's not a bad bet given
the way btree handles equal keys).
I think the reason we've not tracked this down before is that that's a
pretty small window. You could force the problem by stopping process B
with a debugger breakpoint and then letting A do its thing, but short of
something like that you'll never reproduce it with high probability.
As far as Andrew's question goes: I have no doubt that this race
condition is (or now, was) real and could explain Stefan's failure.
It's not impossible that there's some other problem in there, though.
If so we will still see the problem from time to time on HEAD, and
know that we have more work to do. But I don't think that continuing
to see it on the back branches will teach us anything.
maybe the following buildfarm report means that we need a new theory :-(
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=sponge&dt=2006-08-16%2021:30:02
Stefan
Stefan Kaltenbrunner <stefan@kaltenbrunner.cc> writes:
maybe the following buildfarm report means that we need a new theory :-(
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=sponge&dt=2006-08-16%2021:30:02
Vacuum's always had a race condition: it makes a list of rel OIDs and
then tries to vacuum each one. It narrows the window for failure by
doing a SearchSysCacheExists test before relation_open, but there's
still a window for failure.
The rel in question is most likely a temp rel of another backend,
because sanity_check is running by itself and so there shouldn't
be anything else happening except perhaps some other session's
post-disconnect cleanup. Maybe we could put the check for "is
this a temp rel of another backend" into the initial list-making
step instead of waiting till after relation_open. That doesn't
seem to solve the general problem though.
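For illustration, the window is roughly this (a sketch; the temp table
name is made up):

-- session A:
CREATE TEMP TABLE scratch (x int);
-- A disconnects; its post-disconnect cleanup drops the temp table

-- session B (the sanity_check test), running concurrently:
VACUUM;
-- VACUUM picked up scratch's OID in its initial list; if the drop commits
-- after the SearchSysCacheExists probe but before relation_open, B fails
-- with something like: ERROR: could not open relation with OID <OID of scratch>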
regards, tom lane
Tom Lane wrote:
Stefan Kaltenbrunner <stefan@kaltenbrunner.cc> writes:
maybe the following buildfarm report means that we need a new theory :-(
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=sponge&dt=2006-08-16%2021:30:02
Vacuum's always had a race condition: it makes a list of rel OIDs and
then tries to vacuum each one. It narrows the window for failure by
doing a SearchSysCacheExists test before relation_open, but there's
still a window for failure.
The rel in question is most likely a temp rel of another backend,
because sanity_check is running by itself and so there shouldn't
be anything else happening except perhaps some other session's
post-disconnect cleanup. Maybe we could put the check for "is
this a temp rel of another backend" into the initial list-making
step instead of waiting till after relation_open. That doesn't
seem to solve the general problem though.
Hmm, yeah - I missed the "VACUUM;" part of the regression diff.
Still, this means we will have to live with (rare) failures once in a
while during that test?
Stefan
Alvaro Herrera <alvherre@commandprompt.com> writes:
Maybe we could write a suitable test case using Martijn's concurrent
testing framework.
The trick is to get process A to commit between the times that process B
looks at the new and old versions of the pg_class row (and it has to
happen to do so in that order ... although that's not a bad bet given
the way btree handles equal keys).
I think the reason we've not tracked this down before is that that's a
pretty small window. You could force the problem by stopping process B
with a debugger breakpoint and then letting A do its thing, but short of
something like that you'll never reproduce it with high probability.
Actually I was already looking into a related issue and have some work here
that may help with this.
I wanted to test the online index build and to do that I figured you needed to
have regression tests like the ones we have now except with multiple database
sessions. So I hacked psql to issue queries asynchronously and allow multiple
database connections. That way you can switch connections while a blocked or
slow transaction is still running and issue queries in other transactions.
I thought it was a proof-of-concept kludge but actually it's worked out quite
well. There were a few conceptual gotchas but I think I have a reasonable
solution for each.
The main issue was that any time you issue an asynchronous query that
you expect to block, you have a race condition in the test. You can't switch
connections and proceed right away or you may actually proceed with the other
connection before the first connection's command is received and acted on by
the backend.
The "right" solution to this would involve altering the backend and the
protocol to provide some form of feedback when an asynchronous query had
reached various states including when it was blocked. You would have to
annotate it with enough information that the client can determine it's
actually blocked on the right thing and not just on some uninteresting
transient lock too.
Instead I just added a command to cause psql to wait for a time. This is
nearly as good since all the regression tests run fairly quickly so if you
wait even a fraction of a second you can be pretty certain the command has
been received and if it were not going to block it would have finished and
printed output already. And it was *much* simpler.
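For instance, a test fragment written against these commands might look
roughly like this (hypothetical usage; "accounts" is a made-up table, and
the command names are the ones listed further down):

begin;
lock table accounts in access exclusive mode;
-- open a second connection (it becomes the active one)
\c&
-- issue the next query without waiting for its result
\cnowait
select count(*) from accounts;   -- blocks behind connection 1's lock
-- wait briefly, long enough to be sure the backend is really blocked
\cwait 0.2
\cswitch 1
commit;
-- switching back prints the pending SELECT's result
\cswitch 2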
Also, I think for interactive use we would want a somewhat more sophisticated
scheduling of output. It would be nice to print out results as they come in
even if we're on another connection. For the regression tests you certainly do
not want that since that would introduce unavoidable non-deterministic race
conditions in your output files all over the place. The way I've coded it now
takes care to print out output only from the "active" database connection and
the test cases need to be written to switch connections at each point they
want to test for possibly incorrect output.
Another issue was that I couldn't come up with a nice set of names for the
commands that didn't conflict with the myriad of one-letter commands already
in psql. So I just prefixed them all with "c" (connection). I figured when I
submitted it I would just let the community hash out the names and take the 2s
it would take to change them.
The test cases are actually super easy to write and read, at least considering
we're talking about concurrent sql sessions here. I think it's far clearer
than trying to handle separate scripts and nearly as clear as Martijn's
proposal from a while back to prepend a connection number on every line.
The commands I've added or altered are:
\c[onnect][&] [DBNAME|- USER|- HOST|- PORT|-]
connect to new database (currently "postgres")
if the optional & is present, open a new connection without closing the existing one
\cswitch n
switch to database connection n
\clist
list database connections
\cdisconnect
close current database connection
use \cswitch or \connect to select another connection
\cnowait
issue next query without waiting for results
\cwait [n]
if any queries are pending wait n seconds for results
Also I added %& to the psql prompt format to indicate the current connection.
So the tests look like, for example:
postgres=# \c&
[2]: Connected to database "postgres"
postgres[2]=# begin;
BEGIN
postgres[2]=# create table foo (a integer);
CREATE TABLE
postgres[2]=# \cswitch 1
[1]: You are now connected to database "postgres"
postgres[1]=# select * from foo;
ERROR: relation "foo" does not exist
postgres[1]=# \cswitch 2
[2]: Connected to database "postgres"
postgres[2]=# commit;
COMMIT
postgres[2]=# \cswitch 1
[1]: You are now connected to database "postgres"
postgres[1]=# select * from foo;
a
---
(0 rows)
postgres[1]=# insert into foo values (1);
INSERT 0 1
postgres[1]=# begin;
BEGIN
postgres[1]=# update foo set a = 2;
UPDATE 1
postgres[1]=# \cswitch 2
[2]: Connected to database "postgres"
postgres[2]=# select * from foo;
a
---
1
(1 row)
postgres[2]=# \cnowait
postgres[2]=# update foo set a = 3;
postgres[2]=# \cwait .1
postgres[2]=# \cswitch 1
[1]: You are now connected to database "postgres"
postgres[1]=# commit;
COMMIT
postgres[1]=# \cswitch 2
[2]: Connected to database "postgres"
UPDATE 1
postgres[2]=# \clist
[1]: You are now connected to database "postgres"
[2]: Connected to database "postgres"
postgres[2]=# \cdisconnect
Disconnecting from database (use \connect to reconnect or \cswitch to select another connection)
!> \cswitch 1
[1]: You are now connected to database "postgres"
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
stark wrote:
Actually I was already looking into a related issue and have some work here
that may help with this.
I wanted to test the online index build and to do that I figured you needed to
have regression tests like the ones we have now except with multiple database
sessions. So I hacked psql to issue queries asynchronously and allow multiple
database connections. That way you can switch connections while a blocked or
slow transaction is still running and issue queries in other transactions.
I thought it was a proof-of-concept kludge but actually it's worked out quite
well. There were a few conceptual gotchas but I think I have a reasonable
solution for each.
I have had an idea for some time that is actually much simpler -- just
launch several backends at once to do different things, and randomly
send SIGSTOP and SIGCONT to each. If they keep doing whatever they are
doing in infinite loops, and you leave it enough time, it's very likely
that you'll get problems if the concurrent locking (or whatever) is not
right.
The nice thing about this is that it's completely random, i.e. you don't
have to introduce individual stop points in the backend (which may
themselves hide some bugs). It acts (or at least, I expect it to act)
just like the kernel gave execution to another process.
The main difference with your approach is that I haven't tried it.
--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
On Thu, Aug 17, 2006 at 04:17:01PM +0100, stark wrote:
I wanted to test the online index build and to do that I figured you needed to
have regression tests like the ones we have now except with multiple database
sessions. So I hacked psql to issue queries asynchronously and allow multiple
database connections. That way you can switch connections while a blocked or
slow transaction is still running and issue queries in other transactions.
Wow, that's damn cool! FWIW, one thing I can think of that would be
useful is the ability to 'background' a long-running query. I see
\cnowait, but having something like & from unix shells would be even
easier. It'd also be great to have the equivalent of ^Z so that if you
got tired of waiting on a query, you could get back to the psql prompt
without killing it.
Also, I think for interactive use we would want a somewhat more sophisticated
scheduling of output. It would be nice to print out results as they come in
even if we're on another connection. For the regression tests you certainly do
not want that since that would introduce unavoidable non-deterministic race
conditions in your output files all over the place. The way I've coded it now
takes care to print out output only from the "active" database connection and
the test cases need to be written to switch connections at each point they
want to test for possibly incorrect output.
Thinking in terms of tcsh & co, there's a number of ways to handle this:
1) Output happens real-time
2) Only output from current connection (what you've done)
3) Only output after user input (ie: code that handles output is only
run after the user has entered a command). I think most shells
operate this way by default.
4) Provide an indication that output has come in from a background
connection, but don't provide the actual output. This could be
combined with #3.
#3 is nice because you won't get interrupted in the middle of entering
some long query. #4 could be useful for automated testing, especially if
the indicator was routed to another output channel, such as STDERR.
Another issue was that I couldn't come up with a nice set of names for the
commands that didn't conflict with the myriad of one-letter commands already
in psql. So I just prefixed them all with "c" (connection). I figured when I
submitted it I would just let the community hash out the names and take the 2s
it would take to change them.
The test cases are actually super easy to write and read, at least considering
we're talking about concurrent sql sessions here. I think it's far clearer
than trying to handle separate scripts and nearly as clear as Martijn's
proposal from a while back to prepend a connection number on every line.
The commands I've added or altered are:
\c[onnect][&] [DBNAME|- USER|- HOST|- PORT|-]
connect to new database (currently "postgres")
if optional & is present open do not close existing connection
\cswitch n
switch to database connection n
I can see \1 - \9 as being a handy shortcut.
\clist
list database connections
\cdisconnect
close current database connection
use \cswitch or \connect to select another connection
Would ^d have the same effect?
--
Jim C. Nasby, Sr. Engineering Consultant jnasby@pervasive.com
Pervasive Software http://pervasive.com work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461
On Thu, Aug 17, 2006 at 03:09:30PM -0400, Alvaro Herrera wrote:
stark wrote:
Actually I was already looking into a related issue and have some work here
that may help with this.
I wanted to test the online index build and to do that I figured you needed to
have regression tests like the ones we have now except with multiple database
sessions. So I hacked psql to issue queries asynchronously and allow multiple
database connections. That way you can switch connections while a blocked or
slow transaction is still running and issue queries in other transactions.
I thought it was a proof-of-concept kludge but actually it's worked out quite
well. There were a few conceptual gotchas but I think I have a reasonable
solution for each.
I have had an idea for some time that is actually much simpler -- just
launch several backends at once to do different things, and randomly
send SIGSTOP and SIGCONT to each. If they keep doing whatever they are
doing in infinite loops, and you leave it enough time, it's very likely
that you'll get problems if the concurrent locking (or whatever) is not
right.
This is probably worth doing as well, since it would simulate what an
IO-bound system would look like.
--
Jim C. Nasby, Sr. Engineering Consultant jnasby@pervasive.com
Pervasive Software http://pervasive.com work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461
"Jim C. Nasby" <jnasby@pervasive.com> writes:
On Thu, Aug 17, 2006 at 03:09:30PM -0400, Alvaro Herrera wrote:
I have had an idea for some time that is actually much simpler -- just
launch several backends at once to do different things, and randomly
send SIGSTOP and SIGCONT to each. If they keep doing whatever they are
doing in infinite loops, and you leave it enough time, it's very likely
that you'll get problems if the concurrent locking (or whatever) is not
right.
This is probably worth doing as well, since it would simulate what an
IO-bound system would look like.
While that might be useful for testing, it'd absolutely suck for
debugging, because of the difficulty of reproducing a problem :-(
regards, tom lane
On Thursday, 17 August 2006 at 17:17, stark wrote:
Instead I just added a command to cause psql to wait for a time.
Do we need the full multiple-connection handling command set, or would
asynchronous query support and a wait command be enough?
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
On Fri, Aug 18, 2006 at 02:46:39PM +0200, Peter Eisentraut wrote:
On Thursday, 17 August 2006 at 17:17, stark wrote:
Instead I just added a command to cause psql to wait for a time.
Do we need the full multiple-connection handling command set, or would
asynchronous query support and a wait command be enough?
I am interested in this too. For example the tool I posted a while ago
supported only this. It controlled multiple connections and only
supported sending async & wait.
It is enough to support fairly deterministic scenarios, for example,
testing if the locks block on each other as documented. However, it
works less well for non-deterministic testing. Yet, a test-suite has to
be deterministic, right?
From a client side, is there any testing method better than async and
wait? I've wondered about a tool that attached to the backend with gdb
and for testing killed the backend when it hit a particular function.
By selecting different functions each time, once you'd covered a lot of
functions and tested recovery, you could have a good idea if the
recovery code works properly.
Has anyone seen a tool like that?
Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
From each according to his ability. To each according to his ability to litigate.
stark wrote:
Alvaro Herrera <alvherre@commandprompt.com> writes:
Maybe we could write a suitable test case using Martijn's concurrent
testing framework.
The trick is to get process A to commit between the times that process B
looks at the new and old versions of the pg_class row (and it has to
happen to do so in that order ... although that's not a bad bet given
the way btree handles equal keys).
I think the reason we've not tracked this down before is that that's a
pretty small window. You could force the problem by stopping process B
with a debugger breakpoint and then letting A do its thing, but short of
something like that you'll never reproduce it with high probability.
Actually I was already looking into a related issue and have some work here
that may help with this.
I wanted to test the online index build and to do that I figured you needed to
have regression tests like the ones we have now except with multiple database
sessions. So I hacked psql to issue queries asynchronously and allow multiple
database connections. That way you can switch connections while a blocked or
slow transaction is still running and issue queries in other transactions.
I thought it was a proof-of-concept kludge but actually it's worked out quite
well. There were a few conceptual gotchas but I think I have a reasonable
solution for each.
[snip]
Can you please put the patch up somewhere so people can see what's involved?
thanks
cheers
andrew
Stefan Kaltenbrunner <stefan@kaltenbrunner.cc> writes:
Tom Lane wrote:
Vacuum's always had a race condition: it makes a list of rel OIDs and
then tries to vacuum each one. It narrows the window for failure by
doing a SearchSysCacheExists test before relation_open, but there's
still a window for failure.
hmm yeah - missed the VACUUM; part of the regression diff.
Still this means we will have to live with (rare) failures once in a
while during that test ?
I thought of what seems a pretty simple solution for this: make VACUUM
lock the relation before doing the SearchSysCacheExists, ie instead
of the existing code
    if (!SearchSysCacheExists(RELOID,
                              ObjectIdGetDatum(relid),
                              0, 0, 0))
        // give up
    lmode = vacstmt->full ? AccessExclusiveLock : ShareUpdateExclusiveLock;
    onerel = relation_open(relid, lmode);
do
    lmode = vacstmt->full ? AccessExclusiveLock : ShareUpdateExclusiveLock;
    LockRelationOid(relid, lmode);
    if (!SearchSysCacheExists(RELOID,
                              ObjectIdGetDatum(relid),
                              0, 0, 0))
        // give up
    onerel = relation_open(relid, NoLock);
Once we're holding lock, we can be sure there's not a DROP TABLE in
progress, so there's no race condition anymore. It's OK to take a
lock on the OID of a relation that no longer exists, AFAICS; we'll
just drop it again immediately (the "give up" path includes transaction
exit, so there's not even any extra code needed).
This wasn't possible before the recent adjustments to the relation
locking protocol, but now it looks trivial ... am I missing anything?
Perhaps it is worth folding this test into a "conditional_relation_open"
function that returns NULL instead of failing if the rel no longer
exists. I think there are potential uses in CLUSTER and perhaps REINDEX
as well as VACUUM.
regards, tom lane
Andrew Dunstan <andrew@dunslane.net> writes:
stark wrote:
So I hacked psql to issue queries asynchronously and allow multiple
database connections. That way you can switch connections while a blocked
or slow transaction is still running and issue queries in other
transactions.
[snip]
Can you please put the patch up somewhere so people can see what's involved?
I'll send it to pgsql-patches.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Andrew Dunstan <andrew@dunslane.net> writes:
stark wrote:
So I hacked psql to issue queries asynchronously and allow multiple
database connections. That way you can switch connections while a blocked
or slow transaction is still running and issue queries in other
transactions.
[snip]
Can you please put the patch up somewhere so people can see what's involved?
As promised:
Attachment: concurrent-psql-patch.2 (application/octet-stream)
Index: src/bin/psql/command.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/bin/psql/command.c,v
retrieving revision 1.171
diff -c -r1.171 command.c
*** src/bin/psql/command.c 18 Jul 2006 17:42:01 -0000 1.171
--- src/bin/psql/command.c 15 Aug 2006 11:37:38 -0000
***************
*** 19,24 ****
--- 19,26 ----
#ifndef WIN32
#include <sys/types.h> /* for umask() */
#include <sys/stat.h> /* for stat() */
+ #include <sys/time.h> /* for gettimeofday */
+ #include <time.h>
#include <fcntl.h> /* open() flags */
#include <unistd.h> /* for geteuid(), getpid(), stat() */
#else
***************
*** 48,59 ****
#include "mb/pg_wchar.h"
/* functions for use in this file */
static backslashResult exec_command(const char *cmd,
PsqlScanState scan_state,
PQExpBuffer query_buf);
static bool do_edit(const char *filename_arg, PQExpBuffer query_buf);
! static bool do_connect(char *dbname, char *user, char *host, char *port);
static bool do_shell(const char *command);
static void SyncVerbosityVariable(void);
--- 50,64 ----
#include "mb/pg_wchar.h"
+ /* XXX for PGASYNC_IDLE */
+ #include "libpq-int.h"
+
/* functions for use in this file */
static backslashResult exec_command(const char *cmd,
PsqlScanState scan_state,
PQExpBuffer query_buf);
static bool do_edit(const char *filename_arg, PQExpBuffer query_buf);
! static bool do_connect(char *dbname, char *user, char *host, char *port, bool new);
static bool do_shell(const char *command);
static void SyncVerbosityVariable(void);
***************
*** 238,262 ****
* \c dbs Connect to "dbs" database on current port of current
* host as current user.
*/
! else if (strcmp(cmd, "c") == 0 || strcmp(cmd, "connect") == 0)
{
! char *opt1,
! *opt2,
! *opt3,
! *opt4;
opt1 = read_connect_arg(scan_state);
opt2 = read_connect_arg(scan_state);
opt3 = read_connect_arg(scan_state);
opt4 = read_connect_arg(scan_state);
! success = do_connect(opt1, opt2, opt3, opt4);
free(opt1);
free(opt2);
free(opt3);
free(opt4);
}
/* \cd */
else if (strcmp(cmd, "cd") == 0)
--- 243,354 ----
* \c dbs Connect to "dbs" database on current port of current
* host as current user.
*/
! else if (strcmp(cmd, "c") == 0 ||
! strcmp(cmd, "c&") == 0 ||
! strcmp(cmd, "connect") == 0 ||
! strcmp(cmd, "connect&") == 0)
{
! char *opt1, *opt2, *opt3, *opt4;
! bool new;
opt1 = read_connect_arg(scan_state);
opt2 = read_connect_arg(scan_state);
opt3 = read_connect_arg(scan_state);
opt4 = read_connect_arg(scan_state);
+ new = cmd[strlen(cmd)-1] == '&';
! success = do_connect(opt1, opt2, opt3, opt4, new);
free(opt1);
free(opt2);
free(opt3);
free(opt4);
}
+ else if (strcmp(cmd, "cswitch") == 0)
+ {
+
+ char *opt1;
+ unsigned slot;
+
+ opt1 = psql_scan_slash_option(scan_state, OT_NORMAL, NULL, true);
+
+ if (!opt1) {
+ psql_error("\\%s: missing required argument\n", cmd);
+ success = false;
+ }
+
+ if (success) {
+ slot = atoi(opt1);
+ if (slot <= 0 || slot > MAX_CONNECTIONS) {
+ psql_error("\\%s: unrecognized connection number, %s\n", cmd, opt1);
+ success = false;
+ }
+ }
+
+ if (success) {
+ if (!cset[slot-1].db) {
+ psql_error("\\%s: connection number %d is not active\n", cmd, slot);
+ success = false;
+ }
+ }
+
+ if (success) {
+ pset.c = &cset[slot-1];
+ printf(_("[%d] You are now connected to database \"%s\"\n"), slot, PQdb(pset.c->db));
+ }
+ }
+
+ /* \cnowait */
+ else if (strcmp(cmd, "cnowait") == 0)
+ {
+ pset.nowait = true;
+ }
+
+ /* \cwait [n] */
+ else if (strcmp(cmd, "cwait") == 0)
+ {
+ char * opt1 = NULL;
+ double seconds = 0.0;
+ unsigned long msecs = 0;
+ TimevalStruct start, now;
+
+
+ opt1 = psql_scan_slash_option(scan_state, OT_NORMAL, NULL, true);
+ if (opt1) {
+ seconds = strtod(opt1, NULL);
+ }
+ if (seconds) {
+ msecs = seconds * 1000;
+ }
+
+ GETTIMEOFDAY(&start);
+ do {
+ if (pset.c->db->asyncStatus != PGASYNC_BUSY)
+ break;
+ if (CheckQueryResults()) {
+ ReadQueryResults();
+ break;
+ }
+ GETTIMEOFDAY(&now);
+ pg_usleep(10000);
+ } while (DIFF_MSEC(&now, &start) < msecs);
+ }
+
+ /* \clist */
+ else if (strcmp(cmd, "clist") == 0)
+ {
+ unsigned i;
+ for (i=0;i<MAX_CONNECTIONS; i++)
+ if (cset[i].db)
+ printf(_("[%d] Connected to database \"%s\"\n"), i+1, PQdb(cset[i].db));
+ }
+
+ else if (strcmp(cmd, "cdisconnect") == 0)
+ {
+ printf(_("Disconnecting from database (use \\connect to reconnect or \\cswitch to select another connection)\n"));
+ PQfinish(pset.c->db);
+ pset.c->db = NULL;
+ }
/* \cd */
else if (strcmp(cmd, "cd") == 0)
***************
*** 468,487 ****
if (!encoding)
{
/* show encoding */
! puts(pg_encoding_to_char(pset.encoding));
}
else
{
/* set encoding */
! if (PQsetClientEncoding(pset.db, encoding) == -1)
psql_error("%s: invalid encoding name or conversion procedure not found\n", encoding);
else
{
/* save encoding info into psql internal data */
! pset.encoding = PQclientEncoding(pset.db);
! pset.popt.topt.encoding = pset.encoding;
SetVariable(pset.vars, "ENCODING",
! pg_encoding_to_char(pset.encoding));
}
free(encoding);
}
--- 560,579 ----
if (!encoding)
{
/* show encoding */
! puts(pg_encoding_to_char(pset.c->encoding));
}
else
{
/* set encoding */
! if (PQsetClientEncoding(pset.c->db, encoding) == -1)
psql_error("%s: invalid encoding name or conversion procedure not found\n", encoding);
else
{
/* save encoding info into psql internal data */
! pset.c->encoding = PQclientEncoding(pset.c->db);
! pset.popt.topt.encoding = pset.c->encoding;
SetVariable(pset.vars, "ENCODING",
! pg_encoding_to_char(pset.c->encoding));
}
free(encoding);
}
***************
*** 666,672 ****
if (opt0)
user = opt0;
else
! user = PQuser(pset.db);
encrypted_password = PQencryptPassword(pw1, user);
--- 758,764 ----
if (opt0)
user = opt0;
else
! user = PQuser(pset.c->db);
encrypted_password = PQencryptPassword(pw1, user);
***************
*** 683,689 ****
initPQExpBuffer(&buf);
printfPQExpBuffer(&buf, "ALTER USER %s PASSWORD ",
fmtId(user));
! appendStringLiteralConn(&buf, encrypted_password, pset.db);
res = PSQLexec(buf.data, false);
termPQExpBuffer(&buf);
if (!res)
--- 775,781 ----
initPQExpBuffer(&buf);
printfPQExpBuffer(&buf, "ALTER USER %s PASSWORD ",
fmtId(user));
! appendStringLiteralConn(&buf, encrypted_password, pset.c->db);
res = PSQLexec(buf.data, false);
termPQExpBuffer(&buf);
if (!res)
***************
*** 1020,1028 ****
* the old connection will be kept.
*/
static bool
! do_connect(char *dbname, char *user, char *host, char *port)
{
! PGconn *o_conn = pset.db,
*n_conn;
char *password = NULL;
--- 1112,1120 ----
* the old connection will be kept.
*/
static bool
! do_connect(char *dbname, char *user, char *host, char *port, bool new)
{
! PGconn *o_conn = pset.c->db,
*n_conn;
char *password = NULL;
***************
*** 1061,1068 ****
dbname, user, password);
/* We can immediately discard the password -- no longer needed */
! if (password)
free(password);
if (PQstatus(n_conn) == CONNECTION_OK)
break;
--- 1153,1163 ----
dbname, user, password);
/* We can immediately discard the password -- no longer needed */
! if (password) {
! memset(password, '\0', strlen(password));
free(password);
+ }
+
if (PQstatus(n_conn) == CONNECTION_OK)
break;
***************
*** 1087,1093 ****
{
psql_error("%s", PQerrorMessage(n_conn));
! /* pset.db is left unmodified */
if (o_conn)
fputs(_("Previous connection kept.\n"), stderr);
}
--- 1182,1188 ----
{
psql_error("%s", PQerrorMessage(n_conn));
! /* pset.c->db is left unmodified */
if (o_conn)
fputs(_("Previous connection kept.\n"), stderr);
}
***************
*** 1097,1103 ****
if (o_conn)
{
PQfinish(o_conn);
! pset.db = NULL;
}
}
--- 1192,1198 ----
if (o_conn)
{
PQfinish(o_conn);
! pset.c->db = NULL;
}
}
***************
*** 1105,1136 ****
return false;
}
! /*
! * Replace the old connection with the new one, and update
! * connection-dependent variables.
! */
PQsetNoticeProcessor(n_conn, NoticeProcessor, NULL);
- pset.db = n_conn;
SyncVariables();
/* Tell the user about the new connection */
if (!QUIET())
{
! printf(_("You are now connected to database \"%s\""), PQdb(pset.db));
! if (param_is_newly_set(PQuser(o_conn), PQuser(pset.db)))
! printf(_(" as user \"%s\""), PQuser(pset.db));
! if (param_is_newly_set(PQhost(o_conn), PQhost(pset.db)))
! printf(_(" on host \"%s\""), PQhost(pset.db));
! if (param_is_newly_set(PQport(o_conn), PQport(pset.db)))
! printf(_(" at port \"%s\""), PQport(pset.db));
printf(".\n");
}
! if (o_conn)
PQfinish(o_conn);
return true;
}
--- 1200,1250 ----
return false;
}
!
! if (new) {
! int i, newslot = 0;
! for (i=MAX_CONNECTIONS-1; i>=0; i--)
! if (cset[i].db)
! break;
! else
! newslot = i+1;
! if (!newslot) {
! psql_error("maximum number of connections already in use\n");
! /* XXX clean up new connection -- maybe move this earlier */
! return false;
! }
!
! pset.c = &cset[newslot-1];
! pset.c->slot = newslot;
! }
!
!
! pset.c->db = n_conn;
!
PQsetNoticeProcessor(n_conn, NoticeProcessor, NULL);
SyncVariables();
/* Tell the user about the new connection */
if (!QUIET())
{
! if (new)
! printf("[%d] ", pset.c->slot);
!
! printf(_("You are now connected to database \"%s\""), PQdb(pset.c->db));
! if (param_is_newly_set(PQuser(o_conn), PQuser(pset.c->db)))
! printf(_(" as user \"%s\""), PQuser(pset.c->db));
! if (param_is_newly_set(PQhost(o_conn), PQhost(pset.c->db)))
! printf(_(" on host \"%s\""), PQhost(pset.c->db));
! if (param_is_newly_set(PQport(o_conn), PQport(pset.c->db)))
! printf(_(" at port \"%s\""), PQport(pset.c->db));
printf(".\n");
}
! if (o_conn && !new)
PQfinish(o_conn);
return true;
}
***************
*** 1146,1159 ****
SyncVariables(void)
{
/* get stuff from connection */
! pset.encoding = PQclientEncoding(pset.db);
! pset.popt.topt.encoding = pset.encoding;
! SetVariable(pset.vars, "DBNAME", PQdb(pset.db));
! SetVariable(pset.vars, "USER", PQuser(pset.db));
! SetVariable(pset.vars, "HOST", PQhost(pset.db));
! SetVariable(pset.vars, "PORT", PQport(pset.db));
! SetVariable(pset.vars, "ENCODING", pg_encoding_to_char(pset.encoding));
/* send stuff to it, too */
SyncVerbosityVariable();
--- 1260,1275 ----
SyncVariables(void)
{
/* get stuff from connection */
! pset.c->encoding = PQclientEncoding(pset.c->db);
! pset.popt.topt.encoding = pset.c->encoding;
! SetVariable(pset.vars, "DBNAME", PQdb(pset.c->db));
! SetVariable(pset.vars, "USER", PQuser(pset.c->db));
! SetVariable(pset.vars, "HOST", PQhost(pset.c->db));
! SetVariable(pset.vars, "PORT", PQport(pset.c->db));
! SetVariable(pset.vars, "ENCODING", pg_encoding_to_char(pset.c->encoding));
! /* Grab the backend server version */
! pset.c->sversion = PQserverVersion(pset.c->db);
/* send stuff to it, too */
SyncVerbosityVariable();
***************
*** 1197,1203 ****
break;
}
! PQsetErrorVerbosity(pset.db, pset.verbosity);
}
--- 1313,1319 ----
break;
}
! PQsetErrorVerbosity(pset.c->db, pset.verbosity);
}
Index: src/bin/psql/common.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/bin/psql/common.c,v
retrieving revision 1.122
diff -c -r1.122 common.c
*** src/bin/psql/common.c 14 Jul 2006 14:52:26 -0000 1.122
--- src/bin/psql/common.c 15 Aug 2006 11:37:38 -0000
***************
*** 30,58 ****
#include "mb/pg_wchar.h"
- /* Workarounds for Windows */
- /* Probably to be moved up the source tree in the future, perhaps to be replaced by
- * more specific checks like configure-style HAVE_GETTIMEOFDAY macros.
- */
- #ifndef WIN32
-
- typedef struct timeval TimevalStruct;
-
- #define GETTIMEOFDAY(T) gettimeofday(T, NULL)
- #define DIFF_MSEC(T, U) \
- ((((int) ((T)->tv_sec - (U)->tv_sec)) * 1000000.0 + \
- ((int) ((T)->tv_usec - (U)->tv_usec))) / 1000.0)
- #else
-
- typedef struct _timeb TimevalStruct;
-
- #define GETTIMEOFDAY(T) _ftime(T)
- #define DIFF_MSEC(T, U) \
- (((T)->time - (U)->time) * 1000.0 + \
- ((T)->millitm - (U)->millitm))
- #endif
-
-
static bool command_no_begin(const char *query);
/*
--- 30,35 ----
***************
*** 340,346 ****
static bool
ConnectionUp(void)
{
! return PQstatus(pset.db) != CONNECTION_BAD;
}
--- 317,323 ----
static bool
ConnectionUp(void)
{
! return PQstatus(pset.c->db) != CONNECTION_BAD;
}
***************
*** 370,382 ****
}
fputs(_("The connection to the server was lost. Attempting reset: "), stderr);
! PQreset(pset.db);
OK = ConnectionUp();
if (!OK)
{
fputs(_("Failed.\n"), stderr);
! PQfinish(pset.db);
! pset.db = NULL;
ResetCancelConn();
UnsyncVariables();
}
--- 347,359 ----
}
fputs(_("The connection to the server was lost. Attempting reset: "), stderr);
! PQreset(pset.c->db);
OK = ConnectionUp();
if (!OK)
{
fputs(_("Failed.\n"), stderr);
! PQfinish(pset.c->db);
! pset.c->db = NULL;
ResetCancelConn();
UnsyncVariables();
}
***************
*** 411,417 ****
if (oldCancelConn != NULL)
PQfreeCancel(oldCancelConn);
! cancelConn = PQgetCancel(pset.db);
#ifdef WIN32
LeaveCriticalSection(&cancelConnLock);
--- 388,394 ----
if (oldCancelConn != NULL)
PQfreeCancel(oldCancelConn);
! cancelConn = PQgetCancel(pset.c->db);
#ifdef WIN32
LeaveCriticalSection(&cancelConnLock);
***************
*** 456,462 ****
* Returns true for valid result, false for error state.
*/
static bool
! AcceptResult(const PGresult *result, const char *query)
{
bool OK = true;
--- 433,439 ----
* Returns true for valid result, false for error state.
*/
static bool
! AcceptResult(const PGresult *result)
{
bool OK = true;
***************
*** 482,488 ****
if (!OK)
{
! const char *error = PQerrorMessage(pset.db);
if (strlen(error))
psql_error("%s", error);
--- 459,465 ----
if (!OK)
{
! const char *error = PQerrorMessage(pset.c->db);
if (strlen(error))
psql_error("%s", error);
***************
*** 517,523 ****
PGresult *res;
int echo_hidden;
! if (!pset.db)
{
psql_error("You are currently not connected to a database.\n");
return NULL;
--- 494,500 ----
PGresult *res;
int echo_hidden;
! if (!pset.c->db)
{
psql_error("You are currently not connected to a database.\n");
return NULL;
***************
*** 545,557 ****
SetCancelConn();
! if (start_xact && PQtransactionStatus(pset.db) == PQTRANS_IDLE &&
!GetVariableBool(pset.vars, "AUTOCOMMIT"))
{
! res = PQexec(pset.db, "BEGIN");
if (PQresultStatus(res) != PGRES_COMMAND_OK)
{
! psql_error("%s", PQerrorMessage(pset.db));
PQclear(res);
ResetCancelConn();
return NULL;
--- 522,534 ----
SetCancelConn();
! if (start_xact && PQtransactionStatus(pset.c->db) == PQTRANS_IDLE &&
!GetVariableBool(pset.vars, "AUTOCOMMIT"))
{
! res = PQexec(pset.c->db, "BEGIN");
if (PQresultStatus(res) != PGRES_COMMAND_OK)
{
! psql_error("%s", PQerrorMessage(pset.c->db));
PQclear(res);
ResetCancelConn();
return NULL;
***************
*** 559,567 ****
PQclear(res);
}
! res = PQexec(pset.db, query);
! if (!AcceptResult(res, query) && res)
{
PQclear(res);
res = NULL;
--- 536,544 ----
PQclear(res);
}
! res = PQexec(pset.c->db, query);
! if (!AcceptResult(res) && res)
{
PQclear(res);
res = NULL;
***************
*** 580,586 ****
{
PGnotify *notify;
! while ((notify = PQnotifies(pset.db)))
{
fprintf(pset.queryFout, _("Asynchronous notification \"%s\" received from server process with PID %d.\n"),
notify->relname, notify->be_pid);
--- 557,563 ----
{
PGnotify *notify;
! while ((notify = PQnotifies(pset.c->db)))
{
fprintf(pset.queryFout, _("Asynchronous notification \"%s\" received from server process with PID %d.\n"),
notify->relname, notify->be_pid);
***************
*** 660,672 ****
case PGRES_COPY_OUT:
SetCancelConn();
! success = handleCopyOut(pset.db, pset.queryFout);
ResetCancelConn();
break;
case PGRES_COPY_IN:
SetCancelConn();
! success = handleCopyIn(pset.db, pset.cur_cmd_source,
PQbinaryTuples(results));
ResetCancelConn();
break;
--- 637,649 ----
case PGRES_COPY_OUT:
SetCancelConn();
! success = handleCopyOut(pset.c->db, pset.queryFout);
ResetCancelConn();
break;
case PGRES_COPY_IN:
SetCancelConn();
! success = handleCopyIn(pset.c->db, pset.cur_cmd_source,
PQbinaryTuples(results));
ResetCancelConn();
break;
***************
*** 761,779 ****
*
* Returns true if the query executed successfully, false otherwise.
*/
bool
SendQuery(const char *query)
{
PGresult *results;
! TimevalStruct before,
! after;
! bool OK,
! on_error_rollback_savepoint = false;
PGTransactionStatusType transaction_status;
- static bool on_error_rollback_warning = false;
const char *rollback_str;
! if (!pset.db)
{
psql_error("You are currently not connected to a database.\n");
return false;
--- 738,753 ----
*
* Returns true if the query executed successfully, false otherwise.
*/
+
bool
SendQuery(const char *query)
{
PGresult *results;
! bool OK;
PGTransactionStatusType transaction_status;
const char *rollback_str;
! if (!pset.c->db)
{
psql_error("You are currently not connected to a database.\n");
return false;
***************
*** 809,830 ****
SetCancelConn();
! transaction_status = PQtransactionStatus(pset.db);
if (transaction_status == PQTRANS_IDLE &&
!GetVariableBool(pset.vars, "AUTOCOMMIT") &&
!command_no_begin(query))
{
! results = PQexec(pset.db, "BEGIN");
if (PQresultStatus(results) != PGRES_COMMAND_OK)
{
! psql_error("%s", PQerrorMessage(pset.db));
PQclear(results);
ResetCancelConn();
return false;
}
PQclear(results);
! transaction_status = PQtransactionStatus(pset.db);
}
if (transaction_status == PQTRANS_INTRANS &&
--- 783,804 ----
SetCancelConn();
! transaction_status = PQtransactionStatus(pset.c->db);
if (transaction_status == PQTRANS_IDLE &&
!GetVariableBool(pset.vars, "AUTOCOMMIT") &&
!command_no_begin(query))
{
! results = PQexec(pset.c->db, "BEGIN");
if (PQresultStatus(results) != PGRES_COMMAND_OK)
{
! psql_error("%s", PQerrorMessage(pset.c->db));
PQclear(results);
ResetCancelConn();
return false;
}
PQclear(results);
! transaction_status = PQtransactionStatus(pset.c->db);
}
if (transaction_status == PQTRANS_INTRANS &&
***************
*** 834,867 ****
(pset.cur_cmd_interactive ||
pg_strcasecmp(rollback_str, "interactive") != 0))
{
! if (on_error_rollback_warning == false && pset.sversion < 80000)
{
fprintf(stderr, _("The server version (%d) does not support savepoints for ON_ERROR_ROLLBACK.\n"),
! pset.sversion);
! on_error_rollback_warning = true;
}
else
{
! results = PQexec(pset.db, "SAVEPOINT pg_psql_temporary_savepoint");
if (PQresultStatus(results) != PGRES_COMMAND_OK)
{
! psql_error("%s", PQerrorMessage(pset.db));
PQclear(results);
ResetCancelConn();
return false;
}
PQclear(results);
! on_error_rollback_savepoint = true;
}
}
if (pset.timing)
! GETTIMEOFDAY(&before);
! results = PQexec(pset.db, query);
/* these operations are included in the timing result: */
! OK = (AcceptResult(results, query) && ProcessCopyResult(results));
if (pset.timing)
GETTIMEOFDAY(&after);
--- 808,879 ----
(pset.cur_cmd_interactive ||
pg_strcasecmp(rollback_str, "interactive") != 0))
{
! if (pset.c->on_error_rollback_warning == false && pset.c->sversion < 80000)
{
fprintf(stderr, _("The server version (%d) does not support savepoints for ON_ERROR_ROLLBACK.\n"),
! pset.c->sversion);
! pset.c->on_error_rollback_warning = true;
}
else
{
! results = PQexec(pset.c->db, "SAVEPOINT pg_psql_temporary_savepoint");
if (PQresultStatus(results) != PGRES_COMMAND_OK)
{
! psql_error("%s", PQerrorMessage(pset.c->db));
PQclear(results);
ResetCancelConn();
return false;
}
PQclear(results);
! pset.c->on_error_rollback_savepoint = true;
}
}
if (pset.timing)
! GETTIMEOFDAY(&pset.c->before);
!
! OK = PQsendQuery(pset.c->db, query);
! if (!OK)
! return OK;
!
! if (pset.nowait)
! /* this is a single-shot option */
! pset.nowait = false;
! else
! OK = ReadQueryResults();
!
! return OK;
! }
!
!
! bool
! CheckQueryResults()
! {
! /* Check if we're actually looking for any results from the db */
! PQconsumeInput(pset.c->db);
! if (!PQisBusy(pset.c->db)) {
! return true;
! } else {
! return false;
! }
!
! }
!
!
!
! bool
! ReadQueryResults()
! {
! PGresult *results;
! TimevalStruct before,
! after;
! bool OK;
! PGTransactionStatusType transaction_status;
! results = PQgetResult(pset.c->db);
/* these operations are included in the timing result: */
! OK = (AcceptResult(results) && ProcessCopyResult(results));
if (pset.timing)
GETTIMEOFDAY(&after);
***************
*** 871,885 ****
OK = PrintQueryResults(results);
/* If we made a temporary savepoint, possibly release/rollback */
! if (on_error_rollback_savepoint)
{
PGresult *svptres;
! transaction_status = PQtransactionStatus(pset.db);
/* We always rollback on an error */
if (transaction_status == PQTRANS_INERROR)
! svptres = PQexec(pset.db, "ROLLBACK TO pg_psql_temporary_savepoint");
/* If they are no longer in a transaction, then do nothing */
else if (transaction_status != PQTRANS_INTRANS)
svptres = NULL;
--- 883,897 ----
OK = PrintQueryResults(results);
/* If we made a temporary savepoint, possibly release/rollback */
! if (pset.c->on_error_rollback_savepoint)
{
PGresult *svptres;
! transaction_status = PQtransactionStatus(pset.c->db);
/* We always rollback on an error */
if (transaction_status == PQTRANS_INERROR)
! svptres = PQexec(pset.c->db, "ROLLBACK TO pg_psql_temporary_savepoint");
/* If they are no longer in a transaction, then do nothing */
else if (transaction_status != PQTRANS_INTRANS)
svptres = NULL;
***************
*** 895,905 ****
strcmp(PQcmdStatus(results), "ROLLBACK") == 0)
svptres = NULL;
else
! svptres = PQexec(pset.db, "RELEASE pg_psql_temporary_savepoint");
}
if (svptres && PQresultStatus(svptres) != PGRES_COMMAND_OK)
{
! psql_error("%s", PQerrorMessage(pset.db));
PQclear(results);
PQclear(svptres);
ResetCancelConn();
--- 907,917 ----
strcmp(PQcmdStatus(results), "ROLLBACK") == 0)
svptres = NULL;
else
! svptres = PQexec(pset.c->db, "RELEASE pg_psql_temporary_savepoint");
}
if (svptres && PQresultStatus(svptres) != PGRES_COMMAND_OK)
{
! psql_error("%s", PQerrorMessage(pset.c->db));
PQclear(results);
PQclear(svptres);
ResetCancelConn();
***************
*** 917,938 ****
/* check for events that may occur during query execution */
! if (pset.encoding != PQclientEncoding(pset.db) &&
! PQclientEncoding(pset.db) >= 0)
{
/* track effects of SET CLIENT_ENCODING */
! pset.encoding = PQclientEncoding(pset.db);
! pset.popt.topt.encoding = pset.encoding;
SetVariable(pset.vars, "ENCODING",
! pg_encoding_to_char(pset.encoding));
}
PrintNotifications();
!
return OK;
}
/*
* Advance the given char pointer over white space and SQL comments.
*/
--- 929,951 ----
/* check for events that may occur during query execution */
! if (pset.c->encoding != PQclientEncoding(pset.c->db) &&
! PQclientEncoding(pset.c->db) >= 0)
{
/* track effects of SET CLIENT_ENCODING */
! pset.c->encoding = PQclientEncoding(pset.c->db);
! pset.popt.topt.encoding = pset.c->encoding;
SetVariable(pset.vars, "ENCODING",
! pg_encoding_to_char(pset.c->encoding));
}
PrintNotifications();
!
return OK;
}
+
/*
* Advance the given char pointer over white space and SQL comments.
*/
***************
*** 943,949 ****
while (*query)
{
! int mblen = PQmblen(query, pset.encoding);
/*
* Note: we assume the encoding is a superset of ASCII, so that for
--- 956,962 ----
while (*query)
{
! int mblen = PQmblen(query, pset.c->encoding);
/*
* Note: we assume the encoding is a superset of ASCII, so that for
***************
*** 980,986 ****
query++;
break;
}
! query += PQmblen(query, pset.encoding);
}
}
else if (cnestlevel > 0)
--- 993,999 ----
query++;
break;
}
! query += PQmblen(query, pset.c->encoding);
}
}
else if (cnestlevel > 0)
***************
*** 1015,1021 ****
*/
wordlen = 0;
while (isalpha((unsigned char) query[wordlen]))
! wordlen += PQmblen(&query[wordlen], pset.encoding);
/*
* Transaction control commands. These should include every keyword that
--- 1028,1034 ----
*/
wordlen = 0;
while (isalpha((unsigned char) query[wordlen]))
! wordlen += PQmblen(&query[wordlen], pset.c->encoding);
/*
* Transaction control commands. These should include every keyword that
***************
*** 1046,1052 ****
wordlen = 0;
while (isalpha((unsigned char) query[wordlen]))
! wordlen += PQmblen(&query[wordlen], pset.encoding);
if (wordlen == 11 && pg_strncasecmp(query, "transaction", 11) == 0)
return true;
--- 1059,1065 ----
wordlen = 0;
while (isalpha((unsigned char) query[wordlen]))
! wordlen += PQmblen(&query[wordlen], pset.c->encoding);
if (wordlen == 11 && pg_strncasecmp(query, "transaction", 11) == 0)
return true;
***************
*** 1081,1087 ****
wordlen = 0;
while (isalpha((unsigned char) query[wordlen]))
! wordlen += PQmblen(&query[wordlen], pset.encoding);
if (wordlen == 8 && pg_strncasecmp(query, "database", 8) == 0)
return true;
--- 1094,1100 ----
wordlen = 0;
while (isalpha((unsigned char) query[wordlen]))
! wordlen += PQmblen(&query[wordlen], pset.c->encoding);
if (wordlen == 8 && pg_strncasecmp(query, "database", 8) == 0)
return true;
***************
*** 1106,1115 ****
{
const char *val;
! if (!pset.db)
return false;
! val = PQparameterStatus(pset.db, "is_superuser");
if (val && strcmp(val, "on") == 0)
return true;
--- 1119,1128 ----
{
const char *val;
! if (!pset.c->db)
return false;
! val = PQparameterStatus(pset.c->db, "is_superuser");
if (val && strcmp(val, "on") == 0)
return true;
***************
*** 1129,1138 ****
{
const char *val;
! if (!pset.db)
return false;
! val = PQparameterStatus(pset.db, "standard_conforming_strings");
if (val && strcmp(val, "on") == 0)
return true;
--- 1142,1151 ----
{
const char *val;
! if (!pset.c->db)
return false;
! val = PQparameterStatus(pset.c->db, "standard_conforming_strings");
if (val && strcmp(val, "on") == 0)
return true;
***************
*** 1153,1166 ****
{
const char *val;
! if (!pset.db)
return NULL;
! val = PQparameterStatus(pset.db, "session_authorization");
if (val)
return val;
else
! return PQuser(pset.db);
}
--- 1166,1179 ----
{
const char *val;
! if (!pset.c->db)
return NULL;
! val = PQparameterStatus(pset.c->db, "session_authorization");
if (val)
return val;
else
! return PQuser(pset.c->db);
}
Index: src/bin/psql/common.h
===================================================================
RCS file: /projects/cvsroot/pgsql/src/bin/psql/common.h,v
retrieving revision 1.50
diff -c -r1.50 common.h
*** src/bin/psql/common.h 14 Jun 2006 16:49:02 -0000 1.50
--- src/bin/psql/common.h 15 Aug 2006 11:37:38 -0000
***************
*** 55,60 ****
--- 55,62 ----
extern PGresult *PSQLexec(const char *query, bool start_xact);
extern bool SendQuery(const char *query);
+ extern bool CheckQueryResults(void);
+ extern bool ReadQueryResults(void);
extern bool is_superuser(void);
extern bool standard_strings(void);
Index: src/bin/psql/copy.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/bin/psql/copy.c,v
retrieving revision 1.66
diff -c -r1.66 copy.c
*** src/bin/psql/copy.c 14 Jun 2006 16:49:02 -0000 1.66
--- src/bin/psql/copy.c 15 Aug 2006 11:37:39 -0000
***************
*** 127,133 ****
result = pg_calloc(1, sizeof(struct copy_options));
token = strtokx(line, whitespace, ".,()", "\"",
! 0, false, false, pset.encoding);
if (!token)
goto error;
--- 127,133 ----
result = pg_calloc(1, sizeof(struct copy_options));
token = strtokx(line, whitespace, ".,()", "\"",
! 0, false, false, pset.c->encoding);
if (!token)
goto error;
***************
*** 135,141 ****
{
result->binary = true;
token = strtokx(NULL, whitespace, ".,()", "\"",
! 0, false, false, pset.encoding);
if (!token)
goto error;
}
--- 135,141 ----
{
result->binary = true;
token = strtokx(NULL, whitespace, ".,()", "\"",
! 0, false, false, pset.c->encoding);
if (!token)
goto error;
}
***************
*** 143,149 ****
result->table = pg_strdup(token);
token = strtokx(NULL, whitespace, ".,()", "\"",
! 0, false, false, pset.encoding);
if (!token)
goto error;
--- 143,149 ----
result->table = pg_strdup(token);
token = strtokx(NULL, whitespace, ".,()", "\"",
! 0, false, false, pset.c->encoding);
if (!token)
goto error;
***************
*** 156,167 ****
/* handle schema . table */
xstrcat(&result->table, token);
token = strtokx(NULL, whitespace, ".,()", "\"",
! 0, false, false, pset.encoding);
if (!token)
goto error;
xstrcat(&result->table, token);
token = strtokx(NULL, whitespace, ".,()", "\"",
! 0, false, false, pset.encoding);
if (!token)
goto error;
}
--- 156,167 ----
/* handle schema . table */
xstrcat(&result->table, token);
token = strtokx(NULL, whitespace, ".,()", "\"",
! 0, false, false, pset.c->encoding);
if (!token)
goto error;
xstrcat(&result->table, token);
token = strtokx(NULL, whitespace, ".,()", "\"",
! 0, false, false, pset.c->encoding);
if (!token)
goto error;
}
***************
*** 173,184 ****
for (;;)
{
token = strtokx(NULL, whitespace, ".,()", "\"",
! 0, false, false, pset.encoding);
if (!token || strchr(".,()", token[0]))
goto error;
xstrcat(&result->column_list, token);
token = strtokx(NULL, whitespace, ".,()", "\"",
! 0, false, false, pset.encoding);
if (!token)
goto error;
xstrcat(&result->column_list, token);
--- 173,184 ----
for (;;)
{
token = strtokx(NULL, whitespace, ".,()", "\"",
! 0, false, false, pset.c->encoding);
if (!token || strchr(".,()", token[0]))
goto error;
xstrcat(&result->column_list, token);
token = strtokx(NULL, whitespace, ".,()", "\"",
! 0, false, false, pset.c->encoding);
if (!token)
goto error;
xstrcat(&result->column_list, token);
***************
*** 188,194 ****
goto error;
}
token = strtokx(NULL, whitespace, ".,()", "\"",
! 0, false, false, pset.encoding);
if (!token)
goto error;
}
--- 188,194 ----
goto error;
}
token = strtokx(NULL, whitespace, ".,()", "\"",
! 0, false, false, pset.c->encoding);
if (!token)
goto error;
}
***************
*** 199,211 ****
if (pg_strcasecmp(token, "with") == 0)
{
token = strtokx(NULL, whitespace, NULL, NULL,
! 0, false, false, pset.encoding);
if (!token || pg_strcasecmp(token, "oids") != 0)
goto error;
result->oids = true;
token = strtokx(NULL, whitespace, NULL, NULL,
! 0, false, false, pset.encoding);
if (!token)
goto error;
}
--- 199,211 ----
if (pg_strcasecmp(token, "with") == 0)
{
token = strtokx(NULL, whitespace, NULL, NULL,
! 0, false, false, pset.c->encoding);
if (!token || pg_strcasecmp(token, "oids") != 0)
goto error;
result->oids = true;
token = strtokx(NULL, whitespace, NULL, NULL,
! 0, false, false, pset.c->encoding);
if (!token)
goto error;
}
***************
*** 218,224 ****
goto error;
token = strtokx(NULL, whitespace, NULL, "'",
! 0, false, true, pset.encoding);
if (!token)
goto error;
--- 218,224 ----
goto error;
token = strtokx(NULL, whitespace, NULL, "'",
! 0, false, true, pset.c->encoding);
if (!token)
goto error;
***************
*** 242,248 ****
}
token = strtokx(NULL, whitespace, NULL, NULL,
! 0, false, false, pset.encoding);
/*
* Allows old COPY syntax for backward compatibility 2002-06-19
--- 242,248 ----
}
token = strtokx(NULL, whitespace, NULL, NULL,
! 0, false, false, pset.c->encoding);
/*
* Allows old COPY syntax for backward compatibility 2002-06-19
***************
*** 250,268 ****
if (token && pg_strcasecmp(token, "using") == 0)
{
token = strtokx(NULL, whitespace, NULL, NULL,
! 0, false, false, pset.encoding);
if (!(token && pg_strcasecmp(token, "delimiters") == 0))
goto error;
}
if (token && pg_strcasecmp(token, "delimiters") == 0)
{
token = strtokx(NULL, whitespace, NULL, "'",
! nonstd_backslash, true, false, pset.encoding);
if (!token)
goto error;
result->delim = pg_strdup(token);
token = strtokx(NULL, whitespace, NULL, NULL,
! 0, false, false, pset.encoding);
}
if (token)
--- 250,268 ----
if (token && pg_strcasecmp(token, "using") == 0)
{
token = strtokx(NULL, whitespace, NULL, NULL,
! 0, false, false, pset.c->encoding);
if (!(token && pg_strcasecmp(token, "delimiters") == 0))
goto error;
}
if (token && pg_strcasecmp(token, "delimiters") == 0)
{
token = strtokx(NULL, whitespace, NULL, "'",
! nonstd_backslash, true, false, pset.c->encoding);
if (!token)
goto error;
result->delim = pg_strdup(token);
token = strtokx(NULL, whitespace, NULL, NULL,
! 0, false, false, pset.c->encoding);
}
if (token)
***************
*** 273,279 ****
*/
if (pg_strcasecmp(token, "with") == 0)
token = strtokx(NULL, whitespace, NULL, NULL,
! 0, false, false, pset.encoding);
while (token)
{
--- 273,279 ----
*/
if (pg_strcasecmp(token, "with") == 0)
token = strtokx(NULL, whitespace, NULL, NULL,
! 0, false, false, pset.c->encoding);
while (token)
{
***************
*** 292,301 ****
else if (pg_strcasecmp(token, "delimiter") == 0)
{
token = strtokx(NULL, whitespace, NULL, "'",
! nonstd_backslash, true, false, pset.encoding);
if (token && pg_strcasecmp(token, "as") == 0)
token = strtokx(NULL, whitespace, NULL, "'",
! nonstd_backslash, true, false, pset.encoding);
if (token)
result->delim = pg_strdup(token);
else
--- 292,301 ----
else if (pg_strcasecmp(token, "delimiter") == 0)
{
token = strtokx(NULL, whitespace, NULL, "'",
! nonstd_backslash, true, false, pset.c->encoding);
if (token && pg_strcasecmp(token, "as") == 0)
token = strtokx(NULL, whitespace, NULL, "'",
! nonstd_backslash, true, false, pset.c->encoding);
if (token)
result->delim = pg_strdup(token);
else
***************
*** 304,313 ****
else if (pg_strcasecmp(token, "null") == 0)
{
token = strtokx(NULL, whitespace, NULL, "'",
! nonstd_backslash, true, false, pset.encoding);
if (token && pg_strcasecmp(token, "as") == 0)
token = strtokx(NULL, whitespace, NULL, "'",
! nonstd_backslash, true, false, pset.encoding);
if (token)
result->null = pg_strdup(token);
else
--- 304,313 ----
else if (pg_strcasecmp(token, "null") == 0)
{
token = strtokx(NULL, whitespace, NULL, "'",
! nonstd_backslash, true, false, pset.c->encoding);
if (token && pg_strcasecmp(token, "as") == 0)
token = strtokx(NULL, whitespace, NULL, "'",
! nonstd_backslash, true, false, pset.c->encoding);
if (token)
result->null = pg_strdup(token);
else
***************
*** 316,325 ****
else if (pg_strcasecmp(token, "quote") == 0)
{
token = strtokx(NULL, whitespace, NULL, "'",
! nonstd_backslash, true, false, pset.encoding);
if (token && pg_strcasecmp(token, "as") == 0)
token = strtokx(NULL, whitespace, NULL, "'",
! nonstd_backslash, true, false, pset.encoding);
if (token)
result->quote = pg_strdup(token);
else
--- 316,325 ----
else if (pg_strcasecmp(token, "quote") == 0)
{
token = strtokx(NULL, whitespace, NULL, "'",
! nonstd_backslash, true, false, pset.c->encoding);
if (token && pg_strcasecmp(token, "as") == 0)
token = strtokx(NULL, whitespace, NULL, "'",
! nonstd_backslash, true, false, pset.c->encoding);
if (token)
result->quote = pg_strdup(token);
else
***************
*** 328,337 ****
else if (pg_strcasecmp(token, "escape") == 0)
{
token = strtokx(NULL, whitespace, NULL, "'",
! nonstd_backslash, true, false, pset.encoding);
if (token && pg_strcasecmp(token, "as") == 0)
token = strtokx(NULL, whitespace, NULL, "'",
! nonstd_backslash, true, false, pset.encoding);
if (token)
result->escape = pg_strdup(token);
else
--- 328,337 ----
else if (pg_strcasecmp(token, "escape") == 0)
{
token = strtokx(NULL, whitespace, NULL, "'",
! nonstd_backslash, true, false, pset.c->encoding);
if (token && pg_strcasecmp(token, "as") == 0)
token = strtokx(NULL, whitespace, NULL, "'",
! nonstd_backslash, true, false, pset.c->encoding);
if (token)
result->escape = pg_strdup(token);
else
***************
*** 340,346 ****
else if (pg_strcasecmp(token, "force") == 0)
{
token = strtokx(NULL, whitespace, ",", "\"",
! 0, false, false, pset.encoding);
if (pg_strcasecmp(token, "quote") == 0)
{
/* handle column list */
--- 340,346 ----
else if (pg_strcasecmp(token, "force") == 0)
{
token = strtokx(NULL, whitespace, ",", "\"",
! 0, false, false, pset.c->encoding);
if (pg_strcasecmp(token, "quote") == 0)
{
/* handle column list */
***************
*** 348,354 ****
for (;;)
{
token = strtokx(NULL, whitespace, ",", "\"",
! 0, false, false, pset.encoding);
if (!token || strchr(",", token[0]))
goto error;
if (!result->force_quote_list)
--- 348,354 ----
for (;;)
{
token = strtokx(NULL, whitespace, ",", "\"",
! 0, false, false, pset.c->encoding);
if (!token || strchr(",", token[0]))
goto error;
if (!result->force_quote_list)
***************
*** 356,362 ****
else
xstrcat(&result->force_quote_list, token);
token = strtokx(NULL, whitespace, ",", "\"",
! 0, false, false, pset.encoding);
if (!token || token[0] != ',')
break;
xstrcat(&result->force_quote_list, token);
--- 356,362 ----
else
xstrcat(&result->force_quote_list, token);
token = strtokx(NULL, whitespace, ",", "\"",
! 0, false, false, pset.c->encoding);
if (!token || token[0] != ',')
break;
xstrcat(&result->force_quote_list, token);
***************
*** 365,371 ****
else if (pg_strcasecmp(token, "not") == 0)
{
token = strtokx(NULL, whitespace, ",", "\"",
! 0, false, false, pset.encoding);
if (pg_strcasecmp(token, "null") != 0)
goto error;
/* handle column list */
--- 365,371 ----
else if (pg_strcasecmp(token, "not") == 0)
{
token = strtokx(NULL, whitespace, ",", "\"",
! 0, false, false, pset.c->encoding);
if (pg_strcasecmp(token, "null") != 0)
goto error;
/* handle column list */
***************
*** 373,379 ****
for (;;)
{
token = strtokx(NULL, whitespace, ",", "\"",
! 0, false, false, pset.encoding);
if (!token || strchr(",", token[0]))
goto error;
if (!result->force_notnull_list)
--- 373,379 ----
for (;;)
{
token = strtokx(NULL, whitespace, ",", "\"",
! 0, false, false, pset.c->encoding);
if (!token || strchr(",", token[0]))
goto error;
if (!result->force_notnull_list)
***************
*** 381,387 ****
else
xstrcat(&result->force_notnull_list, token);
token = strtokx(NULL, whitespace, ",", "\"",
! 0, false, false, pset.encoding);
if (!token || token[0] != ',')
break;
xstrcat(&result->force_notnull_list, token);
--- 381,387 ----
else
xstrcat(&result->force_notnull_list, token);
token = strtokx(NULL, whitespace, ",", "\"",
! 0, false, false, pset.c->encoding);
if (!token || token[0] != ',')
break;
xstrcat(&result->force_notnull_list, token);
***************
*** 395,401 ****
if (fetch_next)
token = strtokx(NULL, whitespace, NULL, NULL,
! 0, false, false, pset.encoding);
}
}
--- 395,401 ----
if (fetch_next)
token = strtokx(NULL, whitespace, NULL, NULL,
! 0, false, false, pset.c->encoding);
}
}
***************
*** 428,434 ****
((option[0] == 'E' || option[0] == 'e') && option[1] == '\''))
appendPQExpBufferStr(query, option);
else
! appendStringLiteralConn(query, option, pset.db);
}
--- 428,434 ----
((option[0] == 'E' || option[0] == 'e') && option[1] == '\''))
appendPQExpBufferStr(query, option);
else
! appendStringLiteralConn(query, option, pset.c->db);
}
***************
*** 551,562 ****
{
case PGRES_COPY_OUT:
SetCancelConn();
! success = handleCopyOut(pset.db, copystream);
ResetCancelConn();
break;
case PGRES_COPY_IN:
SetCancelConn();
! success = handleCopyIn(pset.db, copystream,
PQbinaryTuples(result));
ResetCancelConn();
break;
--- 551,562 ----
{
case PGRES_COPY_OUT:
SetCancelConn();
! success = handleCopyOut(pset.c->db, copystream);
ResetCancelConn();
break;
case PGRES_COPY_IN:
SetCancelConn();
! success = handleCopyIn(pset.c->db, copystream,
PQbinaryTuples(result));
ResetCancelConn();
break;
***************
*** 564,570 ****
case PGRES_FATAL_ERROR:
case PGRES_BAD_RESPONSE:
success = false;
! psql_error("\\copy: %s", PQerrorMessage(pset.db));
break;
default:
success = false;
--- 564,570 ----
case PGRES_FATAL_ERROR:
case PGRES_BAD_RESPONSE:
success = false;
! psql_error("\\copy: %s", PQerrorMessage(pset.c->db));
break;
default:
success = false;
Index: src/bin/psql/describe.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/bin/psql/describe.c,v
retrieving revision 1.141
diff -c -r1.141 describe.c
*** src/bin/psql/describe.c 17 Jul 2006 00:21:23 -0000 1.141
--- src/bin/psql/describe.c 15 Aug 2006 11:37:40 -0000
***************
*** 109,118 ****
PGresult *res;
printQueryOpt myopt = pset.popt;
! if (pset.sversion < 80000)
{
fprintf(stderr, _("The server version (%d) does not support tablespaces.\n"),
! pset.sversion);
return true;
}
--- 109,118 ----
PGresult *res;
printQueryOpt myopt = pset.popt;
! if (pset.c->sversion < 80000)
{
fprintf(stderr, _("The server version (%d) does not support tablespaces.\n"),
! pset.c->sversion);
return true;
}
***************
*** 464,470 ****
}
myopt.nullPrint = NULL;
! printfPQExpBuffer(&buf, _("Access privileges for database \"%s\""), PQdb(pset.db));
myopt.title = buf.data;
printQuery(res, &myopt, pset.queryFout, pset.logfile);
--- 464,470 ----
}
myopt.nullPrint = NULL;
! printfPQExpBuffer(&buf, _("Access privileges for database \"%s\""), PQdb(pset.c->db));
myopt.title = buf.data;
printQuery(res, &myopt, pset.queryFout, pset.logfile);
***************
*** 754,760 ****
"SELECT relhasindex, relkind, relchecks, reltriggers, relhasrules, \n"
"relhasoids %s \n"
"FROM pg_catalog.pg_class WHERE oid = '%s'",
! pset.sversion >= 80000 ? ", reltablespace" : "",
oid);
res = PSQLexec(buf.data, false);
if (!res)
--- 754,760 ----
"SELECT relhasindex, relkind, relchecks, reltriggers, relhasrules, \n"
"relhasoids %s \n"
"FROM pg_catalog.pg_class WHERE oid = '%s'",
! pset.c->sversion >= 80000 ? ", reltablespace" : "",
oid);
res = PSQLexec(buf.data, false);
if (!res)
***************
*** 776,782 ****
tableinfo.hasindex = strcmp(PQgetvalue(res, 0, 0), "t") == 0;
tableinfo.hasrules = strcmp(PQgetvalue(res, 0, 4), "t") == 0;
tableinfo.hasoids = strcmp(PQgetvalue(res, 0, 5), "t") == 0;
! tableinfo.tablespace = (pset.sversion >= 80000) ?
atooid(PQgetvalue(res, 0, 6)) : 0;
PQclear(res);
--- 776,782 ----
tableinfo.hasindex = strcmp(PQgetvalue(res, 0, 0), "t") == 0;
tableinfo.hasrules = strcmp(PQgetvalue(res, 0, 4), "t") == 0;
tableinfo.hasoids = strcmp(PQgetvalue(res, 0, 5), "t") == 0;
! tableinfo.tablespace = (pset.c->sversion >= 80000) ?
atooid(PQgetvalue(res, 0, 6)) : 0;
PQclear(res);
***************
*** 1907,1913 ****
if ((inquotes || force_escape) &&
strchr("|*+?()[]{}.^$\\", *cp))
appendPQExpBufferChar(&namebuf, '\\');
! i = PQmblen(cp, pset.encoding);
while (i-- && *cp)
{
appendPQExpBufferChar(&namebuf, *cp);
--- 1907,1913 ----
if ((inquotes || force_escape) &&
strchr("|*+?()[]{}.^$\\", *cp))
appendPQExpBufferChar(&namebuf, '\\');
! i = PQmblen(cp, pset.c->encoding);
while (i-- && *cp)
{
appendPQExpBufferChar(&namebuf, *cp);
***************
*** 1939,1953 ****
if (altnamevar)
{
appendPQExpBuffer(buf, "(%s ~ ", namevar);
! appendStringLiteralConn(buf, namebuf.data, pset.db);
appendPQExpBuffer(buf, "\n OR %s ~ ", altnamevar);
! appendStringLiteralConn(buf, namebuf.data, pset.db);
appendPQExpBuffer(buf, ")\n");
}
else
{
appendPQExpBuffer(buf, "%s ~ ", namevar);
! appendStringLiteralConn(buf, namebuf.data, pset.db);
appendPQExpBufferChar(buf, '\n');
}
}
--- 1939,1953 ----
if (altnamevar)
{
appendPQExpBuffer(buf, "(%s ~ ", namevar);
! appendStringLiteralConn(buf, namebuf.data, pset.c->db);
appendPQExpBuffer(buf, "\n OR %s ~ ", altnamevar);
! appendStringLiteralConn(buf, namebuf.data, pset.c->db);
appendPQExpBuffer(buf, ")\n");
}
else
{
appendPQExpBuffer(buf, "%s ~ ", namevar);
! appendStringLiteralConn(buf, namebuf.data, pset.c->db);
appendPQExpBufferChar(buf, '\n');
}
}
***************
*** 1970,1976 ****
{
WHEREAND();
appendPQExpBuffer(buf, "%s ~ ", schemavar);
! appendStringLiteralConn(buf, schemabuf.data, pset.db);
appendPQExpBufferChar(buf, '\n');
}
}
--- 1970,1976 ----
{
WHEREAND();
appendPQExpBuffer(buf, "%s ~ ", schemavar);
! appendStringLiteralConn(buf, schemabuf.data, pset.c->db);
appendPQExpBufferChar(buf, '\n');
}
}
Index: src/bin/psql/help.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/bin/psql/help.c,v
retrieving revision 1.114
diff -c -r1.114 help.c
*** src/bin/psql/help.c 15 Jul 2006 03:35:21 -0000 1.114
--- src/bin/psql/help.c 15 Aug 2006 11:37:41 -0000
***************
*** 172,178 ****
fprintf(output, _("General\n"));
fprintf(output, _(" \\c[onnect] [DBNAME|- USER|- HOST|- PORT|-]\n"
" connect to new database (currently \"%s\")\n"),
! PQdb(pset.db));
fprintf(output, _(" \\cd [DIR] change the current working directory\n"));
fprintf(output, _(" \\copyright show PostgreSQL usage and distribution terms\n"));
fprintf(output, _(" \\encoding [ENCODING]\n"
--- 172,178 ----
fprintf(output, _("General\n"));
fprintf(output, _(" \\c[onnect] [DBNAME|- USER|- HOST|- PORT|-]\n"
" connect to new database (currently \"%s\")\n"),
! PQdb(pset.c->db));
fprintf(output, _(" \\cd [DIR] change the current working directory\n"));
fprintf(output, _(" \\copyright show PostgreSQL usage and distribution terms\n"));
fprintf(output, _(" \\encoding [ENCODING]\n"
Index: src/bin/psql/large_obj.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/bin/psql/large_obj.c,v
retrieving revision 1.45
diff -c -r1.45 large_obj.c
*** src/bin/psql/large_obj.c 14 Jul 2006 14:52:26 -0000 1.45
--- src/bin/psql/large_obj.c 15 Aug 2006 11:37:41 -0000
***************
*** 28,40 ****
*own_transaction = false;
! if (!pset.db)
{
psql_error("%s: not connected to a database\n", operation);
return false;
}
! tstatus = PQtransactionStatus(pset.db);
switch (tstatus)
{
--- 28,40 ----
*own_transaction = false;
! if (!pset.c->db)
{
psql_error("%s: not connected to a database\n", operation);
return false;
}
! tstatus = PQtransactionStatus(pset.c->db);
switch (tstatus)
{
***************
*** 118,130 ****
return false;
SetCancelConn();
! status = lo_export(pset.db, atooid(loid_arg), filename_arg);
ResetCancelConn();
/* of course this status is documented nowhere :( */
if (status != 1)
{
! fputs(PQerrorMessage(pset.db), stderr);
return fail_lo_xact("\\lo_export", own_transaction);
}
--- 118,130 ----
return false;
SetCancelConn();
! status = lo_export(pset.c->db, atooid(loid_arg), filename_arg);
ResetCancelConn();
/* of course this status is documented nowhere :( */
if (status != 1)
{
! fputs(PQerrorMessage(pset.c->db), stderr);
return fail_lo_xact("\\lo_export", own_transaction);
}
***************
*** 154,165 ****
return false;
SetCancelConn();
! loid = lo_import(pset.db, filename_arg);
ResetCancelConn();
if (loid == InvalidOid)
{
! fputs(PQerrorMessage(pset.db), stderr);
return fail_lo_xact("\\lo_import", own_transaction);
}
--- 154,165 ----
return false;
SetCancelConn();
! loid = lo_import(pset.c->db, filename_arg);
ResetCancelConn();
if (loid == InvalidOid)
{
! fputs(PQerrorMessage(pset.c->db), stderr);
return fail_lo_xact("\\lo_import", own_transaction);
}
***************
*** 175,181 ****
return fail_lo_xact("\\lo_import", own_transaction);
sprintf(cmdbuf, "COMMENT ON LARGE OBJECT %u IS '", loid);
bufptr = cmdbuf + strlen(cmdbuf);
! bufptr += PQescapeStringConn(pset.db, bufptr, comment_arg, slen, NULL);
strcpy(bufptr, "'");
if (!(res = PSQLexec(cmdbuf, false)))
--- 175,181 ----
return fail_lo_xact("\\lo_import", own_transaction);
sprintf(cmdbuf, "COMMENT ON LARGE OBJECT %u IS '", loid);
bufptr = cmdbuf + strlen(cmdbuf);
! bufptr += PQescapeStringConn(pset.c->db, bufptr, comment_arg, slen, NULL);
strcpy(bufptr, "'");
if (!(res = PSQLexec(cmdbuf, false)))
***************
*** 215,226 ****
return false;
SetCancelConn();
! status = lo_unlink(pset.db, loid);
ResetCancelConn();
if (status == -1)
{
! fputs(PQerrorMessage(pset.db), stderr);
return fail_lo_xact("\\lo_unlink", own_transaction);
}
--- 215,226 ----
return false;
SetCancelConn();
! status = lo_unlink(pset.c->db, loid);
ResetCancelConn();
if (status == -1)
{
! fputs(PQerrorMessage(pset.c->db), stderr);
return fail_lo_xact("\\lo_unlink", own_transaction);
}
Index: src/bin/psql/mainloop.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/bin/psql/mainloop.c,v
retrieving revision 1.81
diff -c -r1.81 mainloop.c
*** src/bin/psql/mainloop.c 14 Jul 2006 14:52:26 -0000 1.81
--- src/bin/psql/mainloop.c 15 Aug 2006 11:37:41 -0000
***************
*** 14,19 ****
--- 14,22 ----
#include "input.h"
#include "settings.h"
+ /* XXX for PGASYNC_IDLE */
+ #include "libpq-int.h"
+
/*
* Main processing loop for reading lines of input
***************
*** 118,123 ****
--- 121,130 ----
fflush(stdout);
+ if (pset.c->db && pset.c->db->asyncStatus != PGASYNC_IDLE && CheckQueryResults()) {
+ ReadQueryResults();
+ }
+
/*
* get another line
*/
***************
*** 208,213 ****
--- 215,226 ----
(scan_result == PSCAN_EOL &&
GetVariableBool(pset.vars, "SINGLELINE")))
{
+
+ if (pset.c->db && pset.c->db->asyncStatus != PGASYNC_IDLE) {
+ /* XXX return value */
+ ReadQueryResults();
+ }
+
/*
* Save query in history. We use history_buf to accumulate
* multi-line queries into a single history entry.
***************
*** 323,329 ****
if (!success && die_on_error)
successResult = EXIT_USER;
/* Have we lost the db connection? */
! else if (!pset.db)
successResult = EXIT_BADCONN;
}
} /* while !endoffile/session */
--- 336,342 ----
if (!success && die_on_error)
successResult = EXIT_USER;
/* Have we lost the db connection? */
! else if (!pset.c->db)
successResult = EXIT_BADCONN;
}
} /* while !endoffile/session */
***************
*** 340,349 ****
/* execute query */
success = SendQuery(query_buf->data);
!
if (!success && die_on_error)
successResult = EXIT_USER;
! else if (pset.db == NULL)
successResult = EXIT_BADCONN;
}
--- 353,365 ----
/* execute query */
success = SendQuery(query_buf->data);
! /* synchronous command execution */
! if (success)
! success = ReadQueryResults();
!
if (!success && die_on_error)
successResult = EXIT_USER;
! else if (pset.c->db == NULL)
successResult = EXIT_BADCONN;
}
Index: src/bin/psql/prompt.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/bin/psql/prompt.c,v
retrieving revision 1.47
diff -c -r1.47 prompt.c
*** src/bin/psql/prompt.c 15 Jul 2006 03:35:21 -0000 1.47
--- src/bin/psql/prompt.c 15 Aug 2006 11:37:43 -0000
***************
*** 46,51 ****
--- 46,53 ----
* %x - transaction status: empty, *, !, ? (unknown or no connection)
* %? - the error code of the last query (not yet implemented)
* %% - a percent sign
+ * %& - a string like [n] indicating which database connection is active
+ only shown if multiple database connections are in use
*
* %[0-9] - the character with the given decimal code
* %0[0-7] - the character with the given octal code
***************
*** 110,137 ****
{
/* Current database */
case '/':
! if (pset.db)
! strncpy(buf, PQdb(pset.db), MAX_PROMPT_SIZE);
break;
case '~':
! if (pset.db)
{
const char *var;
! if (strcmp(PQdb(pset.db), PQuser(pset.db)) == 0 ||
! ((var = getenv("PGDATABASE")) && strcmp(var, PQdb(pset.db)) == 0))
strcpy(buf, "~");
else
! strncpy(buf, PQdb(pset.db), MAX_PROMPT_SIZE);
}
break;
/* DB server hostname (long/short) */
case 'M':
case 'm':
! if (pset.db)
{
! const char *host = PQhost(pset.db);
/* INET socket */
if (host && host[0] && !is_absolute_path(host))
--- 112,139 ----
{
/* Current database */
case '/':
! if (pset.c->db)
! strncpy(buf, PQdb(pset.c->db), MAX_PROMPT_SIZE);
break;
case '~':
! if (pset.c->db)
{
const char *var;
! if (strcmp(PQdb(pset.c->db), PQuser(pset.c->db)) == 0 ||
! ((var = getenv("PGDATABASE")) && strcmp(var, PQdb(pset.c->db)) == 0))
strcpy(buf, "~");
else
! strncpy(buf, PQdb(pset.c->db), MAX_PROMPT_SIZE);
}
break;
/* DB server hostname (long/short) */
case 'M':
case 'm':
! if (pset.c->db)
{
! const char *host = PQhost(pset.c->db);
/* INET socket */
if (host && host[0] && !is_absolute_path(host))
***************
*** 156,167 ****
break;
/* DB server port number */
case '>':
! if (pset.db && PQport(pset.db))
! strncpy(buf, PQport(pset.db), MAX_PROMPT_SIZE);
break;
/* DB server user name */
case 'n':
! if (pset.db)
strncpy(buf, session_username(), MAX_PROMPT_SIZE);
break;
--- 158,169 ----
break;
/* DB server port number */
case '>':
! if (pset.c->db && PQport(pset.c->db))
! strncpy(buf, PQport(pset.c->db), MAX_PROMPT_SIZE);
break;
/* DB server user name */
case 'n':
! if (pset.c->db)
strncpy(buf, session_username(), MAX_PROMPT_SIZE);
break;
***************
*** 180,186 ****
switch (status)
{
case PROMPT_READY:
! if (!pset.db)
buf[0] = '!';
else if (!GetVariableBool(pset.vars, "SINGLELINE"))
buf[0] = '=';
--- 182,188 ----
switch (status)
{
case PROMPT_READY:
! if (!pset.c->db)
buf[0] = '!';
else if (!GetVariableBool(pset.vars, "SINGLELINE"))
buf[0] = '=';
***************
*** 212,221 ****
break;
case 'x':
! if (!pset.db)
buf[0] = '?';
else
! switch (PQtransactionStatus(pset.db))
{
case PQTRANS_IDLE:
buf[0] = '\0';
--- 214,223 ----
break;
case 'x':
! if (!pset.c->db)
buf[0] = '?';
else
! switch (PQtransactionStatus(pset.c->db))
{
case PQTRANS_IDLE:
buf[0] = '\0';
***************
*** 284,289 ****
--- 286,306 ----
break;
}
+ case '&':
+ {
+ unsigned i, ncons=0, slot=0;
+ for (i=0;i<MAX_CONNECTIONS;i++)
+ {
+ ncons += cset[i].db!=NULL;
+ if (&cset[i] == pset.c)
+ slot = i+1;
+ }
+ psql_assert(slot > 0);
+ if (ncons>1 && slot) {
+ sprintf(buf,"[%d]",slot);
+ }
+ break;
+ }
case '[':
case ']':
#if defined(USE_READLINE) && defined(RL_PROMPT_START_IGNORE)
Index: src/bin/psql/psqlscan.l
===================================================================
RCS file: /projects/cvsroot/pgsql/src/bin/psql/psqlscan.l,v
retrieving revision 1.20
diff -c -r1.20 psqlscan.l
*** src/bin/psql/psqlscan.l 31 May 2006 22:11:44 -0000 1.20
--- src/bin/psql/psqlscan.l 15 Aug 2006 11:37:44 -0000
***************
*** 1020,1026 ****
psql_assert(state->buffer_stack == NULL);
/* Do we need to hack the character set encoding? */
! state->encoding = pset.encoding;
state->safe_encoding = PG_VALID_BE_ENCODING(state->encoding);
/* needed for prepare_buffer */
--- 1020,1026 ----
psql_assert(state->buffer_stack == NULL);
/* Do we need to hack the character set encoding? */
! state->encoding = pset.c->encoding;
state->safe_encoding = PG_VALID_BE_ENCODING(state->encoding);
/* needed for prepare_buffer */
***************
*** 1460,1466 ****
{
if (!inquotes && type == OT_SQLID)
*cp = pg_tolower((unsigned char) *cp);
! cp += PQmblen(cp, pset.encoding);
}
}
}
--- 1460,1466 ----
{
if (!inquotes && type == OT_SQLID)
*cp = pg_tolower((unsigned char) *cp);
! cp += PQmblen(cp, pset.c->encoding);
}
}
}
Index: src/bin/psql/settings.h
===================================================================
RCS file: /projects/cvsroot/pgsql/src/bin/psql/settings.h,v
retrieving revision 1.27
diff -c -r1.27 settings.h
*** src/bin/psql/settings.h 5 Mar 2006 15:58:52 -0000 1.27
--- src/bin/psql/settings.h 15 Aug 2006 11:37:44 -0000
***************
*** 22,36 ****
#define DEFAULT_EDITOR "vi"
#endif
! #define DEFAULT_PROMPT1 "%/%R%# "
#define DEFAULT_PROMPT2 "%/%R%# "
#define DEFAULT_PROMPT3 ">> "
typedef struct _psqlSettings
{
! PGconn *db; /* connection to backend */
! int encoding;
FILE *queryFout; /* where to send the query results */
bool queryFoutPipe; /* queryFout is from a popen() */
--- 22,70 ----
#define DEFAULT_EDITOR "vi"
#endif
! #define DEFAULT_PROMPT1 "%/%&%R%# "
#define DEFAULT_PROMPT2 "%/%R%# "
#define DEFAULT_PROMPT3 ">> "
+ /* Workarounds for Windows */
+ /* Probably to be moved up the source tree in the future, perhaps to be replaced by
+ * more specific checks like configure-style HAVE_GETTIMEOFDAY macros.
+ */
+ #ifndef WIN32
+
+ typedef struct timeval TimevalStruct;
+
+ #define GETTIMEOFDAY(T) gettimeofday(T, NULL)
+ #define DIFF_MSEC(T, U) \
+ ((((int) ((T)->tv_sec - (U)->tv_sec)) * 1000000.0 + \
+ ((int) ((T)->tv_usec - (U)->tv_usec))) / 1000.0)
+ #else
+
+ typedef struct _timeb TimevalStruct;
+
+ #define GETTIMEOFDAY(T) _ftime(T)
+ #define DIFF_MSEC(T, U) \
+ (((T)->time - (U)->time) * 1000.0 + \
+ ((T)->millitm - (U)->millitm))
+ #endif
+
+
+ typedef struct _psqlConnection
+ {
+ PGconn *db;
+ int slot;
+ int encoding;
+ int sversion;
+ bool on_error_rollback_warning;
+ bool on_error_rollback_savepoint;
+ TimevalStruct before, after;
+ } PsqlConnection;
+
typedef struct _psqlSettings
{
! PsqlConnection *c; /* Current database connection */
!
FILE *queryFout; /* where to send the query results */
bool queryFoutPipe; /* queryFout is from a popen() */
***************
*** 45,64 ****
FILE *cur_cmd_source; /* describe the status of the current main
* loop */
bool cur_cmd_interactive;
- int sversion; /* backend server version */
const char *progname; /* in case you renamed psql */
char *inputfile; /* for error reporting */
char *dirname; /* current directory for \s display */
unsigned lineno; /* also for error reporting */
bool timing; /* enable timing of all queries */
PGVerbosity verbosity; /* current error verbosity level */
FILE *logfile; /* session log file handle */
} PsqlSettings;
! extern PsqlSettings pset;
#define QUIET() (GetVariableBool(pset.vars, "QUIET"))
--- 79,102 ----
FILE *cur_cmd_source; /* describe the status of the current main
* loop */
bool cur_cmd_interactive;
const char *progname; /* in case you renamed psql */
char *inputfile; /* for error reporting */
char *dirname; /* current directory for \s display */
unsigned lineno; /* also for error reporting */
+ bool nowait; /* issue query asynchronously */
+
bool timing; /* enable timing of all queries */
PGVerbosity verbosity; /* current error verbosity level */
FILE *logfile; /* session log file handle */
} PsqlSettings;
! #define MAX_CONNECTIONS 10
!
! extern PsqlSettings pset;
! extern PsqlConnection cset[MAX_CONNECTIONS];
#define QUIET() (GetVariableBool(pset.vars, "QUIET"))
Index: src/bin/psql/startup.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/bin/psql/startup.c,v
retrieving revision 1.135
diff -c -r1.135 startup.c
*** src/bin/psql/startup.c 14 Jul 2006 14:52:26 -0000 1.135
--- src/bin/psql/startup.c 15 Aug 2006 11:37:44 -0000
***************
*** 42,47 ****
--- 42,48 ----
* Global psql options
*/
PsqlSettings pset;
+ PsqlConnection cset[MAX_CONNECTIONS];
#ifndef WIN32
#define SYSPSQLRC "psqlrc"
***************
*** 137,143 ****
setDecimalLocale();
pset.cur_cmd_source = stdin;
pset.cur_cmd_interactive = false;
! pset.encoding = PQenv2encoding();
pset.vars = CreateVariableSpace();
if (!pset.vars)
--- 138,147 ----
setDecimalLocale();
pset.cur_cmd_source = stdin;
pset.cur_cmd_interactive = false;
!
! pset.c = &cset[0];
!
! pset.c->encoding = PQenv2encoding();
pset.vars = CreateVariableSpace();
if (!pset.vars)
***************
*** 164,169 ****
--- 168,175 ----
pset.notty = (!isatty(fileno(stdin)) || !isatty(fileno(stdout)));
+ pset.nowait = false;
+
/* This is obsolete and should be removed sometime. */
#ifdef PSQL_ALWAYS_GET_PASSWORDS
pset.getPassword = true;
***************
*** 207,222 ****
do
{
need_pass = false;
! pset.db = PQsetdbLogin(options.host, options.port, NULL, NULL,
options.action == ACT_LIST_DB && options.dbname == NULL ?
"postgres" : options.dbname,
username, password);
! if (PQstatus(pset.db) == CONNECTION_BAD &&
! strcmp(PQerrorMessage(pset.db), PQnoPasswordSupplied) == 0 &&
!feof(stdin))
{
! PQfinish(pset.db);
need_pass = true;
free(password);
password = NULL;
--- 213,228 ----
do
{
need_pass = false;
! pset.c->db = PQsetdbLogin(options.host, options.port, NULL, NULL,
options.action == ACT_LIST_DB && options.dbname == NULL ?
"postgres" : options.dbname,
username, password);
! if (PQstatus(pset.c->db) == CONNECTION_BAD &&
! strcmp(PQerrorMessage(pset.c->db), PQnoPasswordSupplied) == 0 &&
!feof(stdin))
{
! PQfinish(pset.c->db);
need_pass = true;
free(password);
password = NULL;
***************
*** 228,252 ****
free(password);
free(password_prompt);
! if (PQstatus(pset.db) == CONNECTION_BAD)
{
! fprintf(stderr, "%s: %s", pset.progname, PQerrorMessage(pset.db));
! PQfinish(pset.db);
exit(EXIT_BADCONN);
}
! PQsetNoticeProcessor(pset.db, NoticeProcessor, NULL);
SyncVariables();
- /* Grab the backend server version */
- pset.sversion = PQserverVersion(pset.db);
-
if (options.action == ACT_LIST_DB)
{
int success = listAllDbs(false);
! PQfinish(pset.db);
exit(success ? EXIT_SUCCESS : EXIT_FAILURE);
}
--- 234,255 ----
free(password);
free(password_prompt);
! if (PQstatus(pset.c->db) == CONNECTION_BAD)
{
! fprintf(stderr, "%s: %s", pset.progname, PQerrorMessage(pset.c->db));
! PQfinish(pset.c->db);
exit(EXIT_BADCONN);
}
! PQsetNoticeProcessor(pset.c->db, NoticeProcessor, NULL);
SyncVariables();
if (options.action == ACT_LIST_DB)
{
int success = listAllDbs(false);
! PQfinish(pset.c->db);
exit(success ? EXIT_SUCCESS : EXIT_FAILURE);
}
***************
*** 318,337 ****
{
int client_ver = parse_version(PG_VERSION);
! if (pset.sversion != client_ver)
{
const char *server_version;
char server_ver_str[16];
/* Try to get full text form, might include "devel" etc */
! server_version = PQparameterStatus(pset.db, "server_version");
if (!server_version)
{
snprintf(server_ver_str, sizeof(server_ver_str),
"%d.%d.%d",
! pset.sversion / 10000,
! (pset.sversion / 100) % 100,
! pset.sversion % 100);
server_version = server_ver_str;
}
--- 321,340 ----
{
int client_ver = parse_version(PG_VERSION);
! if (pset.c->sversion != client_ver)
{
const char *server_version;
char server_ver_str[16];
/* Try to get full text form, might include "devel" etc */
! server_version = PQparameterStatus(pset.c->db, "server_version");
if (!server_version)
{
snprintf(server_ver_str, sizeof(server_ver_str),
"%d.%d.%d",
! pset.c->sversion / 10000,
! (pset.c->sversion / 100) % 100,
! pset.c->sversion % 100);
server_version = server_ver_str;
}
***************
*** 348,358 ****
" \\g or terminate with semicolon to execute query\n"
" \\q to quit\n\n"));
! if (pset.sversion / 100 != client_ver / 100)
printf(_("WARNING: You are connected to a server with major version %d.%d,\n"
"but your %s client is major version %d.%d. Some backslash commands,\n"
"such as \\d, might not work properly.\n\n"),
! pset.sversion / 10000, (pset.sversion / 100) % 100,
pset.progname,
client_ver / 10000, (client_ver / 100) % 100);
--- 351,361 ----
" \\g or terminate with semicolon to execute query\n"
" \\q to quit\n\n"));
! if (pset.c->sversion / 100 != client_ver / 100)
printf(_("WARNING: You are connected to a server with major version %d.%d,\n"
"but your %s client is major version %d.%d. Some backslash commands,\n"
"such as \\d, might not work properly.\n\n"),
! pset.c->sversion / 10000, (pset.c->sversion / 100) % 100,
pset.progname,
client_ver / 10000, (client_ver / 100) % 100);
***************
*** 375,381 ****
/* clean up */
if (pset.logfile)
fclose(pset.logfile);
! PQfinish(pset.db);
setQFout(NULL);
return successResult;
--- 378,384 ----
/* clean up */
if (pset.logfile)
fclose(pset.logfile);
! PQfinish(pset.c->db);
setQFout(NULL);
return successResult;
***************
*** 732,738 ****
int sslbits = -1;
SSL *ssl;
! ssl = PQgetssl(pset.db);
if (!ssl)
return; /* no SSL */
--- 735,741 ----
int sslbits = -1;
SSL *ssl;
! ssl = PQgetssl(pset.c->db);
if (!ssl)
return; /* no SSL */
Index: src/bin/psql/tab-complete.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/bin/psql/tab-complete.c,v
retrieving revision 1.154
diff -c -r1.154 tab-complete.c
*** src/bin/psql/tab-complete.c 14 Jul 2006 14:52:27 -0000 1.154
--- src/bin/psql/tab-complete.c 15 Aug 2006 11:37:45 -0000
***************
*** 2310,2319 ****
{
PGresult *result;
! if (query == NULL || !pset.db || PQstatus(pset.db) != CONNECTION_OK)
return NULL;
! result = PQexec(pset.db, query);
if (result != NULL && PQresultStatus(result) != PGRES_TUPLES_OK)
{
--- 2310,2319 ----
{
PGresult *result;
! if (query == NULL || !pset.c->db || PQstatus(pset.c->db) != CONNECTION_OK)
return NULL;
! result = PQexec(pset.c->db, query);
if (result != NULL && PQresultStatus(result) != PGRES_TUPLES_OK)
{
Is this something people are interested in? I am thinking no based on
the lack of requests and the size of the patch.
---------------------------------------------------------------------------
Gregory Stark wrote:
Andrew Dunstan <andrew@dunslane.net> writes:
stark wrote:
So I hacked psql to issue queries asynchronously and allow multiple
database connections. That way you can switch connections while a blocked
or slow transaction is still running and issue queries in other
transactions.
[snip]
Can you please put the patch up somewhere so people can see what's involved?
As promised:
[ Attachment, skipping... ]
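For anyone who can't conveniently read the attachment, the asynchronous-issue
part of the idea rests on libpq's non-blocking query API. Here is a minimal,
self-contained sketch of that pattern -- not code from the patch itself; it
assumes a database reachable through the default/environment connection
parameters:

/*
 * Sketch only: issue a query without blocking, using libpq's async API,
 * so the client keeps control while the query runs.
 */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("");     /* default host/port/db from environment */
    PGresult   *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Send the query and return to the caller immediately. */
    if (!PQsendQuery(conn, "SELECT pg_sleep(5)"))
    {
        fprintf(stderr, "send failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /*
     * While the query runs, a client like psql could service other
     * connections.  Here we just poll until the result is ready; a real
     * client would select() on PQsocket(conn) instead of spinning.
     */
    while (PQisBusy(conn))
    {
        if (!PQconsumeInput(conn))          /* read whatever has arrived */
        {
            fprintf(stderr, "lost connection: %s", PQerrorMessage(conn));
            break;
        }
    }

    while ((res = PQgetResult(conn)) != NULL)
    {
        if (PQresultStatus(res) != PGRES_TUPLES_OK)
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}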
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
--
Bruce Momjian bruce@momjian.us
EnterpriseDB http://www.enterprisedb.com
+ If your life is a hard drive, Christ can be your backup. +
Bruce Momjian <bruce@momjian.us> writes:
Is this something people are interested in? I am thinking no based on
the lack of requests and the size of the patch.
Lack of requests? I was actually surprised by how enthusiastically people
reacted to it.
However, I don't think the patch as-is is ready to be committed. Aside from
lacking documentation and regression tests, it was only intended to be a
proof of concept, useful for the specific tests I was doing.
I did try to do a decent job: I handled \timing and server-tracked variables
like encoding. But I need to go back through the code and make sure I haven't
missed any other details like that.
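To make that concrete, the kind of per-connection bookkeeping involved can be
sketched roughly as below. The names here (ExampleConnection, the sync
function) are illustrative only and not taken from the patch, which keeps its
own PsqlConnection/cset[] arrangement as shown in the diff above:

/*
 * Sketch only: keep the values that depend on the server per connection,
 * and refresh them whenever the current connection changes.
 */
#include <libpq-fe.h>

#define MAX_CONNECTIONS 10

typedef struct
{
    PGconn     *db;         /* libpq connection handle */
    int         encoding;   /* client encoding for this connection */
    int         sversion;   /* server version reported by libpq */
} ExampleConnection;

static ExampleConnection example_cset[MAX_CONNECTIONS];
static ExampleConnection *example_cur;

/* Make slot n current and refresh the server-tracked values. */
static void
example_switch_connection(int n)
{
    example_cur = &example_cset[n];

    if (example_cur->db)
    {
        example_cur->encoding = PQclientEncoding(example_cur->db);
        example_cur->sversion = PQserverVersion(example_cur->db);
    }
}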
It would be nice to get feedback from other developers who have looked at the
patch, confirming that there aren't more fundamental problems with the
approach and how it uses libpq, before I go through the effort of cleaning up
the details.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
On Sun, Sep 03, 2006 at 05:09:44PM -0400, Gregory Stark wrote:
Bruce Momjian <bruce@momjian.us> writes:
Is this something people are interested in? I am thinking no
based on the lack of requests and the size of the patch.
Lack of requests? I was actually surprised by how enthusiastically
people reacted to it.
I think it could form the basis of some concurrency testing, something
we'll need more and more as time goes on. :)
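As a rough illustration of what such a concurrency test might exercise -- not
code from the patch, and it assumes a table "t" with an integer column "id"
already exists -- two libpq sessions can drive a simple lock-wait scenario:

/*
 * Sketch only: session A holds a row lock while session B's UPDATE on the
 * same row blocks; committing in A releases the lock and unblocks B.
 */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *a;
    PGconn     *b;
    PGresult   *res;

    a = PQconnectdb("");
    b = PQconnectdb("");

    if (PQstatus(a) != CONNECTION_OK || PQstatus(b) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed\n");
        return 1;
    }

    /* Session A locks a row and leaves its transaction open. */
    PQclear(PQexec(a, "BEGIN"));
    PQclear(PQexec(a, "SELECT id FROM t WHERE id = 1 FOR UPDATE"));

    /*
     * Session B's UPDATE now waits on that lock.  Sending it asynchronously
     * lets this single-threaded driver keep control of both sessions.
     */
    PQsendQuery(b, "UPDATE t SET id = id WHERE id = 1");

    /* Committing in session A releases the lock and unblocks B. */
    PQclear(PQexec(a, "COMMIT"));

    while ((res = PQgetResult(b)) != NULL)
        PQclear(res);

    PQfinish(a);
    PQfinish(b);
    return 0;
}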
Gregory,
Would you be up for getting this updated in the 8.3 cycle?
Cheers,
D
--
David Fetter <david@fetter.org> http://fetter.org/
phone: +1 415 235 3778 AIM: dfetter666
Skype: davidfetter
Remember to vote!