MVCC works in serialized mode!
CVS is just updated...
Please try concurrent writes/reads...
It'll take some time to implement READ COMMITTED mode...
Note:
1. Vacuum is not updated yet and so will remove deleted tuples
though some transactions would still like to see them in selects...
2. LOCK TABLE modes are not yet implemented...
3. As well as SELECT ... FOR UPDATE: I need advice here!
Do Oracle, Informix etc support SELECT FOR UPDATE if
there are aggregates, group by, distinct in SELECT ???
Is SELECT FOR UPDATE allowed in subqueries ???
4. Rtree & gist indices use _relation level_ locking -
no wish, no time to change them...
Executor locks them in exclusive mode (on relation level)
for update/insert and so prevents deadlocks...
These locks are released after the _query_ is done - not so bad
for concurrency...
5. Hash & btree indices use _page level_ locking and so
are "open" to deadlocks -:))
(In the case of concurrent inserts/updates when
indices are used for scans.)
I hope to change btree - it's possible, -
but not hashes...
6. I'm not happy with current deadlock detection code!
It seems that backend does DeadLockCheck each time
when timer expired - shouldn't this be done _once_,
before backend is going to sleep ?!
7. As I read in Sybase documentation the default
standard transaction mode is _chained_ - all
queries before explicit COMMIT/ABORT are run in _single
transaction_... But we have to use BEGIN/END to get it!
This was not so bad for a system with relation-level locks,
but now only writes to the same row block one another, and so
chained mode seems more appropriate...
Shouldn't we change the default transaction mode now?
And use an option/SET TRANSACTION MODE to switch to
un-chained mode if one likes it?
Vadim
6. I'm not happy with current deadlock detection code!
It seems that backend does DeadLockCheck each time
when timer expired - shouldn't this be done _once_,
before backend is going to sleep ?!
Not sure. Now that I think of it, it makes sense that if I go to sleep,
a deadlock is not suddenly going to appear while I am asleep. If a new
process causes a deadlock, the new process that causes it will see it.
I did not check when I went to sleep because I thought it may be too
cpu-intensive to do checking on every sleep, but now that I remember it,
it may be very trivial in cpu time to do the check on every sleep.
I recommend changing it to do it just before every sleep. Let me know
if you want me to make the change.
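The "check once, just before the backend sleeps" idea above can be sketched as follows. This is only an illustrative C sketch, not the actual backend code; `on_lock_timeout` is a hypothetical name and `DeadLockCheck` here is a trivial stand-in for the real routine.

```c
/* Illustrative sketch of "run DeadLockCheck once per lock wait,
 * on the first timer expiration, instead of on every expiration".
 * Not the actual PostgreSQL backend code; names are hypothetical. */
#include <stdbool.h>
#include <stdio.h>

static bool deadlock_checked = false;  /* reset at the start of each lock wait */

/* stand-in for the backend's real DeadLockCheck() */
static bool DeadLockCheck(void)
{
    return false;  /* pretend no wait-for cycle was found */
}

/* called from the SIGALRM path each time the lock-wait timer fires */
static void on_lock_timeout(void)
{
    if (deadlock_checked)
        return;                 /* already checked this wait: keep sleeping */
    deadlock_checked = true;    /* do the (possibly expensive) check once */
    if (DeadLockCheck())
        printf("deadlock detected - abort this transaction\n");
}
```

The point of the flag is that later timer expirations during the same lock wait fall through without re-running the check.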
--
Bruce Momjian | http://www.op.net/~candle
maillist@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026
Bruce Momjian wrote:
6. I'm not happy with current deadlock detection code!
It seems that backend does DeadLockCheck each time
when timer expired - shouldn't this be done _once_,
before backend is going to sleep ?!

Not sure. Now that I think of it, it makes sense that if I go to sleep,
a deadlock is not suddenly going to appear while I am asleep. If a new
process causes a deadlock, the new process that causes it will see it.

I did not check when I went to sleep because I thought it may be too
cpu-intensive to do checking on every sleep, but now that I remember it,
it may be very trivial in cpu time to do the check on every sleep.

I recommend changing it to do it just before every sleep. Let me know
if you want me to make the change.
Maybe we could just reduce the first sleep time (60 sec is too long),
do DeadLockCheck _only once_, after the first SIGALRM, and after that
just sleep forever?
Why do DeadLockCheck many times?
Let's think more...
And please consider when lock conflicts occur:
1. One process tries to update a row being updated by another.
2. When reading/writing hashes (I hope to change btrees to
use the new buffer context lock code, as the heap access methods
do - this is short-term locking without deadlocks and so
without using the lock manager).
Vadim
Bruce Momjian wrote:
6. I'm not happy with current deadlock detection code!
It seems that backend does DeadLockCheck each time
when timer expired - shouldn't this be done _once_,
before backend is going to sleep ?!

Not sure. Now that I think of it, it makes sense that if I go to sleep,
a deadlock is not suddenly going to appear while I am asleep. If a new
process causes a deadlock, the new process that causes it will see it.

I did not check when I went to sleep because I thought it may be too
cpu-intensive to do checking on every sleep, but now that I remember it,
it may be very trivial in cpu time to do the check on every sleep.

I recommend changing it to do it just before every sleep. Let me know
if you want me to make the change.

Maybe we could just reduce the first sleep time (60 sec is too long),

Yes, much too long. 5 or 15 seconds sounds good.

do DeadLockCheck _only once_, after the first SIGALRM, and after that
just sleep forever?
Why do DeadLockCheck many times?

Yes, no need to do many times.

OK. Should I make the change?

Let's think more...
And please consider when lock conflicts occur:
1. One process tries to update a row being updated by another.
2. When reading/writing hashes (I hope to change btrees to
use the new buffer context lock code, as the heap access methods
do - this is short-term locking without deadlocks and so
without using the lock manager).
Oh. I would think even a 1 second deadlock detection would work well.
Most short-term locks don't last that long.
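The "short-term locking without the lock manager" Vadim mentions can be sketched like this. A pthread mutex is only an analogy here (the real backend uses shared-memory spinlocks across processes), and the `Buffer`/`read_page` names are illustrative, not actual backend symbols.

```c
/* Analogy, in pthreads, of a per-buffer "context lock": held only
 * across a single page access and always released before returning.
 * Because it never appears in the lock manager's wait-for graph,
 * it cannot take part in a lock-manager deadlock cycle.
 * Illustrative only - not the actual PostgreSQL code. */
#include <pthread.h>

typedef struct Buffer {
    pthread_mutex_t cntx_lock;  /* hypothetical per-buffer context lock */
    int ntuples;                /* toy stand-in for page contents */
} Buffer;

static int read_page(Buffer *buf)
{
    int n;
    pthread_mutex_lock(&buf->cntx_lock);    /* short-term: no lock table entry */
    n = buf->ntuples;                       /* ... copy tuples out of the page ... */
    pthread_mutex_unlock(&buf->cntx_lock);  /* released before we return */
    return n;
}
```

Since the lock is never held while waiting on anything else, even a one-second deadlock timeout would rarely fire for it.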
Bruce Momjian wrote:
6. I'm not happy with current deadlock detection code!
It seems that backend does DeadLockCheck each time
when timer expired - shouldn't this be done _once_,
before backend is going to sleep ?!

Not sure. Now that I think of it, it makes sense that if I go to sleep,
a deadlock is not suddenly going to appear while I am asleep. If a new
process causes a deadlock, the new process that causes it will see it.

I did not check when I went to sleep because I thought it may be too
cpu-intensive to do checking on every sleep, but now that I remember it,
it may be very trivial in cpu time to do the check on every sleep.

I recommend changing it to do it just before every sleep. Let me know
if you want me to make the change.

Maybe we could just reduce the first sleep time (60 sec is too long),
do DeadLockCheck _only once_, after the first SIGALRM, and after that
just sleep forever?
Why do DeadLockCheck many times?

Let's think more...
And please consider when lock conflicts occur:
1. One process tries to update a row being updated by another.
2. When reading/writing hashes (I hope to change btrees to
use the new buffer context lock code, as the heap access methods
do - this is short-term locking without deadlocks and so
without using the lock manager).

Vadim
I have applied the following patch. It causes only one deadlock check
after a sleep of one second. Applied only to the CURRENT tree.
Bruce Momjian wrote:
I have applied the following patch. It causes only one deadlock check
after a sleep of one second. Applied only to the CURRENT tree.
Nice, at least for now.
Vadim
Hi all,
-----Original Message-----
From: owner-pgsql-hackers@postgreSQL.org
[mailto:owner-pgsql-hackers@postgreSQL.org]On Behalf Of Vadim Mikheev
Sent: Wednesday, December 16, 1998 10:01 PM
To: hackers@postgreSQL.org
Subject: [HACKERS] MVCC works in serialized mode!

CVS is just updated...
Please try concurrent writes/reads...
I happened to enjoy MVCC a little (in v6.4.1).
Readers are never blocked and writers are blocked only by
same row writers as I expected.
It's so comfortable.
But I have a question.
Once transactions are blocked, it takes a long time for them to resume
after the blocks are removed.
Why?
Shouldn't blocked transactions resume immediately after the blocks
are removed?
Thanks.
Hiroshi Inoue
Inoue@tpf.co.jp
Hi all,
-----Original Message-----
From: owner-pgsql-hackers@postgreSQL.org
[mailto:owner-pgsql-hackers@postgreSQL.org]On Behalf Of Vadim Mikheev
Sent: Wednesday, December 16, 1998 10:01 PM
To: hackers@postgreSQL.org
Subject: [HACKERS] MVCC works in serialized mode!

CVS is just updated...
Please try concurrent writes/reads...
I happened to enjoy MVCC a little (in v6.4.1).
That's funny.
Readers are never blocked and writers are blocked only by
same row writers as I expected.
It's so comfortable.
Wow, that is cool.
Hiroshi Inoue wrote:
CVS is just updated...
Please try concurrent writes/reads...
I happened to enjoy MVCC a little (in v6.4.1).
Readers are never blocked and writers are blocked only by
same-row writers, as I expected.
It's so comfortable.

But I have a question.
Once transactions are blocked, it takes a long time for them to resume
after the blocks are removed.
Why?
Shouldn't blocked transactions resume immediately after the blocks
are removed?
What do you mean?
Example please...
Vadim
-----Original Message-----
From: root@dune.krs.ru [mailto:root@dune.krs.ru]On Behalf Of Vadim
Mikheev
Sent: Friday, December 25, 1998 2:40 PM
To: Hiroshi Inoue
Cc: hackers@postgreSQL.org
Subject: Re: [HACKERS] MVCC works in serialized mode!

Hiroshi Inoue wrote:
CVS is just updated...
Please try concurrent writes/reads...

I happened to enjoy MVCC a little (in v6.4.1).
Readers are never blocked and writers are blocked only by
same-row writers, as I expected.
It's so comfortable.

But I have a question.
Once transactions are blocked, it takes a long time for them to resume
after the blocks are removed.
Why?
Shouldn't blocked transactions resume immediately after the blocks
are removed?

What do you mean?
Example please...
inoue=> create table t1 (key int,a int);
inoue=> insert into t1 values (1,0);

<Session-1>                               <Session-2>
inoue=> begin;                            inoue=> begin;
BEGIN                                     BEGIN
inoue=> update t1 set a=1 where key=1;
UPDATE 1
                                          inoue=> update t1 set a=2 where key=1;
                                          [ blocked ]
inoue=> end/abort;
END/ABORT
                                          [ after long time .... ]
                                          ERROR: Can't serialize access due to concurrent update
                                          / UPDATE 1
Thanks.
Hiroshi Inoue
Inoue@tpf.co.jp
Hiroshi Inoue wrote:
But I have a question.
Once transactions are blocked, it takes a long time for them to resume
after the blocks are removed.
Why?
Shouldn't blocked transactions resume immediately after the blocks
are removed?

What do you mean?
Example please...

[ after long time .... ]
ERROR: Can't serialize access due to concurrent update
Just tried (with CURRENT) - no problems...
Vadim
2. LOCK TABLE modes are not yet implemented...
Implemented, as well as SET TRANSACTION ISOLATION LEVEL...
3. As well as SELECT ... FOR UPDATE: I need advice here!
Do Oracle, Informix etc support SELECT FOR UPDATE if
there are aggregates, group by, distinct in SELECT ???
As I see in Oracle documentation FOR UPDATE is disallowed
in these cases... No objections ?
Is SELECT FOR UPDATE allowed in subqueries ???
Could someone test this?
7. As I read in Sybase documentation the default
standard transaction mode is _chained_ - all
queries before explicit COMMIT/ABORT are run in _single
transaction_... But we have to use BEGIN/END to get it!
This was not so bad for a system with relation-level locks,
but now only writes to the same row block one another, and so
chained mode seems more appropriate...

Shouldn't we change the default transaction mode now?
And use an option/SET TRANSACTION MODE to switch to
un-chained mode if one likes it?
No comments on this?
I would like to make BEGIN implicit...
Objections?
Vadim
7. As I read in Sybase documentation the default
standard transaction mode is _chained_ - all
queries before explicit COMMIT/ABORT are run in _single
transaction_... But we have to use BEGIN/END to get it!
This was not so bad for a system with relation-level locks,
but now only writes to the same row block one another, and so
chained mode seems more appropriate...

Shouldn't we change the default transaction mode now?
And use an option/SET TRANSACTION MODE to switch to
un-chained mode if one likes it?

No comments on this?
I would like to make BEGIN implicit...
Objections?
Informix requires BEGIN. Ingres does not, but has 'set autocommit'
mode, so each SQL statement is its own transaction, like we have now.
I always felt that if I go into psql and run a query, I want it
applied. I don't want to have to commit every query I type.
I know I can use SET TRANSACTION MODE to change this.
What do others think?
Shouldn't we change the default transaction mode now?
And use an option/SET TRANSACTION MODE to switch to
un-chained mode if one likes it?

No comments on this?
I would like to make BEGIN implicit...
Objections?
Yes, I object. I think the default behavior should be the one that
is sufficient in most cases.

As to the isolation level:
Informix and DB/2 default to read committed mode. In most cases this
is sufficient. Therefore most clients don't need the extra SET
TRANSACTION ISOLATION statement.

As to transaction blocks:
Actually most clients never use COMMIT WORK, and remember that
PostgreSQL will never free its memory before commit. This will hog
up memory for otherwise lightweight clients.

I also think that the BEGIN WORK; COMMIT WORK; statement block is
somewhat part of the PostgreSQL philosophy. We do not stand alone
with this approach (Informix).

Actually the other DBs start the transaction with the first modifying
statement, in my opinion a rather weird approach. I have seen
programs that do a dummy update, just to simulate a BEGIN WORK, what
a waste. But this alone will not work, because a SELECT statement
that needs a sort would have started the transaction earlier (since
it created an implicit temp table).

I could go on, but you see I don't really like this ......

If you want to change the behavior, I think we will need a changeable
default at database level.

Andreas
Bruce Momjian wrote:
7. As I read in Sybase documentation the default
standard transaction mode is _chained_ - all
queries before explicit COMMIT/ABORT are run in _single
transaction_... But we have to use BEGIN/END to get it!
This was not so bad for a system with relation-level locks,
but now only writes to the same row block one another, and so
chained mode seems more appropriate...

Shouldn't we change the default transaction mode now?
And use an option/SET TRANSACTION MODE to switch to
un-chained mode if one likes it?

No comments on this?
I would like to make BEGIN implicit...
Objections?

Informix requires BEGIN. Ingres does not, but has 'set autocommit'
mode, so each SQL statement is its own transaction, like we have now.

I always felt that if I go into psql and run a query, I want it
applied. I don't want to have to commit every query I type.

I know I can use SET TRANSACTION MODE to change this.
What do others think?
As long as we can switch. Interactive applications typically update more
than one object per transaction.
--------
Regards
Theo
No comments on this?
I would like to make BEGIN implicit...
Objections?

Informix requires BEGIN. Ingres does not, but has 'set autocommit'
mode, so each SQL statement is its own transaction, like we have now.
imho we should have an Ingres-like feature for "autocommit". We can have
the default be "set autocommit on" (probably with an equals sign like
our other "set" variables) and we can have it be a run-time option like
DATESTYLE and other settable parameters. So you can configure your
server or your client environment to always behave the way you prefer.
I'd be happy to implement the syntax and the environment variable parts;
just need a global boolean value to check and set...
- Tom
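Tom's suggestion (a run-time boolean, settable like DATESTYLE, that the backend consults after each statement) might look roughly like the sketch below. All names here (`set_autocommit`, `should_commit_now`) are hypothetical, not actual backend symbols.

```c
/* Sketch of a run-time "autocommit" boolean, settable like DATESTYLE
 * and the other SET variables.  Hypothetical names throughout - this
 * is not the actual PostgreSQL code. */
#include <stdbool.h>
#include <string.h>

static bool autocommit = true;  /* default: each statement is its own transaction */

/* parse a "SET AUTOCOMMIT = 'on'|'off'" style value */
static bool set_autocommit(const char *value)
{
    if (strcmp(value, "on") == 0)
        autocommit = true;
    else if (strcmp(value, "off") == 0)
        autocommit = false;
    else
        return false;           /* unrecognized value */
    return true;
}

/* consulted after each completed statement; an explicit BEGIN block
 * always suppresses the implicit commit */
static bool should_commit_now(bool in_explicit_block)
{
    return autocommit && !in_explicit_block;
}
```

With the default left at "on", existing clients and interactive psql sessions would keep today's behavior, while chained-mode fans could switch it off per server or per client environment.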
No time this year.
If necessary I will do it next year.

Thanks.
Yes. I need to know if the code change between setitimer and select()
causes the difference you are seeing.
OK, I have re-coded that section. I had majorly broken it.
As part of the fix, I changed the deadlock detection in include/config.h
from 60 to 1. Remember, config.h is generated from config.h.in.
Can you change it back to 60 to see that the timeout is working
properly? The one-second default is not going to be noticeable during
testing.
Hello!
I want to make a filtered selection on a table. This can return many
tuples, and the download of the query results may take a few minutes.
I'd like to show the progress status. How can I do this?
If I run this as two queries (first counting the number of lines, then
the real query), the backend calculates the query twice.
If I use a cursor, I cannot get the line count before I fetch the whole
table.
--
So long and thanx for all the fish.
NeKo@(kva.hu|kornel.szif.hu)
-----Original Message-----
From: Bruce Momjian [mailto:maillist@candle.pha.pa.us]
Sent: Wednesday, December 30, 1998 4:59 AM
To: PostgreSQL-development
Cc: Inoue@tpf.co.jp; vadim@krs.ru
Subject: Re: [HACKERS] MVCC works in serialized mode!

No time this year.
If necessary I will do it next year.

Thanks.

Yes. I need to know if the code change between setitimer and select()
causes the difference you are seeing.

OK, I have re-coded that section. I had majorly broken it.
As part of the fix, I changed the deadlock detection in include/config.h
from 60 to 1. Remember, config.h is generated from config.h.in.

Can you change it back to 60 to see that the timeout is working
properly? The one-second default is not going to be noticeable during
testing.
I changed DEADLOCK_CHECK_TIMER in config.h to 60 and compiled.
The deadlock check takes about 60 seconds,
so the timeout is set to 60 properly.
And the result is as follows.

create table t1 (key int,a int);
insert into t1 values (1,1);

Session-1                                 Session-2
begin;                                    begin;
BEGIN                                     BEGIN
update t1 set a=1;
UPDATE 1
                                          update t1 set a=1;
                                          [ update is blocked ]
abort;
ABORT
                                          [ immediately ]
                                          UPDATE 1

It's OK.
Thanks a lot.
Hiroshi Inoue
Inoue@tpf.co.jp
-----Original Message-----
From: owner-pgsql-hackers@postgreSQL.org
[mailto:owner-pgsql-hackers@postgreSQL.org]On Behalf Of Zeugswetter
Andreas IZ5
Sent: Monday, December 28, 1998 6:30 PM
To: hackers@postgreSQL.org
Cc: 'vadim@krs.ru'
Subject: Re: [HACKERS] MVCC works in serialized mode!

Shouldn't we change the default transaction mode now?
And use an option/SET TRANSACTION MODE to switch to
un-chained mode if one likes it?

No comments on this?
I would like to make BEGIN implicit...
Objections?

Yes, I object. I think the default behavior should be the one that
is sufficient in most cases.
I have a question about the new defaults (isolation level / transaction
mode). How do we upgrade existing programs if the default is different
from the current one?
I think the isolation level of current PostgreSQL is SERIALIZABLE
and the transaction mode is un-chained.
As to the isolation level, even the SERIALIZABLE isolation level can't
guarantee the integrity of the current level.
So we must change existing programs anyway?
Thanks.
Hiroshi Inoue
Inoue@tpf.co.jp
As to the isolation level:
Informix and DB/2 default to read committed mode. In most cases this
is sufficient. Therefore most clients don't need the extra SET
TRANSACTION ISOLATION statement.

As to transaction blocks:
Actually most clients never use COMMIT WORK, and remember that
PostgreSQL will never free its memory before commit. This will hog
up memory for otherwise lightweight clients.

I also think that the BEGIN WORK; COMMIT WORK; statement block is
somewhat part of the PostgreSQL philosophy. We do not stand alone
with this approach (Informix).

Actually the other DBs start the transaction with the first modifying
statement, in my opinion a rather weird approach. I have seen
programs that do a dummy update, just to simulate a BEGIN WORK, what
a waste. But this alone will not work, because a SELECT statement
that needs a sort would have started the transaction earlier (since
it created an implicit temp table).

I could go on, but you see I don't really like this ......

If you want to change the behavior, I think we will need a changeable
default at database level.

Andreas