Locking entire database

Started by Panagiotis Pediaditis · over 18 years ago · 17 messages · general
#1 Panagiotis Pediaditis
pped@ics.forth.gr

Hello,

Is there some way of locking all database tables in a transaction
without knowing their names, or, even better, just locking the entire
database? I know this is bad practice, but there is a specific case
where I need it. Can it be done?

Thank you

Panagiotis

#2 Rodrigo De León
rdeleonp@gmail.com
In reply to: Panagiotis Pediaditis (#1)
Re: Locking entire database

On 9/14/07, Panagiotis Pediaditis <pped@ics.forth.gr> wrote:

... there is a specific case where i need it.

Don't really know, but, explain what the case is, and maybe someone
could help you.

#3 Panagiotis Pediaditis
pped@ics.forth.gr
In reply to: Rodrigo De León (#2)
Re: Locking entire database

Well, the problem is that I am working on an RDF query engine for
persistent RDF data. The data is stored/structured in a specific way in
the database. When I perform updates in parallel, because there are
cross-table dependencies, I end up with inconsistencies. For example, one
transaction reads to check that a resource exists so as to add a property
of which it is the subject. Then another transaction deletes the resource
after the first has decided that the resource is there, but before it has
added the property.
Thus it would be helpful for me to avoid the difficult task of
dependency-based locking and just lock the whole database.
Any ideas?


#4 Sibte Abbas
sibtay@gmail.com
In reply to: Panagiotis Pediaditis (#1)
Re: Locking entire database

On 9/14/07, Panagiotis Pediaditis <pped@ics.forth.gr> wrote:

Hello,

Is there some way of locking all database tables in a transaction
without knowing their names
or even better just locking the entire database? I know this is bad
tactics but there is a specific
case where i need it. Can it be done?

AFAIK, locking the entire database may not be an option in PostgreSQL.
However, you can effectively block everyone's access to a particular
database via pg_hba.conf.

More details here
http://www.postgresql.org/docs/8.2/static/auth-pg-hba-conf.html

regards,
--
Sibte Abbas

#5 Martijn van Oosterhout
kleptog@svana.org
In reply to: Panagiotis Pediaditis (#3)
Re: Locking entire database

On Fri, Sep 14, 2007 at 05:45:07PM +0300, Panagiotis Pediaditis wrote:

Well the problem is I am working on rdf query engine for persistent RDF
data. The data is stored/structured in a specific way in the database.
When i perform updates in parallel, because there are cross table
dependencies, I end up with inconsistencies, For example One transaction
reads to see if there is a resource so as to add a property where it is
a subject. Then an other transaction deletes the resource after the
first has decided that the resource is there but before it added the
property.

Sounds like what you need is serializable transactions. Then the server
will tell you if something conflicts.
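
Sketched concretely (the table and column names here are hypothetical, not from the thread), the suggestion is to wrap the read-then-write in one serializable transaction. Note, though, that as message #17 explains, the SERIALIZABLE level in PostgreSQL of this era is snapshot isolation without predicate locking, so it will not catch every such conflict:

```sql
-- Hypothetical schema: check for the resource and add the property
-- inside a single serializable transaction.
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

-- read: is the resource present?
SELECT 1 FROM resources WHERE uri = 'http://example.org/r1';

-- write: performed only if the application saw a row above
INSERT INTO properties (subject_uri, property)
VALUES ('http://example.org/r1', 'rdfs:label');

COMMIT;
```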

Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/

From each according to his ability. To each according to his ability to litigate.

#6 Thomas Kellerer
spam_eater@gmx.net
In reply to: Panagiotis Pediaditis (#3)
Re: Locking entire database

Panagiotis Pediaditis, 14.09.2007 16:45:

Well the problem is I am working on rdf query engine for persistent RDF
data. The data is stored/structured in a specific way in the database.
When i perform updates in parallel, because there are cross table
dependencies, I end up with inconsistencies, For example One transaction
reads to see if there is a resource so as to add a property where it is
a subject. Then an other transaction deletes the resource after the
first has decided that the resource is there but before it added the
property.
Thus it would be helpful for me to avoid the difficult task of
dependency based locking and just lock the whole database.
any ideas?

Hmm. To me this sounds like all those steps should in fact be _one_
transaction and not several transactions.

Thomas

#7 Panagiotis Pediaditis
pped@ics.forth.gr
In reply to: Thomas Kellerer (#6)
Re: Locking entire database

A simpler example:
In the context of one transaction I do many queries of the form

INSERT INTO table value WHERE value NOT IN TABLE;

If I have two processes running hundreds of these at the same time, I
end up with duplicates, even with isolation set to serializable.
Any ideas?
Thanks

Panagiotis


#8 Scott Marlowe
scott.marlowe@gmail.com
In reply to: Panagiotis Pediaditis (#7)
Re: Locking entire database

On 9/14/07, Panagiotis Pediaditis <pped@ics.forth.gr> wrote:

A simpler example,
In the context of one transaction i do many queries of the form
INSERT INTO table value WHERE value NOT IN TABLE;

If i have 2 processes running the same 100s of these at the same time i
end up with duplicates.
Even with isolation set to serializable
any ideas?

Unique index?
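
Sketched with hypothetical names, the suggestion is to let the database enforce uniqueness itself; a unique index rejects the second copy no matter how the transactions interleave:

```sql
-- Hypothetical schema: the index, not the isolation level, prevents
-- duplicates.
CREATE TABLE triples (segment1 text, segment2 text);
CREATE UNIQUE INDEX triples_uniq_idx ON triples (segment1, segment2);

-- If two sessions race on the same pair, the slower INSERT fails with
-- a unique_violation error, which the application can catch and ignore.
INSERT INTO triples
SELECT 'a', 'b'
WHERE NOT EXISTS
      (SELECT 1 FROM triples WHERE segment1 = 'a' AND segment2 = 'b');
```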

#9 Panagiotis Pediaditis
pped@ics.forth.gr
In reply to: Scott Marlowe (#8)
Re: Locking entire database

That's the fun part: I actually need to allow duplicates in specific
cases, but not in this one :)
Shouldn't the serializable level prevent these duplicates? As I understand
it, serializable should give the same result as if the transactions were
performed one after the other.

Thanks
Panagiotis


#10 Ron Johnson
ron.l.johnson@cox.net
In reply to: Panagiotis Pediaditis (#9)
Re: Locking entire database


On 09/15/07 03:28, Panagiwths Pediadiths wrote:

Thats the fun part, I actually need to allow duplicates in specific cases
but not in this one :)

Same table?

Shouldn't the serializable level prevent these duplicates? As I understand
it serializable
should give the same result as if the transactions were performed the one
after the other.

(Please don't top-post.)

It seems to me that you are confused as to the "essence" of relational
databases; in other words, about the best (heck, even the acceptable) way
to design schemas, and how to control the flow of data in order to
achieve your ultimate "data" goal.


--
Ron Johnson, Jr.
Jefferson LA USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


#11 hubert depesz lubaczewski
depesz@depesz.com
In reply to: Panagiotis Pediaditis (#1)
Re: Locking entire database

On Fri, Sep 14, 2007 at 04:48:08PM +0300, Panagiotis Pediaditis wrote:

Is there some way of locking all database tables in a transaction
without knowing their names
or even better just locking the entire database? I know this is bad
tactics but there is a specific
case where i need it. Can it be done?

There is no such thing, but you can use "advisory locks" for this
purpose.
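
A minimal sketch of the advisory-lock approach (the key 1 is an arbitrary application-chosen number; in PostgreSQL 8.2 these locks are session-level, so they must be released explicitly):

```sql
-- Every conflicting writer takes the same application-defined lock,
-- serializing the critical sections across all sessions.
BEGIN;
SELECT pg_advisory_lock(1);    -- blocks until no other session holds key 1

-- ... the read-then-insert logic goes here ...

SELECT pg_advisory_unlock(1);  -- explicit release (not tied to COMMIT)
COMMIT;
```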

depesz

--
quicksil1er: "postgres is excellent, but like any DB it requires a
highly paid DBA. here's my CV!" :)
http://www.depesz.com/ - blog dla ciebie (i moje CV)

#12 Merlin Moncure
mmoncure@gmail.com
In reply to: hubert depesz lubaczewski (#11)
Re: Locking entire database

On 9/15/07, hubert depesz lubaczewski <depesz@depesz.com> wrote:

On Fri, Sep 14, 2007 at 04:48:08PM +0300, Panagiotis Pediaditis wrote:

Is there some way of locking all database tables in a transaction
without knowing their names
or even better just locking the entire database? I know this is bad
tactics but there is a specific
case where i need it. Can it be done?

there is no such thing, but you can use "advisory locks" to this
purpose.

That would require taking a shared advisory lock for every
transaction... pretty big price to pay IMO, but it would work. I
thought about it for a bit and I don't know of any other way
to do it.
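
The scheme described here might look like this (lock key 1 is arbitrary): ordinary transactions take the lock in shared mode, so they do not block one another, while a job that needs the whole database to itself takes it exclusively:

```sql
-- Ordinary transaction: shared mode, concurrent with other shared holders.
SELECT pg_advisory_lock_shared(1);
-- ... normal work ...
SELECT pg_advisory_unlock_shared(1);

-- "Lock everything" job: waits until all shared holders are gone,
-- then keeps new ones out until released.
SELECT pg_advisory_lock(1);
-- ... exclusive work ...
SELECT pg_advisory_unlock(1);
```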

merlin

#13 Panagiotis Pediaditis
pped@ics.forth.gr
In reply to: Ron Johnson (#10)
Re: Locking entire database

On Sat, 15 Sep 2007, Ron Johnson wrote:


On 09/15/07 03:28, Panagiwths Pediadiths wrote:

Thats the fun part, I actually need to allow duplicates in specific cases
but not in this one :)

Same table?

Yup

Shouldn't the serializable level prevent these duplicates? As I understand
it serializable
should give the same result as if the transactions were performed the one
after the other.

(Please don't top-post.)

Seems to me that you are confused as to the "essence" of relational
databases. In other words, the best (heck, even the acceptable) way
to design schemas, and how to control the flow of data in order to
achieve your ultimate "data" goal.

I don't see why the case I suggest is so obscene.
More specifically, consider a table with two columns where the unique
index is on the two columns together.

However, at some stage of the application I want to insert into the
database only if there is no element with a value in column 1 equal to
the one I intend to insert.

Oddly, with two transactions performing such an insertion in parallel in
serializable isolation mode, one of the two transactions hits the
phantom-read case, whereas it should be protected by the isolation level.


#14 Ron Johnson
ron.l.johnson@cox.net
In reply to: Panagiotis Pediaditis (#13)
Re: Locking entire database


On 09/15/07 19:59, Panagiwths Pediadiths wrote:

On Sat, 15 Sep 2007, Ron Johnson wrote:

On 09/15/07 03:28, Panagiwths Pediadiths wrote:

Thats the fun part, I actually need to allow duplicates in specific cases
but not in this one :)

Same table?

Yup

Shouldn't the serializable level prevent these duplicates? As I understand
it serializable
should give the same result as if the transactions were performed the one
after the other.

(Please don't top-post.)

Seems to me that you are confused as to the "essence" of relational
databases. In other words, the best (heck, even the acceptable) way
to design schemas, and how to control the flow of data in order to
achieve your ultimate "data" goal.

I dont see why the case i suggest is so obscene

Then you have not explained it to us clearly.

(English language difficulties *are* an acceptable excuse...)

More specifically consider a table with to columns where the unique index
is the two columns together

However at some stage of the application I want to insert into the
database only if there is no element
with a value at column 1 equal to that that i intend to insert.

Oddly, in serializable isolation mode, two transactions performing such an
insertion in parallel one of the
two transaction hits the phantom read case, whereas it should be protected
by the isolation level.

It should, *if* you do it properly.

IOW, is your program structured like:

    BEGIN
    SELECT COUNT(*) INTO :cnt
      FROM rdf WHERE segment_1 = :some_val;
    IF :cnt == 1 THEN
        do one thing
    ELSE
        do another
    END IF;
    COMMIT;

or is it structured:

    BEGIN
    SELECT COUNT(*) INTO :cnt
      FROM rdf WHERE segment_1 = :some_val;
    COMMIT;
    BEGIN
    IF :cnt == 1 THEN
        do one thing
    ELSE
        do another
    END IF;
    COMMIT;


--
Ron Johnson, Jr.
Jefferson LA USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


#15 Panagiotis Pediaditis
pped@ics.forth.gr
In reply to: Ron Johnson (#14)
Re: Locking entire database

On Sat, 15 Sep 2007, Ron Johnson wrote:

It should, *if* you do it properly.

IOW, is your program structured like:
BEGIN
SELECT COUNT(*) INTO :cnt
FROM rdf WHERE segment_1 = :some_val;
IF :cnt == 1 THEN
do one thing
ELSE
do another
END IF;
COMMIT;

or is it structured:
BEGIN
SELECT COUNT(*) INTO :cnt
FROM rdf WHERE segment_1 = :some_val;
COMMIT;
BEGIN
IF :cnt == 1 THEN
do one thing
ELSE
do another
END IF;
COMMIT;

Everything is done in the context of one transaction, e.g.

BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

INSERT INTO table
SELECT somewhere.value1, somewhere.value2
  FROM somewhere
 WHERE somewhere.value1 NOT IN (SELECT table.segment1 FROM table);

END;

Many transactions doing this in parallel end up inserting the value many
times. Could I be missing something regarding how to set up the isolation
level?
Thanks!


#16 Trevor Talbot
quension@gmail.com
In reply to: Panagiotis Pediaditis (#9)
Re: Locking entire database

On 9/15/07, Panagiwths Pediadiths <pped@ics.forth.gr> wrote:

Shouldn't the serializable level prevent these duplicates? As I understand
it serializable
should give the same result as if the transactions were performed the one
after the other.

http://www.postgresql.org/docs/8.2/static/transaction-iso.html#MVCC-SERIALIZABILITY

#17 Martijn van Oosterhout
kleptog@svana.org
In reply to: Panagiotis Pediaditis (#15)
Re: Locking entire database

On Sun, Sep 16, 2007 at 01:46:44PM +0300, Panagiwths Pediadiths wrote:

Many transactions doing this in parallel end up inserting the value many
times. Could i be missing something regarding how to set up the isolation
level?
Thanks!

No, the only easy way you can guarantee you won't insert duplicates is
with a unique index. The reason is that even under serializable mode your
algorithm can produce duplicates, because PostgreSQL doesn't do predicate
locking.

The reason unique indexes can do it is that they have special
locking requirements which, when met, guarantee the result. If for some
reason you can't use a unique index (I can't think of one, but perhaps)
then you need to implement this locking yourself.

The easiest way is to have one lock and take it before running your
procedure. For more concurrency you can partition the locks (unique
indexes can be very efficient about this because they can see rows that
you can't; they go outside the normal visibility checks).
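
One way to sketch that partitioning (table and value names hypothetical; hashtext() is an internal but callable hashing function in PostgreSQL 8.x): derive the advisory-lock key from the value being inserted, so only writers of the same value serialize:

```sql
-- Transactions inserting different values proceed in parallel;
-- inserts of the same value are serialized by the derived lock key.
SELECT pg_advisory_lock(hashtext('http://example.org/r1'));

INSERT INTO triples
SELECT 'http://example.org/r1', 'rdfs:label'
WHERE NOT EXISTS
      (SELECT 1 FROM triples WHERE segment1 = 'http://example.org/r1');

SELECT pg_advisory_unlock(hashtext('http://example.org/r1'));
```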

Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/

From each according to his ability. To each according to his ability to litigate.