Database design and triggers...
Hi everybody. Hope I'm posting in the correct group.
My question is part design and part implementation.
Since we are creating an inventory system we need to have the clients
pull up current inventory. Also, we need to have the past transactions
stored for reference and billing. In our previous system in MS Access
we accomplished this by adding up all of the transactions stored in two
tables and generating a temporary table with the latest inventory
count. The problem with this approach is that it is slow because the
temporary table has to be created every time a user needs to see a
report or work on a form. Even when we use a query instead of a
temporary table, it is still slow. With PostgreSQL I found out
about triggers, and I figure that instead of calculating the current
inventory count and storing it in a table every time a client needs it,
I could have triggers maintain a table with the current count by
incrementing or decreasing the amounts each time a transaction is
stored in the transaction tables. My worry is that if for some reason a
trigger were to fail to execute correctly, there would be an
inconsistency between the transactions table and the current inventory
count table, and the count would have to be recalculated from scratch,
taking into account all of the past transactions in the transactions table.
Are triggers safe to use in the way I describe? Or should I
try using views, or stick with the temporary table solution we already
have?
The second part of my question: is there a tutorial for triggers
and stored procedures, and what is the difference between procedures and
functions?
Thanks Beforehand!
It's not clear to me how your data is organized or exactly what you're
counting. If I understand you correctly, yes, you could use triggers to
maintain a table in this manner. However, why can't you simply use a
SELECT query using the SUM() or COUNT() aggregate functions? If the
queries are slow, do some index tuning.
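For example, something like this -- a sketch only, since I don't know your schema; the table and column names here are made up:

```sql
-- Current on-hand count per part, summed straight from the detail
-- table (table and column names invented for illustration).
SELECT part_no, lot_no, SUM(qty) AS on_hand
  FROM transaction_items
 GROUP BY part_no, lot_no;

-- An index on the grouped columns helps queries like this:
CREATE INDEX transaction_items_part_lot_idx
    ON transaction_items (part_no, lot_no);
```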
--
Brandon Aiken
CS/IT Systems Engineer
-----Original Message-----
From: pgsql-general-owner@postgresql.org
[mailto:pgsql-general-owner@postgresql.org] On Behalf Of
romantercero@gmail.com
Sent: Wednesday, September 06, 2006 2:05 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Database design and triggers...
Hi Brandon, thanks for answering.
The information regarding a transaction is stored on two separate
tables due to normalization. The first table stores general information
regarding the transaction like Transaction number, date, customer ID,
type of transaction, userID etc... The second table stores the details
of the purchase like the products and quantities that the transaction
affected. Like this:
Table 1:
Operation#  Type_of_operation  Client#  Date      UserID
1           Inbound            10       1/1/2000  Paul37
2           Outbound           10       1/2/2000  Steve04
Table 2:
Operation#  Part#  Lot#  Qty
1           X      a     10
1           Y      ds1   9
1           Z      54ad  7
2           X      a     10
Table 2 has Table 1's Operation# field as a foreign key.
Now, to obtain a current count of Part X we have to create a temporary
table on which we can use aggregate functions.
CurrentCountTable:
Operation  Client#  Date      Part#  Lot#  Qty  UserID
1          10       1/1/2000  X      a     +10  Paul37
1          10       1/1/2000  Y      ds1   +9   Paul37
1          10       1/1/2000  Z      54ad  +7   Paul37
2          10       1/2/2000  X      a     -10  Steve04
Now, on the temporary table called CurrentCountTable we can use an
aggregate function. The problem is that creating this table with
INSERT INTO is slow, and so are aggregate functions (on MS Access). So
naturally it occurred to me that triggers could keep a permanent version
of CurrentCountTable up to date every time someone inserts into
Table 1 and Table 2. But it has to be perfect to avoid inconsistencies.
So, are triggers a safe bet? Is using triggers more advisable than the
temporary table solution, or is speed not an issue on PostgreSQL? Or
should I use views?
Thanks!!! (Hope the tables got listed correctly :-/ )
"Brandon Aiken" wrote:
Show quoted text
It's not clear to me how your data is organized or exactly what you're
counting. If I understand you correctly, yes, you could use triggers to
maintain a table in this manner. However, why can't you simply use a
SELECT query using the SUM() or COUNT() aggregate functions? If the
queries are slow, do some index tuning.--
Brandon Aiken
CS/IT Systems Engineer-----Original Message-----
From: pgsql-general-owner@postgresql.org
[mailto:pgsql-general-owner@postgresql.org] On Behalf Of
romantercero@gmail.com
Sent: Wednesday, September 06, 2006 2:05 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Database design and triggers...Hi everybody. Hope I'm posting in the correct group.
My question is part design and part implementation.
Since we are creating an inventory system we need to have the clients
pull up current inventory. Also, we need to have the past transactions
stored for reference and billing. In our previous system in MS Access
we accomplished this by adding up all of the transactions stored in two
tables and generating a temporary table with the latest inventory
count. The problem with this approach is that it is slow because the
temporary table has to be created every time a user needs to see a
report or work on a form. Even when instead of creating a temporary
table we use a query it is still slow. With postgreSQL I found out
about triggers and I figure that instead of calculating the current
inventory count and storing it in a table every time a client needs it
I could have a triggers maintain a table with the current count by
incrementing or decreasing the amounts each time a transaction is
stored in the transaction tables. My worry is that if for some reason a
trigger were to somehow fail to execute correctly there would be an
inconsistency between the transactions table and the current inventory
count table and it would have to be calculated from scratch taking in
to account all of the past transactions in the transactions table.Are trigger a very safe way to use in the way I describe? Or should I
try using views or stick with the temporary table solution we already
have?My second part of the question is if there is a tutorial for triggers
and stored procedures and what is the difference between Procedures and
Functions?Thanks Beforehand!
---------------------------(end of broadcast)---------------------------
TIP 1: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to majordomo@postgresql.org so that your
message can get through to the mailing list cleanly---------------------------(end of broadcast)---------------------------
TIP 5: don't forget to increase your free space map settings
Roman --
You can certainly use a trigger to track changes (as well as to create an audit trail and enforce rules). Performance on inserts, updates, and deletes will suffer accordingly, since extra operations are involved, but it will definitely be a winner on the other end when generating reports and quick totals.
As long as data changes are properly wrapped in a transaction, I can't think of any obvious way this setup would fail -- the trigger's changes are committed or rolled back along with everything else. You do need to pay attention to when your trigger fires (before or after).
See for instance <http://www.postgresql.org/docs/8.1/interactive/triggers.html> (section 33 of the 8.1.4 documentation) for examples and a discussion of the different types.
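For the inventory case, the trigger function might look something like the sketch below. The table and column names (transaction_items, current_inventory, and so on) are invented here, not taken from your schema:

```sql
-- Summary table the trigger will maintain (names are hypothetical).
CREATE TABLE current_inventory (
    part_no text    NOT NULL,
    lot_no  text    NOT NULL,
    qty     integer NOT NULL DEFAULT 0,
    PRIMARY KEY (part_no, lot_no)
);

CREATE FUNCTION maintain_current_inventory() RETURNS trigger AS $$
BEGIN
    -- Add the (signed) quantity of the new detail row to the summary.
    UPDATE current_inventory
       SET qty = qty + NEW.qty
     WHERE part_no = NEW.part_no AND lot_no = NEW.lot_no;
    IF NOT FOUND THEN
        INSERT INTO current_inventory (part_no, lot_no, qty)
        VALUES (NEW.part_no, NEW.lot_no, NEW.qty);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER current_inventory_trg
    AFTER INSERT ON transaction_items
    FOR EACH ROW EXECUTE PROCEDURE maintain_current_inventory();
```

Because the trigger runs inside the same transaction as the INSERT that fired it, the summary row is committed or rolled back together with the detail row.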
And from the point of view of PostgreSQL, "function" and "procedure" are used interchangeably; it's not like some languages in which procedures don't return values but functions always do. (Someone more knowledgeable please correct me if I am wrong on this!)
HTH,
Greg Williamson
DBA
GlobeXplorer LLC
It just occurred to me. The purpose of normalizing the information and
having it stored on two separate tables is defeated if I have another
table that contains the same information but in an un-normalized
fashion. I would end up with two copies of the same information, one
normalized but not ready to be used by aggregate functions and another
un-normalized table kept up to date with triggers and ready to be used
by aggregate functions. Right? :-/
So that leaves the temporary table solution on the front end or views
on the back end. Any thoughts?
Actually, I don't think you need CurrentCountTable at all. Rather, you
can fairly easily incorporate all of its functionality into Table 2 and
then use a query or a VIEW.
Say you have these two tables. [I'm not the best at data modeling yet,
so I wouldn't necessarily take these verbatim. I'm still a newbie.
Listed here is pseudo-SQL.]
TABLE "Transaction"
(
"TransactionID" serial,
"OperationID" integer,
"OperationType" char(15) NOT NULL,
"ClientID" integer NOT NULL,
"TransactionDate" date NOT NULL DEFAULT 'epoch',
"UserID" char(15) NOT NULL,
PRIMARY KEY ("TransactionID"),
UNIQUE ("OperationID")
)
TABLE "TransactionItem"
(
"TransactionItemID" serial,
"OperationID" integer NOT NULL,
"PartID" integer NOT NULL,
"LotID" integer NOT NULL,
"Qty" integer NOT NULL,
PRIMARY KEY ("TransactionItemID"),
FOREIGN KEY ("OperationID")
REFERENCES "Transaction" ("OperationID")
ON UPDATE CASCADE ON DELETE RESTRICT,
UNIQUE ("OperationID", "PartID")
)
Now, when you store Qty, you store a positive number if the inventory
increases and a negative number if it decreases.
Now, you can use a query or create a VIEW based on this query:
SELECT "OperationID"
, "ClientID"
, "TransactionDate"
, "PartID"
, "LotID"
, "Qty"
, "UserID"
FROM "Transaction" NATURAL JOIN "TransactionItem";
Alternately, you can continue to store Qty as a positive number
and then use this query/VIEW:
SELECT "OperationID"
, "ClientID"
, "TransactionDate"
, "PartID"
, "LotID"
, CASE
WHEN "OperationType" = 'Inbound' THEN "Qty"
WHEN "OperationType" = 'Outbound' THEN (-1 * "Qty")
END AS "Qty"
, "UserID"
FROM "Transaction" NATURAL JOIN "TransactionItem";
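If you save that signed-quantity query as a VIEW (the name "CurrentCount" below is my own, and I'm assuming the CASE column is aliased as "Qty"), getting the current count is then a plain aggregate:

```sql
-- Current on-hand total per part and lot, from the view over the
-- query above ("CurrentCount" is an assumed name, not from the thread).
SELECT "PartID", "LotID", SUM("Qty") AS "OnHand"
  FROM "CurrentCount"
 GROUP BY "PartID", "LotID";
```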
As far as speed, speed is always an issue. PostgreSQL is going to
perform better than Access, but don't use good performance as a crutch
for bad design.
As far as normalization, it is possible to take it too far. There is a
time when de-normalizing a database will significantly improve its
performance even if it involves duplicating data. 4NF is not the goal
of DB design, having a usable database is. Knowing when and how to
de-normalize is much more difficult than learning to design a normalized
data model.
--
Brandon Aiken
CS/IT Systems Engineer
-----Original Message-----
From: pgsql-general-owner@postgresql.org
[mailto:pgsql-general-owner@postgresql.org] On Behalf Of
romantercero@gmail.com
Sent: Thursday, September 07, 2006 7:09 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Database design and triggers...
Hi Brandon,
Thank you for your response.
Yes, I see what you mean when you say that I can list Qty in signed
integers and use a view. Good point.
But I'm not sure about using a view vs. a permanent table updated
by triggers. It seems we are dealing with the classic tradeoff between
speed and storage space.
A permanent table would duplicate data, and even though it would slow
write operations, I suspect it would be faster than users bringing up a
view every time they wish to execute a transaction.
Any suggestions as to which way I should go on the tradeoff, and why?
Thanks!
A permanent table updated by triggers sounds like a "materialized" view. A plain view must be processed
every time it is called; a materialized view can have its data refreshed in various ways.
Depending upon your requirements, you can select a materialized view strategy that will get you what you
want with a smaller performance hit.
http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html
http://www.varlena.com/varlena/GeneralBits/64.php
http://www.revsys.com/blog/archive/9
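The simplest variant described in those links -- a snapshot summary table rebuilt on demand -- can be sketched like this (table and column names are invented for illustration):

```sql
-- Snapshot "materialized view": a real table holding the aggregate.
CREATE TABLE inventory_summary (
    part_no text    NOT NULL,
    lot_no  text    NOT NULL,
    on_hand integer NOT NULL
);

CREATE FUNCTION refresh_inventory_summary() RETURNS void AS $$
BEGIN
    -- Rebuild the snapshot from the transaction detail table.
    DELETE FROM inventory_summary;
    INSERT INTO inventory_summary (part_no, lot_no, on_hand)
    SELECT part_no, lot_no, SUM(qty)
      FROM transaction_items
     GROUP BY part_no, lot_no;
END;
$$ LANGUAGE plpgsql;

-- Refresh periodically, or after batch loads:
SELECT refresh_inventory_summary();
```

The eager variants in those links instead use triggers to keep the snapshot current row by row, which is exactly the tradeoff discussed earlier in the thread.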