Limitation
I've been attempting to port applications from Pervasive SQL to PG.
Pervasive is interesting because it runs on top of Btrieve. This allows
legacy apps and SQL systems to co-exist. It's quirky and buggy, but it's
better than PG because it can do the following.
1. Have more than 7 fields on a btree index
2. There is a table called inmaster with a char(20) field called
stock_code. Pervasive can create a view (product) that splits the
constituent characters into fields. PG cannot handle this; see below.
create view product as
Select code As Stock_Code,
Substr(Code,1,1) As Process_ID,
Substr(Code,2,2) As SubProcess_ID,
Substr(Code,4,1) As SubStrate_ID,
Substr(Code,5,2) As Length_ID,
Substr(Code,7,2) As Width_ID,
Substr(Code,9,2) As Thickness_ID,
Substr(Code,11,3) As Face_ID,
Substr(Code,14,1) As Facefinish_ID,
Substr(Code,15,3) As Back_ID
-- Substr(Code,18,1) As Backfinish_ID
From INMASTER;
ERROR: DefineQueryRewrite: rule plan string too big.
This should not occur. Would someone please pick it up and put it on the
TODO list.
Regards
John
John Huttley wrote:
PG cannot handle this. see below.
create view product as
Select code As Stock_Code,
Substr(Code,1,1) As Process_ID,
Substr(Code,2,2) As SubProcess_ID,
Substr(Code,4,1) As SubStrate_ID,
Substr(Code,5,2) As Length_ID,
Substr(Code,7,2) As Width_ID,
Substr(Code,9,2) As Thickness_ID,
Substr(Code,11,3) As Face_ID,
Substr(Code,14,1) As Facefinish_ID,
Substr(Code,15,3) As Back_ID
-- Substr(Code,18,1) As Backfinish_ID
From INMASTER;
ERROR: DefineQueryRewrite: rule plan string too big.
John,
I generate a lot of sql create scripts from perl classes.
These scripts tend to have a lot of whitespace that makes them beautiful
to read.
But that's just for me... here is what PG says to that:
PQsendQuery() -- query is too long. Maximum length is 8191
So far, I got along very well just stripping unneeded whitespace out of
the scripts before I run them.
Probably a simple s/^ // would already help in your case.
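For illustration, John's view with cosmetic whitespace stripped would collapse to something like the sketch below. Whether this helps with the rule-plan limit he hit, as opposed to the PQsendQuery length limit, is untested; it is only a guess at what Chris's suggestion would produce.

```sql
-- The same view as before, minus indentation and extra spaces,
-- to keep the query text as short as possible.
create view product as select code as stock_code,
substr(code,1,1) as process_id,substr(code,2,2) as subprocess_id,
substr(code,4,1) as substrate_id,substr(code,5,2) as length_id,
substr(code,7,2) as width_id,substr(code,9,2) as thickness_id,
substr(code,11,3) as face_id,substr(code,14,1) as facefinish_id,
substr(code,15,3) as back_id from inmaster;
```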
Chris
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Christian Rudow E-Mail: Christian.Rudow@thinx.ch
ThinX networked business services Stahlrain 10, CH-5200 Brugg
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
I've been attempting to port applications from Pervasive SQL to PG.
Pervasive is interesting because it runs on top of Btrieve. This allows
legacy apps and SQL systems to co-exist. It's quirky and buggy, but it's
better than PG because it can do the following.
1. Have more than 7 fields on a btree index
We have never understood why someone would want an index with more than
seven columns.
--
Bruce Momjian | http://www.op.net/~candle
maillist@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026
It's called an organically grown schema. I recently had exposure to such a
beast at a customer's site. I won't say who, but they are a large
supplier of medical infrastructure software. This software has been
hacked beyond recognition. It is on Oracle and front-ends are built with
PowerBuilder, but they have ISAM/CICS and other stuff too. Their
original ratty design has been hacked, over the years, in an
uncontrolled environment, by every schlock hacker in the SF bay area.
I've seen as many as 12 columns in an index. They didn't want it, it's
just there and they have to deal with it somehow. A re-design is over 18
months and $5M, but they have to keep current production up as well.
-----Original Message-----
From: owner-pgsql-general@postgreSQL.org
[mailto:owner-pgsql-general@postgreSQL.org]On Behalf Of Bruce Momjian
Sent: Thursday, June 24, 1999 8:40 AM
To: John Huttley
Cc: pgsql-general
Subject: Re: [GENERAL] Limitation
I've been attempting to port applications from Pervasive SQL to PG.
Pervasive is interesting because it runs on top of Btrieve. This allows
legacy apps and SQL systems to co-exist. It's quirky and buggy, but it's
better than PG because it can do the following.
1. Have more than 7 fields on a btree index
We have never understood why someone would want an index with more than
seven columns.
--
Bruce Momjian | http://www.op.net/~candle
On Thu, 24 Jun 1999, Bruce Momjian wrote:
# > 1. Have more than 7 fields on a btree index
#
# We have never understood why someone would want an index with more than
# seven columns.
Honestly, I think it's the same reason someone would want to
insert data as a single string, then create a view that does substrings to
produce the appearance of a multi-column table (which, I assume is then
selected on from certain fields, causing table scans). Sometimes, people
try to wedge efficiency into an inefficient design, which rarely works.
My approach here would be to substr before insertion, then make a view to
concatenate the results.
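Dustin's suggestion, sketched in SQL. The table and column names here are made up for illustration (a subset of John's fields), and the widths are taken from the substr calls in his view:

```sql
-- Store the components as real columns, split before insertion...
create table inmaster_split (
    process_id    char(1),
    subprocess_id char(2),
    substrate_id  char(1),
    length_id     char(2),
    width_id      char(2),
    thickness_id  char(2),
    face_id       char(3),
    facefinish_id char(1),
    back_id       char(3)
);

-- ...and reconstruct the original code in a view for the legacy apps.
create view inmaster_code as
select process_id || subprocess_id || substrate_id || length_id ||
       width_id || thickness_id || face_id || facefinish_id || back_id
       as stock_code
from inmaster_split;
```

With the components stored as real columns, selections on individual fields can use ordinary per-column indexes instead of forcing table scans through substr.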
--
SA, beyond.com My girlfriend asked me which one I like better.
pub 1024/3CAE01D5 1994/11/03 Dustin Sallings <dustin@spy.net>
| Key fingerprint = 87 02 57 08 02 D0 DA D6 C8 0F 3E 65 51 98 D8 BE
L_______________________ I hope the answer won't upset her. ____________
The answer is that you are thinking of indexes in the context of fast lookup
on fields.
However, my requirement comes from a need to ensure that every single field
in the table forms a unique combination. (Not my design, I might add.)
Now you know why; can you append it to the TODO list?
On thinking about it, I wonder if the same field can appear more than
once in an index. Now that would be weird!
1. Have more than 7 fields on a btree index
We have never understood why someone would want an index with more than
seven columns.
--
Bruce Momjian | http://www.op.net/~candle
Your idea could not be made to work in the actual environment.
Everyone is ignoring the point of this.
PG doesn't do something(s) that a second-rate commercial db can. And that
db does it very nicely. That's the point.
I'm looking for a hacker to say 'That's interesting, I'll have a look at
it sometime.'
Anybody interested?
Regards
John,
Can't you do this check in a trigger? Have a select count(*) with a
where clause so that all columns are checked against the row being
edited/inserted. If the count is not zero then reject the
update/insert.
If you have one index which is for 7 fields (I guess the ones that
differ most often) then the select should be quite fast.
NB I have not written any postgresql triggers so I am not certain of
what is possible in them. But you could definitely do this in the
various dbms I have written triggers for.
I realise this will be slow to write the code for so it would probably
be best to script the trigger definitions.
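A sketch of David's trigger idea, with a hypothetical table "inmaster" and columns c1..c3 standing in for the real fields. Note this uses PL/pgSQL syntax from releases well after the 6.5 under discussion; the 6.5-era trigger syntax differed.

```sql
-- Reject any insert/update whose full combination of columns
-- already exists in the table (an emulated multi-column unique
-- constraint; extend the where clause to all columns).
create function check_unique_combo() returns trigger as $$
begin
    if exists (select 1 from inmaster
               where c1 = new.c1 and c2 = new.c2 and c3 = new.c3) then
        raise exception 'duplicate combination of fields';
    end if;
    return new;
end;
$$ language plpgsql;

create trigger inmaster_unique
    before insert or update on inmaster
    for each row execute procedure check_unique_combo();
```

As David notes, an ordinary index on the most selective seven (or fewer) of the columns keeps the existence check fast.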
I came to SQL from a background in DataFlex and Btrieve, so I understand
many of the problems. We had similar indexes in some db's and it took a
while to get people used to designing without indexes and then adding them
for speed.
Dave
John Huttley wrote:
The answer is that you are thinking of indexes in the context of fast lookup
on fields.
However, my requirement comes from a need to ensure that every single field
in the table forms a unique combination. (Not my design, I might add.)
Now you know why; can you append it to the TODO list?
On thinking about it, I wonder if the same field can appear more than
once in an index. Now that would be weird!
1. Have more than 7 fields on a btree index
We have never understood why someone would want an index with more than
seven columns.
--
Bruce Momjian | http://www.op.net/~candle
--
David Warnock
Sundayta Ltd
That's a good idea!
I'll have a go and try it out.
However the 7 field limitation still needs resolving.
Regards
John
-----Original Message-----
From: David Warnock <david@sundayta.co.uk>
To: John Huttley <john@mwk.co.nz>
Cc: pgsql-general <pgsql-general@postgreSQL.org>
Date: Friday, 25 June 1999 10:33
Subject: Re: [GENERAL] Limitation
John,
Can't you do this check in a trigger? Have a select count(*) with a
where clause so that all columns are checked against the row being
edited/inserted. If the count is not zero then reject the
update/insert.
If you have one index which is for 7 fields (I guess the ones that
differ most often) then the select should be quite fast.
NB I have not written any postgresql triggers so I am not certain of
what is possible in them. But you could definitely do this in the
various dbms I have written triggers for.
I realise this will be slow to write the code for so it would probably
be best to script the trigger definitions.
I came to SQL from a background in DataFlex and Btrieve, so I understand
many of the problems. We had similar indexes in some db's and it took a
while to get people used to designing without indexes and then adding them
for speed.
Dave
John Huttley wrote:
The answer is that you are thinking of indexes in the context of fast
lookup on fields.
However, my requirement comes from a need to ensure that every single
field in the table forms a unique combination. (Not my design, I might add.)
Now you know why; can you append it to the TODO list?
On thinking about it, I wonder if the same field can appear more than
once in an index. Now that would be weird!
1. Have more than 7 fields on a btree index
We have never understood why someone would want an index with more than
seven columns.
--
Bruce Momjian | http://www.op.net/~candle
--
David Warnock
Sundayta Ltd
On Fri, 25 Jun 1999, John Huttley wrote:
Then make a trigger. Your attitude towards this project would
make sense if you were a paying customer. If you want to use postgres,
then use it; nobody gets paid any less if you don't. If you want
structural changes in the database to accommodate a bad design, then you're
free to make them, you have the source.
# The answer is that you are thinking as indexes in the context of fast lookup
# on fields.
#
# However my requirement comes from a need to ensure that every single field
# in the
# table forms a unique combination. (not my design, I might add)
#
# Now you know why, can you append it to the TODO list?
#
# On thinking about it, I wonder if the same field can appear more than
# once in an index. Now that would be weird!
#
#
#
# >> 1. Have more than 7 fields on a btree index
# >
# >We have never understood why someone would want an index with more than
# >seven columns.
# >
# >--
# > Bruce Momjian | http://www.op.net/~candle
#
#
#
#
--
SA, beyond.com My girlfriend asked me which one I like better.
pub 1024/3CAE01D5 1994/11/03 Dustin Sallings <dustin@spy.net>
| Key fingerprint = 87 02 57 08 02 D0 DA D6 C8 0F 3E 65 51 98 D8 BE
L_______________________ I hope the answer won't upset her. ____________
The answer is that you are thinking of indexes in the context of fast lookup
on fields. However, my requirement comes from a need to ensure that every
single field in the table forms a unique combination. (not my design, I
might add)
Ooow, that is a good point. Unique requires a unique index.
Now you know why, can you append it to the TODO list?
Can I get someone else to agree it should be added?
On thinking about it, I wonder if the same field can appear more than
once in an index. Now that would be weird!
I bet it can.
--
Bruce Momjian | http://www.op.net/~candle
maillist@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026
-----Original Message-----
From: Dustin Sallings <dustin@spy.net>
To: John Huttley <john@mwk.co.nz>
Cc: Bruce Momjian <maillist@candle.pha.pa.us>; pgsql-general
<pgsql-general@postgreSQL.org>
Date: Friday, 25 June 1999 10:55
Subject: Re: [GENERAL] Limitation
Then make a trigger.
Because I didn't think of it. Mind in a rut, I suppose.
Your attitude towards this project would
make sense if you were a paying customer. If you want to use postgres,
then use it, nobody gets paid any less if you don't. If you want
structural changes in the database to accommodate a bad design, then you're
free to make them, you have the source.
PG 6.5 is now so _good_. The previously serious limits of table locking, no
subselects and no currency type are history.
It's so tempting to treat it as Oracle-lite that when I hit a design
limitation it's a bit of a shock.
Hackers need input from users, else they will never know what real world
problems are occurring.
I'd love to tear in there and change things, but I'm not a systems level
programmer, I'm an engineer doing application coding. Understanding 'High
Concurrency B-trees', SQL grammar and semantics etc. is not a thing that
amateurs are equipped to do. The announcement for 6.5 implied much the same
thing: 'mastery over the inherited code base' --- after years of work. In
fact, looking at the Hackers mailing list shows that most of the heavy work
is done by about 4 people. Actually there are things in libpq -large object
support- I will investigate. That's very much nibbling around the edge.
So let's have a look at everything again. Consider it a user's wish list for
7.0.
1. The 7 field index limit. Doubtless someone made a decision back in the
dark ages that no-one would ever need more than that.
2. Rule plan overflows. Maybe fixing this is just changing a #define to
allocate more space.
3. Parametrised stored procedures that return record sets. This is a
significant item. Commercial db's can do it. Porting modern applications to
PG will require it.
4. Unlimited tuple size. I see that this is already on the list.
So let's see what happens.
Regards
On Fri, 25 Jun 1999, John Huttley wrote:
Heh, that sounds kinda pleasant...it's so good you mistook it for
a commercial RDBMS. :)
# most of the heavy work is done by about 4 people. Actually there are
# things in libpq -large object support- I will investigate. Thats very
# much nibbling around the edge.
I've often thought about doing some work on the BLOB support, the
backend storage parts, but for my main project that would use them, I've
got pretty far into splitting and encoding the data. That's actually how
we do it in our Sybase databases because BLOBs are just too hard to deal
with as regards replication, etc...
--
SA, beyond.com My girlfriend asked me which one I like better.
pub 1024/3CAE01D5 1994/11/03 Dustin Sallings <dustin@spy.net>
| Key fingerprint = 87 02 57 08 02 D0 DA D6 C8 0F 3E 65 51 98 D8 BE
L_______________________ I hope the answer won't upset her. ____________
So let's have a look at everything again. Consider it a user's wish list for
7.0.
1. The 7 field index limit. Doubtless someone made a decision back in the
dark ages that no-one would ever need more than that.
2. Rule plan overflows. Maybe fixing this is just changing a #define to
allocate more space.
This is on the 6.6 item list.
3. Parametrised stored procedures that return record sets. This is a
significant item. Commercial db's can do it. Porting modern applications to
PG will require it.
4. Unlimited tuple size. I see that this is already on the list.
So is this one.
--
Bruce Momjian | http://www.op.net/~candle
maillist@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026