Out of memory error on huge resultset
I'm selecting a huge ResultSet from our database (about one million rows,
with one of the fields being varchar(500)) and I get an out-of-memory error
from Java.
If the whole ResultSet gets stashed in memory, this isn't really surprising,
but I'm wondering why this happens (if it does), rather than a subset around
the current record being cached and other rows being retrieved as needed.
If it turns out that there are good reasons for it to all be in memory, then
my question is whether there is a better approach that people typically use
in this situation. For now, I'm simply breaking up the select into smaller
chunks, but that approach won't be satisfactory in the long run.
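The chunked workaround described above might look roughly like this (connection details, table, and column names are hypothetical). Note that each chunk is still fully buffered in memory, and OFFSET gets slower as it grows, which is part of why this doesn't scale:

```java
// Hypothetical sketch of the LIMIT/OFFSET chunking workaround.
import java.sql.*;

public class ChunkedSelect {
    public static void main(String[] args) throws SQLException {
        Connection con = DriverManager.getConnection(
            "jdbc:postgresql://localhost/mydb", "user", "password");
        int chunkSize = 10000;
        long offset = 0;
        while (true) {
            PreparedStatement ps = con.prepareStatement(
                "SELECT id, big_field FROM big_table ORDER BY id LIMIT ? OFFSET ?");
            ps.setInt(1, chunkSize);
            ps.setLong(2, offset);
            ResultSet rs = ps.executeQuery();
            int rows = 0;
            while (rs.next()) {
                rows++;
                // process rs.getString("big_field") here
            }
            rs.close();
            ps.close();
            if (rows < chunkSize) break;  // short chunk means we hit the end
            offset += rows;
        }
        con.close();
    }
}
```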
Thanks
-Nick
--------------------------------------------------------------------------
Nick Fankhauser nickf@ontko.com Phone 1.765.935.4283 Fax 1.765.962.9788
Ray Ontko & Co. Software Consulting Services http://www.ontko.com/
Nick,
Use a cursor. The current driver doesn't support caching, and the backend
gives you everything you ask for; you can't just say you want a limited
set.
So if you use cursors, you can fetch a subset.
Dave
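A sketch of what Dave suggests: drive an explicit server-side cursor from JDBC so that only a bounded number of rows is in client memory at once (connection details and table name are hypothetical):

```java
// Hypothetical sketch: explicit DECLARE/FETCH cursor loop from JDBC.
import java.sql.*;

public class CursorFetch {
    public static void main(String[] args) throws SQLException {
        Connection con = DriverManager.getConnection(
            "jdbc:postgresql://localhost/mydb", "user", "password");
        con.setAutoCommit(false);  // a cursor must live inside a transaction
        Statement st = con.createStatement();
        st.execute("DECLARE big_cur CURSOR FOR SELECT * FROM big_table");
        boolean more = true;
        while (more) {
            // each FETCH pulls at most 1000 rows into memory
            ResultSet rs = st.executeQuery("FETCH 1000 FROM big_cur");
            more = false;
            while (rs.next()) {
                more = true;
                // process the row here
            }
            rs.close();
        }
        st.execute("CLOSE big_cur");
        con.commit();
        con.close();
    }
}
```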
Nick,
This has been discussed on this list many times before, but the short
answer is that that is how the postgres server handles queries. If you
issue a query, the server will return the entire result (try the same
query in psql and you will have the same problem). To work around this
you can use explicit cursors (see the DECLARE CURSOR, FETCH, and MOVE
SQL commands for postgres).
thanks,
--Barry
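In plain SQL, the commands Barry mentions look roughly like this (table name hypothetical; a cursor must be used inside a transaction block):

```sql
BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM big_table;
FETCH 100 FROM big_cur;   -- returns the next 100 rows
MOVE 1000 IN big_cur;     -- skips ahead 1000 rows without returning them
FETCH 100 FROM big_cur;
CLOSE big_cur;
COMMIT;
```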
Barry,
Is that true?
I created a table with one varchar(500) column and inserted one million rows
of 10-20 characters. The JDBC query 'select * from a' gets an 'out of
memory' error, but psql does not.
I inserted 8 million rows and psql still works fine (slowly, but it works).
The C library has an 'execute query' without a fetch; in JDBC, execute fetches
all rows, and that is the problem. I think executeQuery should only prepare the
query, and the fetch (ResultSet.next or ...) should fetch only fetchSize rows.
I am not sure, but I think this is a problem with JDBC, not PostgreSQL.
Hackers?
Does psql fetch all rows, and if not, how many?
Can I change the fetch size in psql?
CURSOR, FETCH and MOVE aren't a solution:
if I use JDBC in a third-party IDE, I can't force that approach.
regards
Check out:
http://www.mysql.com/doc/en/MySQL-PostgreSQL_features.html
MySQL AB compares MySQL with PostgreSQL.
Quoted from one page
Because we couldn't get vacuum() to work reliable with PostgreSQL 7.1.1,
we haven't been able to generate a --fast version of the benchmarks yet
(where we would have done a vacuum() at critical places in the benchmark
to get better performance for PostgreSQL). We will do a new run of the
benchmarks as soon as the PostgreSQL developers can point out what we
have done wrong or have fixed vacuum() so that it works again.
and from another.
Drawbacks with PostgreSQL compared to MySQL Server:
VACUUM makes PostgreSQL hard to use in a 24/7 environment.
They also state that they have a more sophisticated ALTER TABLE...
The only usable feature in their ALTER TABLE that doesn't (yet) exist in
PostgreSQL is changing column order (ok, the ORDER BY in table creation
could be nice too), and that's still almost purely cosmetic. Anyway, I could
have used that command yesterday. Could this be added to pgsql?
MySQL supports data compression between front and back ends. This could be
easily implemented, or is it already supported?
I think all the other statements were misleading in the sense that they
compared their newest product with PostgreSQL 7.1.1.
There's also following line:
PostgreSQL currently offers the following advantages over MySQL Server:
After which there's only one empty line.
Note that because we know the MySQL road map, we have included in the
following table the version when MySQL Server should support this
feature. Unfortunately we couldn't do this for
previous comparisons, because we don't know the PostgreSQL roadmap.
They could be provided one... ;-)
Upgrading MySQL Server is painless. When you are upgrading MySQL Server,
you don't need to dump/restore your data, as you have to do with most
PostgreSQL upgrades.
Ok... this is true, but not so hard - yesterday I installed 7.3b2 onto my
linux box.
Of course PostgreSQL isn't yet as fast as it could be. ;)
--
Antti Haapala
On Fri, 2002-10-11 at 09:20, Antti Haapala wrote:
Check out:
http://www.mysql.com/doc/en/MySQL-PostgreSQL_features.html
MySQL AB compares MySQL with PostgreSQL.
I wouldn't look too far into these at all. I've tried to get
' " as identifier quote (ANSI SQL) ' corrected on the crash-me pages for
us a couple of times (they say we don't support it for some reason).
I've not looked, but I thought 7.1 supported rename table as well.
Anyway, the max table row length was wrong for 7.1 too, unless I'm
confused as to what a blob is (are text and varchar blobs -- what about
your own 10Mb fixed-length datatype -- how about a huge array of
integers, if the previous are considered blobs?)
--
Rod Taylor
On 11 Oct 2002 at 16:20, Antti Haapala wrote:
Check out:
http://www.mysql.com/doc/en/MySQL-PostgreSQL_features.html
Well, I guess there are many threads on this. You can dig around the archives.
Upgrading MySQL Server is painless. When you are upgrading MySQL Server,
you don't need to dump/restore your data, as you have to do with most
PostgreSQL upgrades.Ok... this is true, but not so hard - yesterday I installed 7.3b2 onto my
linux box.
Well, that remains a fair point. Imagine a 100GB database on a 150GB disk array:
how do you dump and reload? In-place conversion of data is an absolutely
necessary feature, and it's already on the TODO.
Of course PostgreSQL isn't yet as fast as it could be. ;)
Check a few posts I have made in the last three weeks. You will find that
postgresql is fast enough to surpass mysql in what are considered mysql
strongholds. Of course it's not a runaway win, but for sure, postgresql is not
slow.
As for the vacuum thing, I have written an autovacuum daemon that can
automatically vacuum databases depending upon their activity. Check it out at
gborg.postgresql.org. (I didn't intend this as an advertisement for myself, but
it looks like one.)
Let the thread rest. Postgresql certainly needs a marketing hand, but refuting
claims in that article is not the best way to start. I guess most hackers
would agree with this.
Bye
Shridhar
--
Cat, n.: Lapwarmer with built-in buzzer.
No disadvantage; in fact, that is what we would like to do:
setFetchSize(size) turns on cursor support; otherwise, fetch normally.
Dave
What would be the disadvantage of making the JDBC driver use a
cursor under the covers (always)? Is it significantly slower or more
resource-intensive than fetching all the data at once? Certainly it seems
like it would save memory in some cases.
Aaron
On Fri, 2002-10-11 at 08:20, Antti Haapala wrote:
Quoted from one page
Because we couldn't get vacuum() to work reliable with PostgreSQL 7.1.1,
I have little respect for the MySQL advocacy guys. They purposely
spread misinformation. They always compare their leading edge alpha
software against Postgres' year+ old stable versions. In some cases,
I've seen them compare their alpha (4.x) software against 7.0. Very sad
that these people can't even attempt to be honest.
In the case above, since they are comparing 4.x, they should be
comparing it to 7.x at least. It's also very sad that their testers
don't seem to even understand something as simple as cron. If they
can't understand cron, I fear any conclusions
they arrive at throughout their testing are destined to be
incorrect or invalid.
MySQL supports data compression between front and back ends. This could be
easily implemented, or is it already supported?
Mammoth has such a feature...or at least it's been in development for a
while. If I understood them correctly, it will be donated back to core
sometime in the 7.5 or 7.7 series. Last I heard, their results were
absolutely wonderful.
I think all the other statements were misleading in the sense, that they
compared their newest product with PostgreSQL 7.1.1.
Ya, historically, they go out of their way to ensure unfair
comparisons. I have no respect for them.
They could be provided one... ;-)
In other words, they need a list of features that they can one day hope
to add to MySQL.
Upgrading MySQL Server is painless. When you are upgrading MySQL Server,
you don't need to dump/restore your data, as you have to do with most
PostgreSQL upgrades.
Ok... this is true, but not so hard - yesterday I installed 7.3b2 onto my
linux box.
Of course PostgreSQL isn't yet as fast as it could be. ;)
I consider this par for the course. This is something I've had to do
with Sybase, Oracle and MSSQL.
Greg
Dave Cramer <Dave@micro-automation.net> wrote:
No disadvantage, in fact that is what we would like to do.
setFetchSize(size) turns on cursor support, otherwise fetch normally
Dave
On Fri, 2002-10-11 at 10:30, Aaron Mulder wrote:
What would be the disadvantage of making the JDBC driver use a
cursor under the covers (always)? Is it significantly slower or more
resource-intensive than fetching all the data at once? Certainly it seems
like it would save memory in some cases.
Aaron
Well, using a cursor-based result set *always* is not going to work. Cursors
are not held over a commit, whereas a buffered result set is. Hence the
setFetchSize suggestion.
Regards,
Michael Paesold
Michael,
You are correct: commit will effectively close the cursor.
It is the only way to deal with large result sets, however.
Dave
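For illustration, here is roughly how cursor-backed fetching would have to be used from client code once the driver implements it (it had not been implemented at the time of this thread, and connection details and table name are hypothetical). Because COMMIT closes the underlying cursor, autocommit must stay off for the lifetime of the ResultSet:

```java
// Hypothetical sketch of cursor-backed fetching via setFetchSize.
import java.sql.*;

public class FetchSizeSketch {
    public static void main(String[] args) throws SQLException {
        Connection con = DriverManager.getConnection(
            "jdbc:postgresql://localhost/mydb", "user", "password");
        con.setAutoCommit(false);   // required: COMMIT would close the cursor
        Statement st = con.createStatement();
        st.setFetchSize(1000);      // hint: buffer roughly 1000 rows at a time
        ResultSet rs = st.executeQuery("SELECT * FROM big_table");
        while (rs.next()) {
            // process the row; only ~fetchSize rows are held in memory
        }
        rs.close();
        st.close();
        con.commit();               // safe now that the ResultSet is finished
        con.close();
    }
}
```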
Dave Cramer <Dave@micro-automation.net> writes:
No disadvantage, in fact that is what we would like to do.
setFetchSize(size) turns on cursor support, otherwise fetch
normally
I love PostgreSQL's default behaviour: it's great to know
that a server-side cursor resource is probably not hanging around for
simple queries.
Nic
Just so you know, this isn't implemented yet. My reference to
setFetchSize was just a suggestion as to how to implement it.
Dave
Show quoted text
On Fri, 2002-10-11 at 10:52, nferrier@tapsellferrier.co.uk wrote:
Dave Cramer <Dave@micro-automation.net> writes:
No disadvantage, in fact that is what we would like to do.
setFetchSize(size) turns on cursor support, otherwise fetch
normallyI love PostgreSQL's default behaviour: it's great to know
that a server side cursor resource is probably not hanging around for
simple querys.Nic
---------------------------(end of broadcast)---------------------------
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to majordomo@postgresql.org so that your
message can get through to the mailing list cleanly
Nic,
That would be great!
Dave
On Fri, 2002-10-11 at 10:57, nferrier@tapsellferrier.co.uk wrote:
Dave Cramer <Dave@micro-automation.net> writes:
Just so you know this isn't implemented yet. My reference to
setFetchSize below was just a suggestion as to how to implement it.
I'll do it if you like.
Nic
Rod Taylor wrote:
On Fri, 2002-10-11 at 09:20, Antti Haapala wrote:
Check out:
http://www.mysql.com/doc/en/MySQL-PostgreSQL_features.html
MySQL AB compares MySQL with PostgreSQL.
I wouldn't look too far into these at all. I've tried to get
' " as identifier quote (ANSI SQL) ' corrected on the crash-me pages for
us a couple of times (they say we don't support it for some reason).
It's once again the typical MySQL propaganda. As usual they compare a
future version of MySQL against an old release of PostgreSQL. And they
just compare on buzzword level.
Do their foreign keys have referential actions and deferrability? Is
log-based master-slave replication all there can be?
And surely we do have something that compares to *their* roadmap; that
they cannot find it is because it's named HISTORY.
Jan
--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#================================================== JanWieck@Yahoo.com #
Dave Cramer <Dave@micro-automation.net> writes:
Just so you know this isn't implemented yet. My reference to
setFetchSize below was just a suggestion as to how to implement it
I'll do it if you like.
Nic
At 08:27 AM 10/11/2002, snpe wrote:
Barry,
Is it true ?
I create table with one column varchar(500) and enter 1 milion rows with
length 10-20 character.JDBC query 'select * from a' get error 'out of
memory', but psql not.
I insert 8 milion rows and psql work fine yet (slow, but work)
The way the code works in JDBC is, in my opinion, a little poor but
possibly mandated by JDBC design specs.
It reads the entire result set from the database backend and caches it in a
horrible Vector (which should really be a List and which should at least
make an attempt to get the # of rows ahead of time to avoid all the
resizing problems).
Then, it doles it out from memory as you go through the ResultSet with the
next() method.
I would have hoped (but was wrong) that it streamed - WITHOUT LOADING THE
WHOLE THING - through the result set as each row is returned from the
backend, thus ensuring that you never use much more memory than one line.
EVEN IF you have to keep the connection locked.
The latter is what I expected it to do. The former is what it does. So, it
necessitates you creating EVERY SELECT query which you think has more than
a few rows (or which you think COULD have more than a few rows, "few" being
defined by our VM memory limits) into a cursor-based query. Really kludgy. I
intend to write a class to do that for every SELECT query for me automatically.
Cheers,
Doug
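A wrapper class along the lines Doug describes might be sketched as follows (entirely hypothetical; all names are made up): it turns an arbitrary SELECT into a DECLARE/FETCH loop so memory use stays bounded by the batch size.

```java
// Hypothetical sketch of a wrapper that runs any SELECT through a
// server-side cursor, fetching a bounded batch of rows at a time.
import java.sql.*;

public class CursorSelect {
    private final Statement st;
    private final String cursorName;
    private final int fetchSize;
    private ResultSet batch;

    public CursorSelect(Connection con, String select, int fetchSize)
            throws SQLException {
        con.setAutoCommit(false);   // the cursor must live inside a transaction
        this.st = con.createStatement();
        this.cursorName = "cs_" + System.nanoTime();  // unique-ish cursor name
        this.fetchSize = fetchSize;
        st.execute("DECLARE " + cursorName + " CURSOR FOR " + select);
    }

    /** Fetch the next batch; an empty batch means the cursor is exhausted. */
    public ResultSet nextBatch() throws SQLException {
        if (batch != null) batch.close();
        batch = st.executeQuery("FETCH " + fetchSize + " FROM " + cursorName);
        return batch;
    }

    public void close() throws SQLException {
        if (batch != null) batch.close();
        st.execute("CLOSE " + cursorName);
        st.close();
    }
}

// Usage sketch:
//   CursorSelect cs = new CursorSelect(con, "SELECT * FROM big_table", 1000);
//   boolean more = true;
//   while (more) {
//       ResultSet rs = cs.nextBatch();
//       more = false;
//       while (rs.next()) { more = true; /* process row */ }
//   }
//   cs.close();
```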
This really is an artifact of the way that postgres gives us the data.
When you query the backend you get *all* of the results in the query,
and there is no indication of how many results you are going to get. In
simple selects it would be possible to get some idea by using
count(field), but this wouldn't work nearly enough times to make it
useful. So that leaves us with using cursors, which still won't tell you
how many rows you are getting back, but at least you won't have the
memory problems.
This approach is far from trivial, which is why it hasn't been
implemented as of yet. Keep in mind that result sets support things like
move(n), first(), and last(), the last of which will be the trickiest. Not
to mention updateable result sets.
As it turns out there is a mechanism to get to the end (MOVE 0 in a
cursor), but that behaviour is currently being considered a bug.
Dave
Currently there is a TODO list item to have MOVE 0 not position to the
end of the cursor.
Moving to the end of the cursor is useful; can we keep the behaviour and
change the syntax to MOVE END, or just leave it the way it is?
Dave