How long will the query take?
I ran a query out of pgAdmin, and (as I expected) it took a long
time. In fact, I did not let it finish. I stopped it after a little
over an hour.
I'm using 8.4.2 on a Mac with a 2.4GHz processor and 2GB of RAM.
My question is: is there a way to tell how close the query is to being
finished? It would be a great pity if the query would have finished
in the 10 seconds after I quit it, but I had no way of telling.
As a postscript, I would add that the query was undoubtedly too
ambitious. I have a reduced set version which I will run shortly.
But I am still curious to know if there is a way to tell how much time
is left.
Thanks,
John
In response to John Gage :
No, not really. But you can (and should) run EXPLAIN <your query> to
obtain the execution plan for that query, and you can show us this plan
(and the table definitions for all included tables). Maybe someone is able
to tell you what you can do to speed up your query.
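For example (table and column names here are just placeholders; plain
EXPLAIN only plans the query without executing it, so it returns
immediately even when the query itself would run for hours):

```sql
-- Show the planner's estimated cost and row count without running the query:
EXPLAIN SELECT * FROM big_table WHERE some_column = 42;

-- EXPLAIN ANALYZE actually executes the query and reports measured timings,
-- so only use it when you can afford to wait for the query to finish:
EXPLAIN ANALYZE SELECT * FROM big_table WHERE some_column = 42;
```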
And yes, have you tuned your postgresql.conf?
Regards, Andreas
--
Andreas Kretschmer
Contact: Heynitz: 035242/47150, D1: 0160/7141639 (more: -> header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99
In response to "A. Kretschmer" <andreas.kretschmer@schollglas.com>:
To piggyback on this ... EXPLAIN _is_ the way to know how long your
query will take, but keep in mind it's only an _estimate_.
That said, in my experience EXPLAIN is pretty accurate 90% of the
time, as long as you run ANALYZE frequently enough.
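To spell that out: "analyze" here refers to the ANALYZE command, which
refreshes the statistics the planner bases its estimates on. A minimal
sketch (the table name is a placeholder):

```sql
-- Refresh planner statistics for one table:
ANALYZE my_table;

-- Or refresh statistics for every table in the current database:
ANALYZE;
```

On 8.3 and later, autovacuum normally runs ANALYZE for you, but a manual
run after a bulk load is still a good idea.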
--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/
Bill Moran <wmoran@potentialtech.com> wrote:
As far as I know, EXPLAIN _can't_ say how long a query will take; it
returns only a COST, not a TIME.
Or can you tell me how long this query will take?
test=# explain select * from foo;
QUERY PLAN
-------------------------------------------------------
Seq Scan on foo (cost=0.00..34.00 rows=2400 width=4)
(1 row)
Okay, it's a really small table and a really simple plan ... but
imagine I have a table with 100 million rows and a) a slow disk or b)
a fast SSD.
You can't say how long the query will run, not even an estimate, okay?
Andreas
--
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect. (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly." (unknown)
Kaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°
In response to Andreas Kretschmer <akretschmer@spamfence.net>:
> As far as I know, EXPLAIN _can't_ say how long a query will take; it
> returns only a COST, not a TIME.
Correct.
> Or can you tell me how long this query will take?
>
> test=# explain select * from foo;
>                       QUERY PLAN
> -------------------------------------------------------
>  Seq Scan on foo  (cost=0.00..34.00 rows=2400 width=4)
> (1 row)
EXPLAIN ANALYZE a few other queries of various complexity, and I'll be
able to translate that estimate to a time.
No, it's not 100% accurate, but (as I stated earlier) in my experience,
it gives you a pretty good idea.
> Okay, it's a really small table and a really simple plan ... but
> imagine I have a table with 100 million rows and a) a slow disk or b)
> a fast SSD.
You're absolutely correct, and that's something that I should not have
omitted from my previous response. Translating the cost into a time
estimate is highly hardware-dependent, and not 100% accurate, so run
some tests to get an idea of what your cost -> time ratio is, and take
those cost estimates with a grain of salt.
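As a sketch of that calibration, using the same toy table foo (the
timings in the comment are made-up, illustrative numbers, not real
output):

```sql
-- EXPLAIN ANALYZE runs the query for real and prints both the planner's
-- estimate and the measured time, along the lines of:
--   Seq Scan on foo  (cost=0.00..34.00 rows=2400 width=4)
--                    (actual time=0.015..1.2 rows=2400 loops=1)
--   Total runtime: 1.9 ms
-- Dividing actual milliseconds by estimated cost units gives a rough
-- cost -> time ratio for similar plans on the same hardware.
EXPLAIN ANALYZE SELECT * FROM foo;
```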
--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/
I will report back on this and attempt to give the particulars. It
will take 24 hours due to other time commitments.
Thank you very much for explaining :) this to me.
When I used only the first 10,000 rows of the 100,000+ rows in the
original table (one of the two tables I was working with), I got the
result I wanted in 10 minutes, which was probably 80% of what I
wanted. Nevertheless, I do not want to fly blind in the future.
John
On Mar 29, 2010, at 7:10 PM, Bill Moran wrote:
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general