indexes are fucked
Hi all,
I got another problem with postgres. This time it
refuses to use the indexes. Check this out:
siam_production=> \d render
                                    Table "public.render"
        Column        |            Type             |                Modifiers
----------------------+-----------------------------+--------------------------------------------------------
 id                   | integer                     | not null default nextval('public.render_id_seq'::text)
 shot_id              | integer                     | not null
 process              | character(3)                | not null
 person_id            | integer                     | not null
 session_xml          | text                        | not null
 guts_snapshot_id     | integer                     |
 layer                | text                        |
 render_path          | text                        | not null
 frames               | text                        | not null
 shot_index           | integer                     | not null
 timestamp            | timestamp without time zone | not null default now()
 layer_render_version | integer                     |
 num_frames           | integer                     |
 mean_render_time     | integer                     |
 stdev_render_time    | integer                     |
 min_render_time      | integer                     |
 max_render_time      | integer                     |
 failed_frames        | text                        |
 swapped_frames       | text                        |
 killed_frames        | text                        |
 status               | character varying(10)      |
 render_settings      | text                        |
 explicit_guts_log    | text                        |
 completed_frames     | integer                     |
 priority             | character varying(3)       |
 render_host          | character varying(10)      |
Indexes: render_pkey primary key btree (id),
         render_person_id_idx btree (person_id),
         render_shot_id_idx btree (shot_id)
Foreign Key constraints:
         $3 FOREIGN KEY (guts_snapshot_id) REFERENCES shot_snapshot(id)
            ON UPDATE CASCADE ON DELETE RESTRICT DEFERRABLE INITIALLY DEFERRED,
         $2 FOREIGN KEY (process) REFERENCES process_enum(code)
            ON UPDATE CASCADE ON DELETE RESTRICT DEFERRABLE INITIALLY DEFERRED,
         $1 FOREIGN KEY (shot_id) REFERENCES shot(id)
            ON UPDATE CASCADE ON DELETE RESTRICT DEFERRABLE INITIALLY DEFERRED
siam_production=> explain SELECT render.* FROM render WHERE person_id = 432;
                            QUERY PLAN
-----------------------------------------------------------------
 Seq Scan on render  (cost=0.00..39014.72 rows=27833 width=1493)
   Filter: (person_id = 432)
(2 rows)
siam_production=>
As you can see, there is an index on render.person_id,
but postgres is using a sequential scan. I have tried
*repeatedly* to reindex, analyze, drop & create the index,
vacuum, etc., to no avail. What is wrong? I need this
fixed ASAP. It's killing the performance.
btw, the same thing used to happen with
render_shot_id_idx, but after repeatedly doing
reindex, analyze, vacuum, drop & create index, etc. it
suddenly started to work.
Eugene
__________________________________________________
Do You Yahoo!?
Tired of spam? Yahoo! Mail has the best spam protection around
http://mail.yahoo.com
Dr NoName wrote:
Hi all,
I got another problem with postgres. This time it
refuses to use the indexes. Check this out:
If you would like help, first try posting queries and EXPLAIN ANALYZE dumps.
Also you may have better luck with the Performance list
(pgsql-performance@postgresql.org).
Best Wishes,
Chris Travers
Metatron Technology Consulting
On Tue, 2005-08-02 at 12:04, Dr NoName wrote:
Hi all,
I got another problem with postgres. This time it
refuses to use the indexes. Check this out:
[snip]
1: Please refrain from the f word. There are some kids in schools (not
university) reading this list. There's really no need.
Please post the output of
explain analyze <yourqueryhere>
On Tue, 2005-08-02 at 10:04 -0700, Dr NoName wrote:
I got another problem with postgres. This time it
refuses to use the indexes. Check this out:
[snip]
siam_production=> explain SELECT render.* FROM render
WHERE person_id = 432;
QUERY PLAN
-----------------------------------------------------------------
Seq Scan on render (cost=0.00..39014.72 rows=27833
width=1493)
Filter: (person_id = 432)
An explain analyze would be more informative, with and without
seqscan enabled.
What proportion of rows have this particular value of person_id?
Maybe you need to increase the statistics target of the column.
What is the output of these:
set enable_seqscan = off;
explain SELECT render.* FROM render WHERE person_id = 432;
set enable_seqscan = on;
explain SELECT render.* FROM render WHERE person_id = 432;
select count(*) from render;
select count(*) from render WHERE person_id = 432;
gnari
This is really the only thing I can think to suggest:
Have you tried 'SET enable_seqscan TO OFF;' and then tried the query
again? This happens to me now and then where an index is a lot faster
but the planner just doesn't want to use it. I've got an option in my
code to turn off 'enable_seqscan', perform the query, and turn it back
on for problem queries.
I'm still pretty new though so defer to anyone else's suggestions.
HTH
Madison
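The pattern Madison describes can be sketched as a plain psql session. This is an illustration only: the query, table, and column come from earlier in this thread, and the toggle biases the planner's costs rather than guaranteeing an index scan.

```sql
-- Force the planner away from sequential scans for one known-problematic
-- query, then restore the default. Wrapping it in a transaction keeps the
-- toggle close to the query it is meant for.
BEGIN;
SET enable_seqscan TO off;
SELECT render.* FROM render WHERE person_id = 432;
SET enable_seqscan TO on;
COMMIT;
```

SET LOCAL, where available, would scope the change to the transaction automatically, so a failure between the two SETs could not leave the session with seqscans disabled.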
Dr NoName wrote:
Hi all,
I got another problem with postgres. This time it
refuses to use the indexes. Check this out:
[snip]
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Madison Kelly
TLE-BU, The Linux Experience; Back Up
http://tle-bu.org
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
What is the output of these:
set enable_seqscan = off;
explain SELECT render.* FROM render WHERE person_id = 432;
                                                                QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using render_person_id_idx on render  (cost=0.00..108735.88 rows=27833 width=1493) (actual time=0.11..77.62 rows=5261 loops=1)
   Index Cond: (person_id = 432)
 Total runtime: 80.99 msec
(3 rows)
so it will use the index if I disable seq scan? wtf?
set enable_seqscan = on;
explain SELECT render.* FROM render WHERE person_id = 432;
                                                  QUERY PLAN
--------------------------------------------------------------------------------------------------------------
 Seq Scan on render  (cost=0.00..39014.72 rows=27833 width=1493) (actual time=7.11..743.55 rows=5261 loops=1)
   Filter: (person_id = 432)
 Total runtime: 747.42 msec
(3 rows)
select count(*) from render;
 count
--------
 236612
(1 row)
select count(*) from render WHERE person_id = 432;
 count
-------
  5261
(1 row)
thanks,
Eugene
siam_production=> explain analyze select * from render where person_id = 432;
                                                  QUERY PLAN
--------------------------------------------------------------------------------------------------------------
 Seq Scan on render  (cost=0.00..39014.72 rows=27833 width=1493) (actual time=7.11..743.55 rows=5261 loops=1)
   Filter: (person_id = 432)
 Total runtime: 747.42 msec
(3 rows)
thanks,
Eugene
--- Scott Marlowe <smarlowe@g2switchworks.com> wrote:
[snip]
On Tue, 2005-08-02 at 13:05, Dr NoName wrote:
siam_production=> explain analyze select * from render where person_id = 432;
                                                  QUERY PLAN
--------------------------------------------------------------------------------------------------------------
 Seq Scan on render  (cost=0.00..39014.72 rows=27833 width=1493) (actual time=7.11..743.55 rows=5261 loops=1)
   Filter: (person_id = 432)
 Total runtime: 747.42 msec
(3 rows)
Notice the disparity here? The query planner thinks that there's gonna
be 27833 rows returned, but there's only really 5261 being returned.
When's the last time you analyzed this table? And have you considered
running the pg_autovacuum daemon, which will vacuum and analyze for you
in the background?
On Tue, 2005-08-02 at 10:50 -0700, Dr NoName wrote:
What is the output of these:
set enable_seqscan = off;
explain SELECT render.* FROM render WHERE person_id = 432;
                                                                QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using render_person_id_idx on render  (cost=0.00..108735.88 rows=27833 width=1493) (actual time=0.11..77.62 rows=5261 loops=1)
   Index Cond: (person_id = 432)
 Total runtime: 80.99 msec
(3 rows)
so it will use the index if I disable seq scan? wtf?
Setting enable_seqscan to off artificially adds a
high fake cost factor to seqscans, so the planner will
not use them, unless there is no alternative.
This usually should not be done in production, but
can be useful for debugging.
Here we see that the planner estimated 27833 rows,
but actually only 5261 rows were retrieved.
Based on the high number of rows, a cost of 108735
was estimated.
set enable_seqscan = on;
explain SELECT render.* FROM render WHERE person_id = 432;
                                                  QUERY PLAN
--------------------------------------------------------------------------------------------------------------
 Seq Scan on render  (cost=0.00..39014.72 rows=27833 width=1493) (actual time=7.11..743.55 rows=5261 loops=1)
   Filter: (person_id = 432)
 Total runtime: 747.42 msec
(3 rows)
the seqscan is cheaper when a large enough proportion (maybe 5%)
of rows are retrieved, and indeed the cost is estimated at 39014.
try to increase the statistics target for this column:
ALTER TABLE render ALTER COLUMN person_id
    SET STATISTICS 1000;
ANALYZE render;
1000 is the maximum value, and probably overkill, but you might
start with that. If this helps, you can try lower values until you
find the lowest one that still suits your data. Usually, 200 is enough.
gnari
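As an editorial aside on "what is the current value": the per-column target and the statistics ANALYZE actually gathered can be inspected from the system catalogs. A sketch, using the table and column from this thread (catalog and view names as I recall them for this era of PostgreSQL; verify against your server's documentation):

```sql
-- Current per-column statistics target; -1 means the server-wide
-- default (10 in the 7.3 era) is in effect:
SELECT attname, attstattarget
FROM pg_attribute
WHERE attrelid = 'render'::regclass
  AND attname = 'person_id';

-- What ANALYZE collected for the column; a higher target yields longer
-- most_common_vals / most_common_freqs lists and usually a better
-- n_distinct estimate, which is what drives the row estimate:
SELECT n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'render'
  AND attname = 'person_id';
```

Comparing most_common_freqs before and after raising the target shows directly why the planner's 27833-row guess shrinks toward the real 5261.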
the seqscan is cheaper when a large enough proportion (maybe 5%)
of rows are retrieved, and indeed the cost is estimated at 39014
very good explanation. thank you.
try to increase statistics for this column:
ALTER TABLE render ALTER COLUMN person_id
SET STATISTICS 1000;
ANALYZE render;
1000 is the maximum value, and probably overkill, but you might
start with that. If this helps, you can try to lower values until
you find the lowest one that still suits your data. Usually, 200 is enough.
ok, we're getting into voodoo territory. What is this
"statistics"? How can I see what the current value is?
How can I estimate what's a "good" value? Where can I
find more information about this?
thanks,
Eugene
When's the last time you analyzed this table? And
a few hours before I posted this. vacuumdb --analyze
also runs every night.
have you considered
running the pg_autovacuum daemon, which will vacuum
and analyze for you
in the back ground?
We are using PostgreSQL 7.3.2, which doesn't have autovacuum.
Eugene
On Tue, 2005-08-02 at 13:26, Dr NoName wrote:
When's the last time you analyzed this table? And
a few hours before I posted this. vacuumdb --analyze
also runs every night.
Were there a lot of updates / deletes between when you ran analyze and
when you ran this query? If so, you might want to schedule more
frequent analyze runs and / or include them in whatever script is doing
the updating / deleting.
Also, you might want to look at tuning your database. I've found that
on machines that can cache most of their data sets, adjusting things
like effective_cache_size and random_page_cost makes a big difference.
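For example, the checks and adjustments Scott suggests might look like the session below. This is a sketch: the values are illustrative, not measured on Eugene's system, and should be tested per session before being made permanent in postgresql.conf.

```sql
-- Inspect the current planner settings:
SHOW effective_cache_size;   -- measured in disk pages in this era
SHOW random_page_cost;       -- default 4.0

-- For a database that is mostly cached in RAM, a lower random-access
-- penalty often lets the planner prefer index scans:
SET random_page_cost = 2;
EXPLAIN ANALYZE SELECT * FROM render WHERE person_id = 432;
```

If the index plan wins with the lower setting, the change can be promoted to the server configuration; if not, the statistics target is the more likely culprit.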
have you considered
running the pg_autovacuum daemon, which will vacuum
and analyze for you in the background?
We are using postgresql 7.3.2 which doesn't have
autovacuum.
Sad. I think you can use the pg_autovacuum from 7.4 on 7.3, though I've
not tried it.
Also, 7.3.2 is QUITE out of date. You should, at a minimum, be running
7.3.10. It's a straight-ahead, in-place upgrade, and I'm certain there
were bug fixes from 7.3.2 to 7.3.10 that you wouldn't want to run
without.
You might want to schedule analyzes to run every thirty minutes or every
hour.
On Tue, Aug 02, 2005 at 01:41:48PM -0500, Scott Marlowe wrote:
Also, you might want to look at tuning your database. I've found that
on machines that can cache most of their data sets, adjusting things
like effective_cache_size and random_page_cost makes a big difference.
Also, as Ragnar Hafstað suggested, consider increasing the statistics
target on the column in question. The row estimate (27833) was
over five times higher than the actual number of rows (5261),
resulting in an overinflated cost estimate for an index scan.
Increasing the statistics should help the planner make a more
accurate estimate. Here are some useful links:
http://www.postgresql.org/docs/7.3/static/performance-tips.html#USING-EXPLAIN
http://www.postgresql.org/docs/7.3/static/planner-stats.html
http://developer.postgresql.org/docs/postgres/planner-stats-details.html
The last link will be in the documentation for 8.1 when it's released,
but I think it largely applies to earlier versions as well.
--
Michael Fuhr
http://www.fuhr.org/~mfuhr/
Thanks Ragnar. That solved the problem. I still would
like some explanation about this voodoo. Most
importantly, how can I estimate a "good" statistics
number?
thanks,
Eugene
--- Dr NoName <spamacct11@yahoo.com> wrote:
[snip]
On 8/2/05, Dr NoName <spamacct11@yahoo.com> wrote:
Thanks Ragnar. That solved the problem. I still would
like some explanation about this voodoo. Most
importantly, how can I estimate a "good" statistics
number?
thanks,
Eugene
http://www.postgresql.org/docs/8.0/static/planner-stats.html
--
regards,
Jaime Casanova
(DBA: DataBase Aniquilator ;)
Language reflects the character of those who use it.
In this case, the language used also reflects on the community to which it
is directed.
I don't wish to be associated with a "professional" community, such as
Postgre, that holds such shallow values.
Bob Pawley
----- Original Message -----
From: "Jaime Casanova" <systemguards@gmail.com>
To: "Dr NoName" <spamacct11@yahoo.com>
Cc: "Ragnar Hafstað" <gnari@simnet.is>; <pgsql-general@postgresql.org>
Sent: Tuesday, August 02, 2005 12:57 PM
Subject: Re: [GENERAL] indexes are fucked
[snip]
Bob Pawley wrote:
Language reflects the character of those who use it.
In this case, the language used also reflects on the community to which
it is directed.
I don't wish to be associated with a "professional" community, such as
Postgre, that holds such shallow values.
Well since there is no Postgre community I guess this isn't a problem...
I assume you are speaking to the colorful four letter word in the
subject. It would not have been my choice either and in general if you
review the lists you will see that swearing is rare.
However if you are not going to associate with a community because one
person chose to use a word you don't like then you are going to lead a
very lonely life.
Have you ever grepped for four letter words in Linux?
We are an Open Source community, we embrace many values. Some people
value things a little differently than others.
Sincerely,
Joshua D. Drake
Bob Pawley
----- Original Message -----
[snip]
--
Your PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240
PostgreSQL Replication, Consulting, Custom Programming, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/
On Tue, 2005-08-02 at 15:52, Joshua D. Drake wrote:
Bob Pawley wrote:
[snip]
What I found most interesting about his reply was that he couldn't be
bothered to take 5 whole seconds to change the very subject line he was
bitching about.
BINGO. Noticed that, too.
Guess all the energy was used getting up on the high horse.
-----Original Message-----
From: pgsql-general-owner@postgresql.org [mailto:pgsql-general-owner@postgresql.org] On Behalf Of Scott Marlowe
Sent: Tuesday, August 02, 2005 5:09 PM
To: Joshua D. Drake
Cc: Bob Pawley; Jaime Casanova; Dr NoName; Ragnar Hafstað; pgsql-general@postgresql.org
Subject: Re: [GENERAL] indexes are farked
On Tue, 2005-08-02 at 15:52, Joshua D. Drake wrote:
[snip]
What I found most interesting about his reply was that he couldn't be
bothered to take 5 whole seconds to change the very subject line he was
bitching about.
Scott Marlowe wrote:
You might want to schedule analyzes to run every thirty minutes or every
hour.
I doubt that is necessary or wise. Rerunning ANALYZE should only be
necessary when the distribution of your data changes significantly --
e.g. after a bulk load or deletion of a lot of content. IMHO, in most
circumstances, running ANALYZE once a day is more than sufficient.
-Neil