PostgreSQL + IDS load/latency help

Started by Loki · about 23 years ago · 5 messages · general
#1 Loki
loki@fatelabs.com


We have a PostgreSQL database holding over 500,000 events.
When the database held only a small number of events,
retrieving a specific event took just a few seconds.

However, as the database has grown to its current size, it
takes 15-20 seconds to get information back from a query.
We have tried everything, including vacuum, which someone
else recommended to me.

I noticed when running top on the system that the
PostgreSQL process eventually eats up ALL available RAM
(half a gig) and forces the system into swap. After
rebooting the machine, query times drop back to something
reasonable (obviously because the process hasn't yet
consumed all the memory).

Am I correct in saying that 500,000 events shouldn't be a
problem? Is there anyone out there with more than 500,000
events in their DB?

Can anyone recommend anything to try to rectify the
situation? 15-20 seconds to retrieve information from the
database is unacceptable and cannot work. Please advise.

* We are running Snort 1.9.1, storing events to a custom
PostgreSQL database. Dumping the database and creating a
fresh one every so often is also not an option.

For those needing more details on our setup: I am not
currently able to log into the system, but please email me
and I can provide more details offline.

Sincerely,

Eric Hines
Internet Warfare and Intelligence
Fate Research Labs
http://www.fatelabs.com

#2 Gavin M. Roy
gmr@justsportsusa.com
In reply to: Loki (#1)
Re: PostgreSQL + IDS load/latency help

Have you used "explain" on your queries to make sure they are not using
a seq scan? I've got tens of millions of records in my master database
and get one-second response times with the right indexes created.
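Concretely, the check looks something like this (the table and column
names here are illustrative stand-ins, not the poster's actual schema):

```sql
-- Show the planner's access plan for the slow query.
-- A "Seq Scan" node on a 500,000-row table usually means
-- no usable index exists for the WHERE clause.
EXPLAIN SELECT * FROM event WHERE signature = 1234;

-- If it is a sequential scan, index the filtered column
-- and re-run EXPLAIN; the plan should switch to an index scan.
CREATE INDEX event_signature_idx ON event (signature);
```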

Gavin

---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faqs/FAQ.html

#3 Dennis Gearon
gearond@cvc.net
In reply to: Loki (#1)
Re: [GENERAL] PostgreSQL + IDS load/latency help

what do you mean by 'events'?


#4 philo vivero
phiviv@hacklab.net
In reply to: Loki (#1)
Re: PostgreSQL + IDS load/latency help

> We have a PostgreSQL database full of over 500,000 events.
> When the database has a small number of events, to
> retrieve a specific event stored in that db takes a mere
> few seconds.
>
> However, as the database grew in size to where it is now,
> it takes over 15-20 seconds to get information back from a
> query to that database. We have tried everything,
> including vacuum, which someone else recommended to me.

Sometimes it's the simple things. Did you do an explain on the query to
see its access plan? I see nothing in your post about indexes.

> Am I correct in saying that 500,000 events shouldn't be a
> problem? Is there anyone out there with more than 500,000
> events in their DB than what we currently have?

With proper indexes, 500,000,000 "events" shouldn't be a problem.

--
pv
If this helped you, http://svcs.affero.net/rm.php?r=philovivero

#5 Tim Ellis
pvspam-postgres@hacklab.net
In reply to: Loki (#1)
Re: PostgreSQL + IDS load/latency help

On Thu, 2003-03-27 at 07:12, Loki wrote:

> We have a PostgreSQL database full of over 500,000 events.
> When the database has a small number of events, to
> retrieve a specific event stored in that db takes a mere
> few seconds.
>
> However, as the database grew in size to where it is now,
> it takes over 15-20 seconds to get information back from a
> query to that database. We have tried everything,
> including vacuum, which someone else recommended to me.

Sometimes it's the simple things. Did you do an explain on the query to
see its access plan? I don't see anything in your post about indexes.

> Am I correct in saying that 500,000 events shouldn't be a
> problem? Is there anyone out there with more than 500,000
> events in their DB than what we currently have?

PostgreSQL can give you speedy access into billions of rows of data
given proper indexing and maintenance. I'd like to see your query's
access plan.
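For example, something along these lines (the table and column names
are illustrative, since the poster's schema is custom):

```sql
-- Refresh planner statistics and reclaim dead-tuple space first.
VACUUM ANALYZE event;

-- EXPLAIN ANALYZE actually executes the query and reports the
-- plan together with real row counts and timings per node.
EXPLAIN ANALYZE
SELECT * FROM event WHERE timestamp > '2003-03-26';
```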

--
Tim Ellis
Senior Database Architect and author, tedia2sql (http://tedia2sql.tigris.org)
If this helped you, http://svcs.affero.net/rm.php?r=philovivero