Connections per second?
Hi,
I'm writing a small but must-be-fast CGI program that, for each hit it gets, reads an indexed table in a Postgres database and writes a log entry to a file based on the result. Any idea how many hits a second I can take before things start crashing, queuing up too much, etc.? And will Postgres be one of the first to fall? Do any of you think it can handle 2000 hits a second (what I think I could get at peak times), and what would it need to do so? Persistent connections? Are there any examples or old threads on writing a similar program in C with libpq?
Thanks,
Ale
--
Alejandro Fernandez
Electronic Group Interactive
--+34-65-232-8086--
Alejandro Fernandez <ale@e-group.org> writes:
Doing it as CGI is going to have two big performance penalties:
1) Kernel and system overhead for starting a new process per hit,
plus interpreter startup if you're using a scripting language
2) Overhead in Postgres for creating a database connection from scratch
Doing it in C will only eliminate the interpreter startup.
You really want a non-CGI solution (such as mod_perl) and you really
want persistent connections (Apache::DBI is one solution that works
with mod_perl). Java servlets with a connection pooling library would
also work.
-Doug
Try
http://www.sai.msu.su/~megera/postgres/pg-bench.pl
(change dbname first).
Here is data for my notebook (IBM ThinkPad T21, 256 MB RAM, Postgresql 7.2.1)
Testing empty loop speed ...
100000 iterations in 0.1 cpu+sys seconds (833333 per sec)
Testing connect/disconnect speed ...
2000 connections in 2.6 cpu+sys seconds (754 per sec)
Testing CREATE/DROP TABLE speed ...
1000 files in 0.7 cpu+sys seconds (1369 per sec)
Testing INSERT speed ...
500 rows in 0.2 cpu+sys seconds (2272 per sec)
Testing UPDATE speed ...
500 rows in 0.2 cpu+sys seconds (2272 per sec)
Testing SELECT speed ...
100 single rows in 0.1 cpu+sys seconds (1428.6 per sec)
Testing SELECT speed (multiple rows) ...
100 times 100 rows in 0.1 cpu+sys seconds (714.3 per sec)
I'd recommend using persistent connections for real-life web applications.
Oleg
On Tue, 23 Apr 2002, Alejandro Fernandez wrote:
Regards,
Oleg
_____________________________________________________________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83
Depending on the size of the table, how much RAM you have, and your OS, you may
find the entire table cached in RAM, which would be ideal. You should use
persistent connections if at all possible. You didn't mention what web server
you're using, but if it's Apache, you may want to write an Apache module that
maintains a persistent connection for each Apache child process; that will also
keep your program loaded in memory so it doesn't have to be reloaded on each
request. I would also be concerned about write speed to the log file; I'm not
sure where that will peak.
Hope this helps,
Wes Sheldahl
---------------------------(end of broadcast)---------------------------
TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org
And AOLserver is another. It lets you write modules in C, so you don't have to take the Tcl interpreter hit, small though it is.
- Ian
Doug McNaught <doug@wireboard.com> 04/23/02 09:16AM >>>
Hi,
Thanks for your replies.
I think persistence is what I'm looking for then! Do you have any pointers on where I can learn about making connections persistent? Yes, it is Apache. As for write speed, I think I'd need some pretty fast hard drives... maybe IDE drives with RAID 0.
So:
1) How do I make connections to PostgreSQL persistent?
2) How do I make all the other connections persistent too? I'd welcome a URL or anything else that might help (perhaps off-list if it's off-topic)!
Thanks,
Ale
On Tue, 23 Apr 2002 12:26:46 -0400
wsheldah@lexmark.com wrote:
Alejandro Fernandez <ale@e-group.org> writes:
I'm pretty sure Apache::DBI does this, in conjunction with mod_perl.
-Doug
I think it's simply impossible to have a persistent connection with CGI, since the program is started and exits for each HTTP request (or am I wrong?).
The only way to do that is either to develop an Apache module (which sounds like reinventing the wheel to me), or to use mod_perl or mod_php and the simple ready-to-use interfaces they provide.
In fact, whether running as a CGI introduces a big overhead depends on how heavy your "must be fast" program is relative to that overhead. The longer the execution time, the less the CGI approach will hurt performance.
That's my point of view; hope it helps.
Arnaud