A Question About Insertions -- Performance

Started by Clay Luther · over 22 years ago · 5 messages · general
#1Clay Luther
claycle@cisco.com

I am doing some large-dataset performance tests with 7.3.4b2 today and I noticed an interesting phenomenon. My shared memory buffers are set at 128MB. Peak postmaster usage appears to be around 90MB.

My test app performs inserts across 4 related tables, each set of 4 inserts representing a single theoretical "device" object. I report how many "devices" I have inserted per second, for example...

[...]
41509 devices inserted, 36/sec
[1 second later]
41544 devices inserted, 35/sec
[...]

(to be clear, 41509 devices inserted equals 166036 actual, related rows in the db)

Performance follows an odd "peak and valley" pattern. It will start out with a high insertion rate (commits are performed after each "device set"), then after a few thousand device sets, performance will drop to 1 device/second for about 5 seconds. Then it will slowly ramp up over the next 10 seconds to /just below/ the previous high water mark. A few thousand inserts later, it will drop to 1 device/second again for 5 seconds, then slowly ramp up to just below the last high water mark.

Ad infinitum.
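For context, the shape of the test loop is roughly this (a Python sketch with made-up table names, and an in-memory SQLite database standing in for PostgreSQL; the real schema is more involved):

```python
import sqlite3

# SQLite stands in for PostgreSQL here; the four table names are
# placeholders -- the real schema has more columns and constraints.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
for t in ("dev", "port", "line", "cfg"):
    cur.execute(f"CREATE TABLE {t} (device_id INTEGER)")

inserted = 0
for device_id in range(500):
    # one "device set" = one row in each of the 4 related tables
    for t in ("dev", "port", "line", "cfg"):
        cur.execute(f"INSERT INTO {t} VALUES (?)", (device_id,))
    conn.commit()  # commit after each device set, as in the test app
    inserted += 1

rows = sum(cur.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
           for t in ("dev", "port", "line", "cfg"))
print(f"{inserted} devices inserted = {rows} rows")
```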

I am wondering:

1) What am I seeing here? This is on a 4-processor machine and postmaster has a CPU all to itself, so I ruled out processor contention.

2) Is there more performance tuning I could do to flatten this out, or is this completely normal? Postmaster never goes over 100MB of the 128MB shared memory I've allocated to it, and according to <mumble mumble webpage mumble>, this is just about perfect for shared memory settings (100 to 120% of the high-water mark).

Thanks.

---
Clay
Cisco Systems, Inc.
claycle@cisco.com
(972) 813-5004


#2Tom Lane
tgl@sss.pgh.pa.us
In reply to: Clay Luther (#1)
Re: A Question About Insertions -- Performance

"Clay Luther" <claycle@cisco.com> writes:

> Performance follows an odd "peak and valley" pattern. It will start
> out with a high insertion rate (commits are performed after each
> "device set"), then after a few thousand device sets, performance will
> drop to 1 device/second for about 5 seconds. Then it will slowly ramp
> up over the next 10 seconds to /just below/ the previous high water
> mark. A few thousand inserts later, it will drop to 1 device/second
> again for 5 seconds, then slowly ramp up to just below the last high
> water mark.

My best guess is that the dropoffs occur because of background checkpoint
operations, but there's not enough info here to prove it. Four inserts
per second seems horrendously slow in any case.

What are the table schemas (in particular, are there any foreign-key
constraints to check)?

Are you doing any vacuuming in this sequence? If so where?

What's the disk hardware like? Do you have WAL on its own disk drive?

regards, tom lane

PS: pgsql-performance would be a better list for this sort of issue.

#3Vivek Khera
khera@kcilink.com
In reply to: Clay Luther (#1)
Re: A Question About Insertions -- Performance

"TL" == Tom Lane <tgl@sss.pgh.pa.us> writes:

TL> My best guess is that the dropoffs occur because of background checkpoint
TL> operations, but there's not enough info here to prove it. Four inserts
TL> per second seems horrendously slow in any case.

I'll concur with this diagnosis. I've been doing a bunch of
performance testing with various parameter settings, and the
checkpoint frequency is a big influence. For me, by making the
checkpoints occur as far apart as possible, the overall speed
improvement was incredible. Try bumping the number of
checkpoint_segments in your postgresql.conf file. For my tests I
compared the default 3 with 50 segments.

Check your logs to see if you are checkpointing too frequently.
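Concretely, the relevant lines in postgresql.conf look something like this (50 segments is what I tested against the default of 3; checkpoint_timeout is shown at its default for reference):

```
# postgresql.conf -- WAL checkpoint tuning (values from my tests)
checkpoint_segments = 50    # default 3; more segments = checkpoints further apart
checkpoint_timeout = 300    # seconds; a checkpoint is forced at least this often
```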

Another thing that *really* picks up speed is to batch your inserts into
transactions. I just altered an application yesterday that had a loop
like this:

foreach row fetched from table c:
    update table a where id = row.id
    update table b where id2 = row.id2
    send notice to id
end

There were several such loops going on for distinct sets of rows in
the same tables.

Changing the code so that each loop ran inside a transaction, committing
every 100 times through the loop, cut one large run from 2.5 hours down
to 1 hour, and another from 2 hours down to 40 minutes.

I had to put in a bunch of additional error checking and rollback
logic, but in the last two years none of those error conditions has
ever triggered, so I think I'm pretty safe even with having to redo up
to 100 records on a transaction error (i.e., it is unlikely to happen).
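The batching pattern looks roughly like this (a sketch in Python, with an in-memory SQLite database standing in for PostgreSQL; the table, row count, and batch size are made up for illustration, but the commit-every-N structure is the same with any driver that supports explicit transactions):

```python
import sqlite3

# SQLite stands in for PostgreSQL; the point is the transaction batching.
conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # we issue BEGIN/COMMIT ourselves
cur = conn.cursor()
cur.execute("CREATE TABLE a (id INTEGER PRIMARY KEY, val INTEGER)")

BATCH = 100
cur.execute("BEGIN")
for i in range(1050):
    cur.execute("INSERT INTO a (id, val) VALUES (?, ?)", (i, i * 2))
    if (i + 1) % BATCH == 0:
        cur.execute("COMMIT")   # one commit (one WAL flush) per 100 rows
        cur.execute("BEGIN")
cur.execute("COMMIT")           # flush the final partial batch

print(cur.execute("SELECT COUNT(*) FROM a").fetchone()[0])
```

On a transaction error you roll back and redo at most BATCH rows, which is the extra error-handling logic mentioned above.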

--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D. Khera Communications, Inc.
Internet: khera@kciLink.com Rockville, MD +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/

#4Bruce Momjian
bruce@momjian.us
In reply to: Vivek Khera (#3)
Re: A Question About Insertions -- Performance

Vivek Khera wrote:

"TL" == Tom Lane <tgl@sss.pgh.pa.us> writes:

TL> My best guess is that the dropoffs occur because of background checkpoint
TL> operations, but there's not enough info here to prove it. Four inserts
TL> per second seems horrendously slow in any case.

I'll concur with this diagnosis. I've been doing a bunch of
performance testing with various parameter settings, and the
checkpoint frequency is a big influence. For me, by making the
checkpoints occur as far apart as possible, the overall speed
improvement was incredible. Try bumping the number of
checkpoint_segments in your postgresql.conf file. For my tests I
compared the default 3 with 50 segments.

Check your logs to see if you are checkpointing too frequently.

That warning message is only in 7.4.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
#5Vivek Khera
khera@kcilink.com
In reply to: Bruce Momjian (#4)
Re: A Question About Insertions -- Performance

"BM" == Bruce Momjian <pgman@candle.pha.pa.us> writes:

Check your logs to see if you are checkpointing too frequently.

BM> That warning message is only in 7.4.

Yes, but the checkpoint activity is still logged. On my 7.2 system,
I'm checkpointing about every 1.5 minutes at peak with 3 checkpoint
segments. I think I can speed it up even more by increasing them.