Postgres insert performance and storage requirement compared to Oracle
Hello Experts,
My application uses Oracle DB, and makes use of OCI interface.
I have been able to develop similar interface using postgreSQL library.
However, in a few of my tests the results for PostgreSQL have not been
encouraging.
My questions/scenarios are:
1. How does PostgreSQL perform when inserting data into an indexed
(type: btree) table? Is it true that as you add indexes to a table, the
performance deteriorates significantly, whereas Oracle does not show that
much of a performance decrease? I have tried almost all the postgreSQL
performance tips available. I want very good "insert" performance (with
indexes); "select" performance is not that important at this point.
2. What are the average storage requirements of postgres compared to
Oracle? I inserted up to 1 million records. The storage requirement of
postgreSQL is almost double that of Oracle.
Thanks in anticipation.
On Mon, 2010-10-25 at 11:12 -0700, Divakar Singh wrote:
My questions/scenarios are:
1. How does PostgreSQL perform when inserting data into an indexed
(type: btree) table? Is it true that as you add the indexes on a table,
the performance deteriorates significantly whereas Oracle does not show
that much performance decrease. I have tried almost all postgreSQL
performance tips available. I want to have very good "insert" performance
(with indexes), "select" performance is not that important at this point
of time.
Did you test?
2. What are the average storage requirements of postgres compared to
Oracle? I
inserted upto 1 million records. The storage requirement of postgreSQL
is almost
double than that of Oracle.
What was your table structure?
Joshua D. Drake
--
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt
On Mon, Oct 25, 2010 at 12:12 PM, Divakar Singh <dpsmails@yahoo.com> wrote:
Hello Experts,
My application uses Oracle DB, and makes use of OCI interface.
I have been able to develop similar interface using postgreSQL library.
However, I have done some tests but results for PostgreSQL have not been
encouraging for a few of them.
Tell us more about your tests and results please.
My questions/scenarios are:
1. How does PostgreSQL perform when inserting data into an indexed
(type: btree) table? Is it true that as you add the indexes on a table,
the performance deteriorates significantly whereas Oracle does not show
that much performance decrease. I have tried almost all postgreSQL
performance tips available. I want to have very good "insert" performance
(with indexes), "select" performance is not that important at this point
of time.
-- Did you test?
Yes, the performance was comparable when using a SQL procedure. However,
when I used libpq, PostgreSQL performed very badly. There was also some
difference in environment between these two tests, but I am assuming
libpq vs. SQL was the real cause. Or was it something else?
2. What are the average storage requirements of postgres compared to
Oracle? I inserted up to 1 million records. The storage requirement of
postgreSQL is almost double that of Oracle.
-- What was your table structure?
Some 10-12 columns (e.g. 2 timestamp, 4 int and 4 varchar), with 5
indexes on varchar and int fields, including 1 implicit index because of
the PK.
Joshua D. Drake
On Mon, Oct 25, 2010 at 2:12 PM, Divakar Singh <dpsmails@yahoo.com> wrote:
1. How does PostgreSQL perform when inserting data into an indexed (type:
btree) table? Is it true that as you add the indexes on a table, the
performance deteriorates significantly whereas Oracle does not show that
much performance decrease. I have tried almost all postgreSQL performance
tips available. I want to have very good "insert" performance (with
indexes), "select" performance is not that important at this point of time.
I don't claim to have any experience with Oracle, but this boast
smells fishy. See for example Figure 3-2 (pp. 57-58) in "The Art of
SQL", where the author presents simple charts showing the performance
impact upon INSERTs of adding indexes to a table in Oracle and MySQL:
they're both in the same ballpark, and the performance impact is
indeed significant. As Joshua Drake suggests, table schemas and test
results would help your case.
Josh
Storage test was simple, but the data (seconds taken) for the INSERT
test, PG vs. Oracle, for 1 to 5 indexes was:

Indexes   PG   Oracle
1         25   33
2         30   43
3         37   50
4         42   65
5         45   68

Rows inserted: 100,000
Above results show good INSERT performance of PG when using SQL
procedures. But performance when I use the C++ lib is very bad. I did
that test some time back, so I do not have the data for it right now.
________________________________
From: Scott Marlowe <scott.marlowe@gmail.com>
To: Divakar Singh <dpsmails@yahoo.com>
Cc: pgsql-performance@postgresql.org
Sent: Mon, October 25, 2010 11:56:27 PM
Subject: Re: [PERFORM] Postgres insert performance and storage requirement
compared to Oracle
On Mon, 2010-10-25 at 11:36 -0700, Divakar Singh wrote:
68
Rows inserted: 100,000
Above results show good INSERT performance of PG when using SQL
procedures. But performance when I use C++ lib is very bad. I did that
test some time back so I do not have data for that right now.
This is interesting, are you using libpq or libpqXX?
Joshua D. Drake
Hi Joshua,
I have only been using libpq.
Is libpqxx better?
Is there any notable facility in libpqxx which could aid in fast inserts
or better performance in general?
Best Regards,
Divakar
________________________________
From: Joshua D. Drake <jd@commandprompt.com>
To: Divakar Singh <dpsmails@yahoo.com>
Cc: Scott Marlowe <scott.marlowe@gmail.com>; pgsql-performance@postgresql.org
Sent: Tue, October 26, 2010 12:08:52 AM
Subject: Re: [PERFORM] Postgres insert performance and storage requirement
compared to Oracle
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
On 10-10-25 02:31 PM, Divakar Singh wrote:
My questions/scenarios are:
1. How does PostgreSQL perform when inserting data into an indexed
(type: btree) table? Is it true that as you add the indexes on a table,
the performance deteriorates significantly whereas Oracle does not show
that much performance decrease. I have tried almost all postgreSQL
performance tips available. I want to have very good "insert" performance
(with indexes), "select" performance is not that important at this point
of time.
-- Did you test?
Yes. the performance was comparable when using SQL procedure. However,
When I used libpq, PostgreSQL performed very bad. There was some
difference in environment also between these 2 tests, but I am assuming
libpq vs SQL was the real cause. Or it was something else?
So you're saying that when you load the data with psql it loads fine, but
when you load it using libpq it takes much longer?
How are you using libpq?
-Are you opening and closing the database connection between each insert?
-Are you doing all of your inserts as one big transaction, or a
transaction per insert?
-Are you using prepared statements for your inserts?
-Are you using the COPY command to load your data, or the INSERT command?
-Are you running your libpq program on the same server as PostgreSQL?
-How is your libpq program connecting to PostgreSQL? Is it using ssl?
Some 10-12 columns ( like 2 timestamp, 4 int and 4 varchar), with 5
indexes on varchar and int fields including 1 implicit index coz of PK.
If you run "VACUUM VERBOSE tablename" on the table, what does it say?
You also don't mention which version of PostgreSQL you're using.
Answers:
How are you using libpq?
-Are you opening and closing the database connection between each insert?
[Need to check, will come back on this]
-Are you doing all of your inserts as one big transaction, or a
transaction per insert?
[Answer: for the C++ program, one insert per transaction in PG as well as
Oracle. But in the stored proc, I think both use only one transaction for
all inserts]
-Are you using prepared statements for your inserts?
[Need to check, will come back on this]
-Are you using the COPY command to load your data or the INSERT command?
[No]
-Are you running your libpq program on the same server as postgresql?
[Yes]
-How is your libpq program connecting to postgresql, is it using ssl?
[No]
If you run "VACUUM VERBOSE tablename" on the table, what does it say?
[Need to check, will come back on this]
You also don't mention which version of PostgreSQL you're using.
[Latest, 9.x]
Best Regards,
Divakar
________________________________
From: Steve Singer <ssinger@ca.afilias.info>
To: Divakar Singh <dpsmails@yahoo.com>
Cc: jd@commandprompt.com; pgsql-performance@postgresql.org
Sent: Tue, October 26, 2010 12:16:46 AM
Subject: Re: [PERFORM] Postgres insert performance and storage requirement
compared to Oracle
On 10-10-25 02:31 PM, Divakar Singh wrote:
My questions/scenarios are:
1. How does PostgreSQL perform when inserting data into an indexed
(type: btree) table? Is it true that as you add the indexes on a table,
the performance deteriorates significantly whereas Oracle does not show
that much performance decrease. I have tried almost all postgreSQL
performance tips available. I want to have very good "insert" performance
(with indexes), "select" performance is not that important at this point
of time.
-- Did you test?
Yes. the performance was comparable when using SQL procedure. However,
When I used libpq, PostgreSQL performed very bad. There was some
difference in environment also between these 2 tests, but I am assuming
libpq vs SQL was the real cause. Or it was something else?
So you're saying that when you load the data with psql it loads fine, but
when you load it using libpq it takes much longer?
How are you using libpq?
-Are you opening and closing the database connection between each insert?
-Are you doing all of your inserts as one big transaction, or a
transaction per insert?
-Are you using prepared statements for your inserts?
-Are you using the COPY command to load your data, or the INSERT command?
-Are you running your libpq program on the same server as PostgreSQL?
-How is your libpq program connecting to PostgreSQL? Is it using ssl?
Some 10-12 columns ( like 2 timestamp, 4 int and 4 varchar), with 5
indexes on varchar and int fields including 1 implicit index coz of PK.
If you run "VACUUM VERBOSE tablename" on the table, what does it say?
You also don't mention which version of PostgreSQL you're using.
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
Profiling could tell you where the time is lost and where your program is
spending it. Having experience with both Oracle and Postgres, I don't
feel that there is much of a difference in insert speed. I am not using
C++; I am using scripting languages like Perl and PHP and, as far as
inserts go, I don't see much of a difference. I have an application which
inserts approximately 600,000 records per day into a PostgreSQL 9.0.1
database, in chunks of up to 60,000 records every hour. The table is
partitioned and there are indexes on the underlying partitions. I haven't
noticed any problems with inserts. Also, if I use "copy" instead of the
"insert" command, it is even faster. In addition, note that PostgreSQL
doesn't support index-organized tables.
--
Mladen Gogala
Sr. Oracle DBA
1500 Broadway
New York, NY 10036
(212) 329-5251
http://www.vmsinfo.com
The Leader in Integrated Media Intelligence Solutions
On Mon, Oct 25, 2010 at 11:39:30AM -0700, Divakar Singh wrote:
Thanks Ray,
Already seen that, but it does not say anything about storage
requirements compared to Oracle. I find it takes twice the space of
Oracle.
Best Regards,
Divakar
________________________________
From: Ray Stell <stellr@cns.vt.edu>
To: Divakar Singh <dpsmails@yahoo.com>
Sent: Tue, October 26, 2010 12:05:23 AM
Subject: Re: [PERFORM] Postgres insert performance and storage requirement
compared to Oracle

On Mon, Oct 25, 2010 at 11:12:40AM -0700, Divakar Singh wrote:
2. What are the average storage requirements of postgres compared to Oracle? I
inserted up to 1 million records. The storage requirement of postgreSQL
is almost double that of Oracle.
there's a fine manual:
http://www.postgresql.org/docs/9.0/interactive/storage.html
Maybe compare to oracle's storage documentation:
http://download.oracle.com/docs/cd/E11882_01/server.112/e17118/sql_elements001.htm#SQLRF30020
http://download.oracle.com/docs/cd/E11882_01/server.112/e17120/schema007.htm#ADMIN11622
I don't believe for a second the byte count is double in pg, but that's
just a religious expression; I've never counted.
On Mon, Oct 25, 2010 at 12:36 PM, Divakar Singh <dpsmails@yahoo.com> wrote:
Storage test was simple, but the data (seconds taken) for INSERT test for PG vs Oracle for 1, 2, 3,4 and 5 indexes was:
PG:
25
30
37
42
45
Oracle:
33
43
50
65
68
Rows inserted: 100,000
Above results show good INSERT performance of PG when using SQL procedures. But performance when I use C++ lib is very bad. I did that test some time back so I do not have data for that right now.
So, assuming I wanted to reproduce your results, can you provide a
self-contained test case that shows these differences? I have always
gotten really good performance using libpq myself, so I'm looking for
what it is you might be doing differently from me that would make it
so slow.
On October 25, 2010 11:36:24 am Divakar Singh wrote:
Above results show good INSERT performance of PG when using SQL procedures.
But performance when I use C++ lib is very bad. I did that test some time
back so I do not have data for that right now.
Wrap it in a transaction.
On Mon, Oct 25, 2010 at 2:12 PM, Divakar Singh <dpsmails@yahoo.com> wrote:
Hello Experts,
My application uses Oracle DB, and makes use of OCI interface.
I have been able to develop similar interface using postgreSQL library.
However, I have done some tests but results for PostgreSQL have not been
encouraging for a few of them.

My questions/scenarios are:
1. How does PostgreSQL perform when inserting data into an indexed (type:
btree) table? Is it true that as you add the indexes on a table, the
performance deteriorates significantly whereas Oracle does not show that
much performance decrease. I have tried almost all postgreSQL performance
tips available. I want to have very good "insert" performance (with
indexes), "select" performance is not that important at this point of time.

2. What are the average storage requirements of postgres compared to Oracle?
I inserted up to 1 million records. The storage requirement of postgreSQL is
almost double that of Oracle.
Thanks in anticipation.
I ran the following tests with libpqtypes. While you probably won't end
up using libpqtypes, it's illustrative to mention it because it's
generally the easiest way to get data into postgres and by far the
fastest (excluding COPY). Source code follows after the sig (I banged it
out quite quickly, so it's messy!) :-). I am not seeing your results.
Via libpqtypes, inserting begin..insert..(repeat 1000000x)..commit:
local workstation: 2m24s
remote server: 8m8s

Via libpqtypes, but stacking an array and unstacking it on the server
(this could be optimized further by using local prepare):
local workstation: 43s (io bound)
remote server: 29s (first million)
remote server: 29s (second million)
create index (1.8s) remote
remote server: 33s (third million, w/index)

Obviously the insert-at-a-time tests are network bound. Throw a couple of
indexes in there and you should see some degradation, but nothing too
terrible.
merlin
libpqtypes.esilo.com
ins1.c (insert at a time):

#include <stdio.h>
#include "libpq-fe.h"
#include "libpqtypes.h"

#define INS_COUNT 1000000

int main()
{
	int i;
	PGconn *conn = PQconnectdb("host=devdb dbname=postgres port=8071");
	if(PQstatus(conn) != CONNECTION_OK)
	{
		printf("bad connection\n");
		return -1;
	}

	PQtypesRegister(conn);

	PQexec(conn, "begin");
	for(i=0; i<INS_COUNT; i++)
	{
		PGint4 a=i;
		PGtext b = "some_text";
		PGtimestamp c;
		PGbytea d;

		d.len = 8;
		d.data = b;
		c.date.isbc = 0;
		c.date.year = 2000;
		c.date.mon = 0;
		c.date.mday = 19;
		c.time.hour = 10;
		c.time.min = 41;
		c.time.sec = 6;
		c.time.usec = 0;
		c.time.gmtoff = -18000;

		PGresult *res = PQexecf(conn,
			"insert into ins_test(a,b,c,d) values(%int4, %text, %timestamptz, %bytea)",
			a, b, &c, &d);
		if(!res)
		{
			printf("got %s\n", PQgeterror());
			return -1;
		}
		PQclear(res);
	}
	PQexec(conn, "commit");
	PQfinish(conn);
	return 0;
}
ins2.c (array stack/unstack):

#include <stdio.h>
#include "libpq-fe.h"
#include "libpqtypes.h"

#define INS_COUNT 1000000

int main()
{
	int i;
	PGconn *conn = PQconnectdb("host=devdb dbname=postgres port=8071");
	PGresult *res;
	if(PQstatus(conn) != CONNECTION_OK)
	{
		printf("bad connection\n");
		return -1;
	}

	PQtypesRegister(conn);

	/* register the table's row type so rows can be sent as an array */
	PGregisterType type = {"ins_test", NULL, NULL};
	PQregisterComposites(conn, &type, 1);

	PGparam *p = PQparamCreate(conn);
	PGarray arr;
	arr.param = PQparamCreate(conn);
	arr.ndims = 0;

	for(i=0; i<INS_COUNT; i++)
	{
		PGint4 a=i;
		PGtext b = "some_text";
		PGtimestamp c;
		PGbytea d;
		PGparam *t = PQparamCreate(conn);

		d.len = 8;
		d.data = b;
		c.date.isbc = 0;
		c.date.year = 2000;
		c.date.mon = 0;
		c.date.mday = 19;
		c.time.hour = 10;
		c.time.min = 41;
		c.time.sec = 6;
		c.time.usec = 0;
		c.time.gmtoff = -18000;

		PQputf(t, "%int4 %text %timestamptz %bytea", a, b, &c, &d);
		PQputf(arr.param, "%ins_test", t);
	}

	if(!PQputf(p, "%ins_test[]", &arr))
	{
		printf("putf failed: %s\n", PQgeterror());
		return -1;
	}
	res = PQparamExec(conn, p, "insert into ins_test select (unnest($1)).*", 1);
	if(!res)
	{
		printf("got %s\n", PQgeterror());
		return -1;
	}
	PQclear(res);
	PQfinish(conn);
	return 0;
}
On Mon, Oct 25, 2010 at 4:28 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
I ran the following tests with libpqtypes. While you probably won't end
up using libpqtypes, it's illustrative to mention it because it's
generally the easiest way to get data into postgres and by far the
fastest (excluding COPY). Source code follows after the sig (I banged it
out quite quickly, so it's messy!) :-). I am not seeing your results.
I had a really horrible bug in there -- leaking a param inside the array
push loop. Cleaning it up dropped another 5 seconds or so from the 4th
million inserted to the remote server! Using local prepare
(PQspecPrepare), probably another second or two could be shaved off.
PGparam *t = PQparamCreate(conn);   /* created once, outside the loop */

for(i=0; i<INS_COUNT; i++)
{
	PGint4 a=i;
	PGtext b = "some_text";
	PGtimestamp c;
	PGbytea d;

	d.len = 8;
	d.data = b;
	c.date.isbc = 0;
	c.date.year = 2000;
	c.date.mon = 0;
	c.date.mday = 19;
	c.time.hour = 10;
	c.time.min = 41;
	c.time.sec = 6;
	c.time.usec = 0;
	c.time.gmtoff = -18000;

	PQputf(t, "%int4 %text %timestamptz %bytea", a, b, &c, &d);
	PQputf(arr.param, "%ins_test", t);
	PQparamReset(t);                /* reuse instead of leaking a param */
}
merlin
Hi Merlin,
Thanks for your quick input.
Well, one difference worth mentioning: I am inserting each row in a
separate transaction, due to the design of my program.
-Divakar
________________________________
From: Merlin Moncure <mmoncure@gmail.com>
To: Divakar Singh <dpsmails@yahoo.com>
Cc: pgsql-performance@postgresql.org
Sent: Tue, October 26, 2010 2:21:02 AM
Subject: Re: [PERFORM] Postgres insert performance and storage requirement
compared to Oracle
On Tue, Oct 26, 2010 at 7:44 AM, Divakar Singh <dpsmails@yahoo.com> wrote:
Hi Merlin,
Thanks for your quick input.
Well 1 difference worth mentioning:
I am inserting each row in a separate transaction, due to design of my
program.
Well, that right there is going to define your application performance.
You have basically three major issues: postgresql executes each query
synchronously through the protocol, transaction overhead, and i/o issues
coming from per-transaction sync. libpq supports asynchronous queries,
but only from the client's point of view, so that only helps if you have
non-trivial work to do setting up each query. The database is inherently
capable of doing what you want it to do... you may just have to rethink
certain things if you want to unlock the true power of postgres.
You have several broad areas of attack:
*) Client side: use prepared queries (PQexecPrepared), possibly
asynchronously (PQsendQueryPrepared). Reasonably you can expect a 5-50%
speedup if not i/o bound.
*) Stage data to a temp table: temp tables are not wal logged or synced.
Periodically they can be flushed to a permanent table. Possible data
loss.
*) Relax the sync policy (synchronous_commit/fsync) -- be advised these
settings are dangerous.
*) Multiple client writers: as long as you are not i/o bound, you will
see big improvements in tps from multiple clients.
*) Stage/queue application data before inserting it: requires retooling
the application, but you can see an orders-of-magnitude jump in insert
performance.
merlin
temp tables are not wal logged or
synced. Periodically they can be flushed to a permanent table.
What do you mean by "Periodically they can be flushed to a permanent
table"? Just doing

insert into tabb select * from temptable

or using a proper, per-temporary-table command?
On Tue, Oct 26, 2010 at 11:08 AM, Leonardo Francalanci <m_lists@yahoo.it> wrote:
temp tables are not wal logged or
synced. Periodically they can be flushed to a permanent table.

What do you mean with "Periodically they can be flushed to a permanent
table"? Just doing

insert into tabb select * from temptable
Yup, that's exactly what I mean -- this will give you more uniform
insert performance (your temp table doesn't even need indexes). Every N
records (say 10,000) you send to the permanent table and truncate the
temp table. Obviously, this is a more fragile approach, so weigh the
pros/cons carefully.
merlin